The federal government has made substantial progress in financial management. If I were to summarize in just a few words the environment in 2007 compared with that before enactment of key financial management laws, I would say that financial management has gone from the backroom to the boardroom. There has been a cultural change in how financial management is viewed and carried out in the agencies and a recognition of the value and need for good financial management throughout government, which was not the case in 1990 when the Congress passed the CFO Act. Financial management systems and internal control have been strengthened. Generally accepted government accounting standards have been developed. For fiscal year 2006, 19 of 24 CFO Act agencies received clean audit opinions on their financial statements, up from just 6 for fiscal year 1996. While there has been marked progress in federal financial management, a number of challenges still remain, including transforming financial management and business practices at DOD, modernizing financial management systems, and building a financial management workforce for the future. Fully meeting these challenges will enable the federal government to provide the world-class financial management anticipated by the CFO Act and other management reform legislation. First, I would like to briefly highlight the legislative framework that governs federal financial management. The Congress has long recognized the importance of the federal government implementing strong financial management practices. Toward this end, the Congress has passed a series of management reform laws aimed at improving and providing a strong foundation for federal financial management. This series of legislation started with the Federal Managers’ Financial Integrity Act of 1982 (FMFIA), which the Congress passed to strengthen internal control and accounting systems throughout the federal government, among other purposes. 
In accordance with FMFIA, GAO has issued Standards for Internal Control in the Federal Government, which provides standards to help agency managers implement effective internal control, an integral part of improving financial management systems. While agencies had achieved some early success in identifying and correcting material internal control and accounting system weaknesses, their efforts to implement FMFIA had not produced the intended results. Therefore, the Congress passed additional management reform legislation to improve the general and financial management of the federal government. This legislation includes the (1) CFO Act of 1990, (2) Government Performance and Results Act of 1993 (GPRA), (3) Government Management Reform Act of 1994 (GMRA), (4) Federal Financial Management Improvement Act of 1996 (FFMIA), (5) Clinger-Cohen Act of 1996, (6) Accountability of Tax Dollars Act of 2002 (ATDA), and (7) Improper Payments Information Act of 2002 (IPIA). The CFO Act is the most comprehensive and far-reaching financial management improvement act since the Budget and Accounting Procedures Act of 1950. The CFO Act established a leadership structure, provided for long-range planning, required audited financial statements and modern financial systems, and strengthened accountability reporting for certain agencies. Three years later, the Congress enacted GPRA, which required certain agencies to develop strategic plans, set performance goals, and report annually on actual performance compared to goals. GPRA’s emphasis on performance management complements the concepts in the CFO Act. GPRA was followed by GMRA, which made permanent the pilot program in the CFO Act for annual audited agency-level financial statements, expanded this requirement to all CFO Act agencies, and established a requirement for the preparation and audit of governmentwide consolidated financial statements. 
In 1996, FFMIA built on the foundation laid by the CFO Act by reflecting the need for CFO Act agencies to have systems that can generate reliable, useful, and timely information with which to make fully informed decisions and to ensure accountability on an ongoing basis. The Clinger-Cohen Act of 1996 (also known as the Information Technology Management Reform Act of 1996) sets forth a variety of initiatives to support better decision making for capital investments in information technology, which has led to the development of the Federal Enterprise Architecture and better-informed capital investment and control processes within agencies and across government. ATDA required most executive agencies not otherwise required by statute or exempted by OMB to prepare annual audited financial statements and to submit such statements to the Congress and the Director of OMB. Finally, IPIA has increased visibility over improper payments by requiring executive agency heads, based on guidance from OMB, to identify programs and activities susceptible to significant improper payments, estimate amounts improperly paid, and report on the amounts of improper payments and their actions to reduce them. The combination of reforms ushered in by these laws, if successfully implemented, provides a solid foundation to improve the accountability of government programs and operations as well as to routinely produce valuable cost and operating performance information. The five key financial management improvements that we have noted from a governmentwide perspective are as follows. Achieving Cultural Change—We have seen true cultural change in how financial management is viewed. This has been accomplished through hard work by OMB and the agencies and continued strong support and oversight by the Congress. At the top level, federal financial management reform has gained momentum through the committed support of top federal leaders. 
For example, improved financial performance is one of the governmentwide initiatives in the President’s Management Agenda (PMA). Under this initiative, agency CFOs share responsibility—both individually and through the efforts of the CFO Council—for improving the financial performance of the government. The Executive Branch Management Scorecard, developed as part of the PMA, has been an effective tool to monitor progress and help drive much needed improvements. Establishing a Governmentwide Leadership Structure—The Joint Financial Management Improvement Program (JFMIP) Principals—the Secretary of the Treasury, the Director of OMB, the Director of OPM, and I, as Comptroller General—have provided leadership by holding periodic meetings that have resulted in unprecedented substantive deliberations and agreements focused on key reform issues such as improving accounting for and reporting on social insurance, accelerating issuance of audited agency financial statements, and advocating audit committees. GAO has led by example in this regard by establishing an audit advisory committee to help us oversee the effectiveness of our current financial reporting and audit processes. As established by the CFO Act, the Office of Federal Financial Management (OFFM), the OMB organization with governmentwide responsibility for federal financial management for executive agencies, has demonstrated leadership by undertaking a number of initiatives to improve financial management capabilities, ranging from requiring the use of commercial off-the-shelf financial systems to promoting cost accounting to improve the availability of management information for decision making. In addition to assessing the status of agencies’ progress in improving financial performance for the PMA, OFFM has also issued bulletins, circulars, and other guidance to provide a broad-based foundation for transforming agencies’ financial management operations. 
Strengthening Internal Control—In December 2004, OMB revised its Circular No. A-123, Management’s Responsibility for Internal Control, to provide guidance to federal managers on improving the accountability and effectiveness of federal programs and operations by establishing, assessing, correcting, and reporting on management controls. Requiring federal managers, at the executive level, to focus on internal control demonstrates a renewed emphasis on identifying and addressing internal control weaknesses. As we testified in 2005, many internal control problems have been identified and fixed, especially at the lower levels where internal control assessments were performed and managers could take focused actions to fix relatively simple problems. As a recent case in point, based on our 2006 assessment of high-risk programs, two programs previously designated as high risk, largely due to financial management weaknesses, were removed from the list. Agencies have also made progress in implementing processes and controls to identify, estimate, and reduce improper payments. After passage of IPIA, OMB established Eliminating Improper Payments in 2005 as a new program-specific initiative under the PMA. This separate PMA program initiative was established in this manner to ensure that agency managers are held accountable for meeting the goals of IPIA and are, therefore, dedicating the necessary attention and resources to meeting IPIA requirements. OMB also issued guidance in August 2006 to help clarify and update requirements to support governmentwide IPIA compliance. Improving Financial Management Systems and Operations—Since enactment of financial management reform legislation, federal financial management systems requirements have been developed for the core financial system; managerial cost system; and other administrative and programmatic systems, such as grants, property, revenue, travel, and loans, which are part of an overall financial management system. 
After the realignment of the JFMIP Program Management Office, OFFM has continued the practice of issuing these requirements. Beginning in 1999, OMB required agencies to purchase commercial off-the-shelf software that had been tested and certified by the federal government against the systems requirements that I just mentioned. With these requirements, the federal government has better defined the functionality needed in its financial management systems, which has helped the vendor community understand federal agencies’ needs. OMB continues to move forward on initiatives that support the PMA with the further development of the financial management line of business to promote leveraging shared service solutions to enhance the government’s performance and services. The financial management line of business initiative is modeled after the consolidation of agency payroll processing, in which the number of systems was dramatically reduced from 22 to 4. OMB, in conjunction with an interagency task force, estimated that these efforts could save billions of taxpayer dollars. Ultimately, this initiative is expected to (1) reduce the number of systems that each individual agency must support, (2) promote standardization, and (3) reduce the duplication of efforts. Preparing Auditable Financial Statements—Unqualified audit opinions for CFO Act agencies’ financial statements have grown from 6 in fiscal year 1996 to 19 in fiscal year 2006. Improvements in timeliness have been even more dramatic over the years. All 24 CFO Act agencies issued their audited financial statements within the accelerated reporting time frame, meeting OMB’s November 15, 2006, deadline, just 45 days after the close of the fiscal year. Just a few years ago, most considered this accelerated time frame unrealistic and unachievable. 
Another definitive example of progress made to date is the establishment of the Federal Accounting Standards Advisory Board (FASAB). In conjunction with the passage of the CFO Act, the OMB Director, the Secretary of the Treasury, and the Comptroller General established FASAB to develop accounting standards and principles for the newly required financial statements. The concepts and standards are the basis for OMB’s guidance to agencies on the form and content of their financial statements and for the government’s consolidated financial statements. FASAB is a 10-member advisory board composed of 4 knowledgeable individuals from government and 6 nonfederal members selected from the general financial community, the accounting and auditing community, and academia, which promulgates proposed accounting standards designed to meet the needs of federal agencies and other users of federal financial information. The mission of FASAB is to develop accounting standards after considering the financial and budgetary information needs of congressional oversight groups, executive agencies, and other users. These accounting and reporting standards are essential for public accountability and for an efficient and effective functioning of our democratic system of government. The standards developed by FASAB have been recognized by the American Institute of Certified Public Accountants as generally accepted accounting principles for federal entities. While there has been marked progress in federal financial management, a number of challenges still remain. The principal challenges remaining are (1) transforming financial management and business practices at DOD, (2) improving financial and performance reporting, (3) modernizing financial management systems, (4) tackling long-standing internal control weaknesses, (5) building a financial management workforce for the future, and (6) strengthening consolidated financial reporting. 
Fully meeting these challenges will enable the federal government to provide the world-class financial management anticipated by the CFO Act and other management reform legislation. While there continues to be much focus on the agency and governmentwide audit opinions, getting a clean audit opinion, though important in itself, is not the end goal. The end goal is the establishment of a fully functioning CFO operation that includes (1) modern financial management systems that provide reliable, timely, and useful information to support day-to-day decision making and oversight, and for the systematic measurement of performance; (2) sound internal controls that safeguard assets and help ensure proper accountability; and (3) a cadre of highly qualified CFOs and supporting staff. DOD’s long-standing financial and business management difficulties are pervasive, complex, and deeply rooted in virtually all business operations throughout the department. Resolution of these serious problems is essential to improving financial management governmentwide and achieving an opinion on the U.S. government’s consolidated financial statements. Of the 27 areas on GAO’s high-risk list, DOD has 8 of its own high-risk areas and shares responsibility for 7 governmentwide high-risk areas. These weaknesses adversely affect the department’s and the federal government’s ability to control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent fraud, waste, and abuse; and address pressing management problems. Additionally, the department invests billions of dollars each year to operate, maintain, and modernize its business systems. Despite this significant annual investment, however, the department has continually struggled to implement business systems on time, within budget, and with the promised capability. 
We also have concerns about the reasonableness, reliability, and transparency of DOD’s budget requests, especially the supplemental budget requests the department has submitted to the Congress in recent years. Reasonableness and reliability are critical factors not only for financial information, but also for budget data. As I testified last year, our prior work found numerous problems with DOD’s processes for recording and reporting costs for the Global War on Terrorism (GWOT), the funding for which has been provided through regular appropriations as well as supplemental appropriations. These problems included long-standing deficiencies in DOD’s financial management systems and business processes, the use of estimates instead of actual cost data, and the lack of adequate supporting documentation. As a result, neither DOD nor the Congress has reliable information on GWOT costs or the use of appropriated funds, and both lack historical data useful in considering future funding needs. The nature and severity of DOD’s financial management, business operations, and system deficiencies not only affect financial reporting, but also impede the ability of DOD managers to receive the full range of information needed to effectively manage day-to-day operations. Such weaknesses have adversely affected the ability of DOD to control costs, ensure basic accountability, and prevent fraud. The following examples illustrate DOD’s continuing problems. We found that hundreds of separated battle-injured soldiers were pursued for collection of military debts incurred through no fault of their own, including 74 soldiers whose debts had been reported to credit bureaus, private collection agencies, and the Treasury Offset Program at the time we initiated our audit. Overpayment of pay and allowances (entitlements), pay calculation errors, and erroneous leave payments caused 73 percent of the reported debts. 
Over the past several years, we have reported on significant pay problems experienced by mobilized Army National Guard and Army Reserve (Army Guard and Reserve) soldiers in the wake of the September 11, 2001, terrorist attacks. These reports included examples of hundreds of soldiers receiving inaccurate and untimely payroll payments due to a paper-intensive, error-prone pay process and the lack of integrated pay and personnel systems. In response to our reports, DOD has taken some action to improve controls designed to pay Army Guard and Reserve soldiers accurately and on time, especially those who had become sick or injured in the line of duty. In March 2006, we reported that DOD’s policies and procedures for determining, reporting, and documenting cost estimates associated with environmental cleanup or containment activities were not consistently followed. Further, none of the military services had adequate controls in place to help ensure that all identified contaminated sites were included in their environmental liability cost estimates. These weaknesses affected the reliability not only of DOD’s environmental liability estimate, but also of that of the federal government as a whole. In May 2005, we reported that DOD did not have management controls in place to assure that excess inventory was reutilized to the maximum extent possible. We found significant waste and inefficiency because items in new, unused, or excellent condition were transferred and donated outside of DOD, sold for pennies on the dollar, or destroyed. Root causes for the waste and inefficiency included (1) unreliable excess property inventory data; (2) inadequate oversight and physical inventory control; and (3) outdated, nonintegrated excess inventory and supply management systems. The department is provided billions of dollars annually to operate, maintain, and modernize its stovepiped, duplicative, legacy business systems. 
Despite this significant investment, the department is severely challenged in implementing business systems on time, within budget, and with the promised capability. Many of the problems related to DOD’s inability to effectively implement its business systems can be attributed to its failure to implement the disciplined processes necessary to reduce the risks associated with these projects to acceptable levels. Disciplined processes have been shown to reduce the risks associated with software development and acquisition efforts and are fundamental to successful systems acquisition. The weaknesses that we found in DOD business systems implementations such as the Defense Travel System, the Logistics Modernization Program, and the Navy’s Enterprise Resource Planning (ERP) efforts illustrate the types of system acquisition and investment management controls that need to be effectively implemented in order for a given investment to be successfully acquired and deployed. Meeting the Challenge of Transforming DOD Financial and Business Management Practices. Successful reform of DOD’s fundamentally flawed financial and business management operations must simultaneously focus on its systems, processes, and people. DOD’s top management has demonstrated a commitment to transforming the department and has launched key initiatives to improve its financial management processes and related business systems such as the Financial Improvement and Audit Readiness (FIAR) Plan. However, DOD still lacks two key elements that are needed to ensure a successful and sustainable transformation effort. As we have previously recommended, DOD should develop and implement an integrated and strategic business transformation plan. Since 1999, we have recommended the need for a comprehensive, integrated strategy and action plan for reforming DOD’s major business operations and support activities. 
Critical to the success of DOD’s ongoing transformation efforts will be top management attention and structures that focus on transformation from a broad perspective and a clear, comprehensive, integrated, and enterprisewide plan that, at a summary level, addresses all of the department’s major business areas. Because of the complexity and long-term nature of DOD’s business transformation efforts, we again reiterate the need for a chief management officer (CMO) to provide sustained leadership and maintain momentum, as we have previously testified. The National Defense Authorization Act for Fiscal Year 2006 directed the department to study the feasibility of a CMO position in DOD. In this regard, the Institute for Defense Analyses issued its report in December 2006 and, among other things, called upon the Congress to establish a Deputy CMO (level III official) at the department. Further, in May 2006, the Defense Business Board recommended, among other things, the creation of a Principal Under Secretary of Defense, as a level II official with a 5-year term appointment, to serve as CMO. I strongly support a level II official and believe that someone at this level is needed to be successful given the magnitude of the challenge and the need to effect change across the department. It is important to note that a CMO would not assume the responsibilities of the undersecretaries of defense, the service secretaries, or other DOD officials for the day-to-day management of the department. Rather, the CMO would be responsible and accountable for planning, integrating, and executing the overall business transformation effort. The reason I am so passionate about the need for a CMO at DOD is that progress at DOD has historically been painfully slow. A host of well-intended past improvement initiatives has largely failed. I am concerned that without a CMO who is responsible and accountable for demonstrable results and sustained success, history will continue to repeat itself. 
In the area of agency financial and performance reporting, I see obtaining unqualified opinions on financial statements at all CFO Act agencies as the primary challenge. While significant progress has been made by many CFO Act agencies to prepare timely annual financial statements that can pass the scrutiny of a financial audit, several agencies continue to struggle to reach this milestone. For fiscal year 2006, five CFO Act agencies—DOD, DHS, the National Aeronautics and Space Administration (NASA), and the Departments of Energy and Transportation—failed to meet this basic requirement. Problems at NASA and the Department of Energy stem from deficiencies in those agencies’ implementation of new financial management systems, among other things. The Department of Transportation auditors cited significant problems with a key accounting practice at the Federal Aviation Administration as the underlying cause for qualifying their opinion on the department’s financial statements. As I previously discussed, the problems faced by DOD are so pervasive that, in accordance with section 1008 of the fiscal year 2002 National Defense Authorization Act, DOD acknowledged for the sixth year that its systems could not support material amounts on its fiscal year 2006 financial statements; accordingly, the auditors did not perform auditing procedures and disclaimed an opinion. At DHS, the auditors recognized that the department has not yet established the necessary infrastructure and internal control and disclaimed an opinion on its financial statements. Problems at these agencies also significantly impair our ability to provide an opinion on the U.S. government’s consolidated financial statements. Meeting the Challenge of Improved Financial and Performance Reporting. 
Addressing the financial and performance reporting weaknesses that impede CFO Act agencies from obtaining unqualified, or clean, opinions on their respective financial statements will vary depending upon the circumstances at each agency. Developing and implementing corrective action plans to address the identified problems are time-honored methods for resolving such problems. For example, the DOD Comptroller launched the FIAR Plan to guide improvements to address financial management deficiencies and achieve clean financial statement audit opinions. This plan incorporates our prior recommendations and ties planned improvement activities at the component and department levels together with accountable personnel, milestones, and required resources. We view the incremental line item approach, integration plans, and oversight structure outlined in the FIAR Plan for examining DOD’s operations and preparing for an audit as a significant improvement over prior financial improvement initiatives. However, we continue to stress that the effectiveness of DOD’s FIAR Plan will ultimately be measured by the department’s ability to provide timely, reliable, and useful information for day-to-day management and decision making. Since the passage of the CFO Act and FFMIA, there has been progress in achieving the financial systems requirements of these landmark laws. While improvements have been made throughout government, much work remains to fulfill the underlying goals of the CFO Act and FFMIA. In fiscal year 1997, 20 agencies were reported as having systems that were not in substantial compliance with at least one of the three FFMIA systems requirements, while in fiscal year 2006, auditors for 17 of the CFO Act agencies reported that the agencies’ financial management systems did not substantially comply with at least one of the three FFMIA requirements. 
The major barrier to achieving compliance with FFMIA continues to be the inability of agencies to meet federal financial management systems requirements, which involve not only core financial systems, but also administrative and programmatic systems. While the problems are much more severe at some agencies than at others and progress has been made in addressing financial management systems’ weaknesses, the lack of substantial compliance with the three requirements of FFMIA, and the associated deficiencies, indicates that the financial management systems of many agencies are still not able to routinely produce reliable, useful, and timely financial information. Consequently, the ultimate objective, the federal government’s access to relevant, timely, and reliable data to effectively manage and oversee its major programs, was and continues to be restricted. What is most important is that the problem has been recognized. Across government, agencies have efforts under way to implement new financial management systems or to upgrade existing systems. Agencies expect that the new systems will provide reliable, useful, and timely data to support day-to-day managerial decision making and assist taxpayer and congressional oversight. Whether in government or the private sector, implementing and upgrading information systems is a difficult job that brings a degree of new risk. Organizations that follow and effectively implement accepted best practices in systems development and implementation (commonly referred to as disciplined processes) can manage and reduce these risks to acceptable levels. 
For example, as part of our work at DOD, NASA, and other agencies that have experienced significant problems in implementing new financial management systems, we have consistently found that these agencies were not following the disciplined processes, human capital practices, and information technology management practices necessary for efficient and effective development and implementation of such systems. Challenges also exist in implementing OMB’s financial management line of business initiative, which is aimed at significantly improving the financial data government managers need to make timely and successful decisions and at reducing the cost of government operations. For example, as we reported in March 2006, the requirements for agencies and private sector firms to become shared service providers and the services they must provide have not been adequately documented or effectively communicated to agencies and the private sector. We made several recommendations focused on reducing the risks associated with this important initiative. During 2006, OMB addressed some of the weaknesses by issuing an initial version of migration planning guidance and publishing competition guidance for shared service providers and agencies. However, as OMB acknowledged in the Federal Financial Management Report 2007, it has not yet developed several critical elements needed to minimize risk, provide assurance, and develop understandings with software vendors, shared service providers, and agencies on topics such as standard business processes and common accounting codes. Further, a governmentwide concept of operations has not been developed that would identify the interrelationships among federal financial systems, determine which financial management systems should be operated at the agency level and which at the governmentwide level, and specify how those systems would integrate. 
In addition, processes have not been put in place to facilitate agency decisions on selecting a provider or to focus investment decisions on the benefits of standard processes and shared service providers. Meeting the Challenge of Modernizing Financial Systems. As the federal government moves forward with ambitious financial management system modernization efforts that identify opportunities to eliminate redundant systems and enhance information reliability and availability, adherence to disciplined processes, sound human capital practices, and proven information technology management practices is crucial to reduce risks to acceptable levels. To help address the underlying problems agencies face in implementing financial management systems that adhere to the requirements of the CFO Act and FFMIA, we have made numerous recommendations to agencies to address the specific shortcomings we identified. For example, at NASA we made a total of 45 recommendations aimed at addressing weaknesses we identified in NASA’s acquisition and implementation strategy for a new integrated financial management system. The key to avoiding these long-standing problems is to provide specific guidance to agencies that incorporates the best practices identified by the Software Engineering Institute, the Institute of Electrical and Electronics Engineers, and other experts. Toward this end, we have recommended that OMB develop such guidance to help minimize the waste of scarce resources from modernization failures. We have also made a number of recommendations to OMB to help it provide a solid foundation for the financial management line of business initiative. OMB has projects under way to develop standard business processes, a common accounting code, and specific measures to assess the performance of the shared service providers to help address some shortcomings we identified. 
While all of these projects are important, developing a concept of operations is an especially critical step because it lays the foundation for many subsequent decisions. While continuing progress has been made in strengthening internal control, the federal government still faces numerous internal control problems, some of which are long-standing and well-documented at the agency level and governmentwide. As we have reported for a number of years in our audit reports on the U.S. government’s consolidated financial statements, the federal government continues to have material weaknesses and reportable conditions in internal control related to property, plant, and equipment; inventories and related property; liabilities and commitments and contingencies; cost of government operations; and disbursement activities, to mention just a few of the problem areas. Particularly problematic to the U.S. government’s consolidated financial statements is the lack of internal controls to adequately account for and reconcile intragovernmental activity and balances between federal agencies. Although OMB and Treasury require the CFOs of 35 executive departments and agencies to reconcile intragovernmental activity and balances on a quarterly basis, and to report annually to GAO and others on reconciliation efforts at the end of the fiscal year, a substantial number of agencies did not adequately perform these reconciliations. To help address this problem, OMB worked with Treasury and the CFO Council to revise the business rules for intragovernmental transactions. Because these new rules became effective on October 1, 2006, it is too soon to tell whether they will have the desired effect of strengthening internal controls. Resolving the intragovernmental transactions problem remains a difficult challenge and will require a strong commitment by agencies to fully implement the recently issued business rules, as well as continued strong leadership by OMB. 
As we testified in February 2005, we support OMB’s efforts to revitalize internal control assessments and reporting through the December 2004 revisions to Circular No. A-123. These revisions recognize that effective internal control is critical to improving federal agencies’ effectiveness and accountability and to achieving the goals established by the Congress. They also considered the internal control standards issued by GAO, which provide an overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. OMB reported in its Federal Financial Management Report 2007 that CFO Act agencies identified new financial reporting material weaknesses under this revised guidance, which is an important first step. As agencies expand their assessments and all agencies complete a full-scope assessment of internal control over financial reporting, they will develop a better understanding of the full nature and extent of material weaknesses. Effective internal control, as envisioned in the revised Circular No. A-123, inherently includes a successful strategy for addressing improper payments. Attacking improper payment problems requires a strategy appropriate to the organization involved and its particular risks. We have found that entities using successful strategies to address their improper payment problems shared a common focus on improving the internal control system—the first line of defense in safeguarding assets and preventing and detecting errors and fraud. The Congress acted strongly to address the improper payment problem by passing IPIA, and in fiscal year 2005 OMB began to separately track the elimination of improper payments under the PMA. 
As I pointed out in testimony before this Subcommittee in December 2006, while agencies are making progress in reporting under IPIA, three major challenges remain in meeting the goals of the act. First, existing reporting was incomplete because some agencies still had not instituted systematic methods to review all programs, and some program estimates were not based on a valid statistical sampling methodology as required. Second, improper payment estimates had not been provided for 10 risk-susceptible programs with outlays totaling over $234 billion in fiscal year 2005. Finally, OMB’s implementing guidance includes specific criteria that limit the disclosure and transparency of agencies’ improper payments.

Meeting the Challenge of Addressing Internal Control Weaknesses. Actions can be taken on several fronts to help resolve internal control weaknesses. As pointed out in our February 2005 testimony on internal controls, six issues are critical to effectively implementing the changes to Circular No. A-123—specifically, the need for: (1) development of supplemental guidance and implementation tools to help ensure that agency efforts are properly focused and meaningful; (2) vigilance over the broader range of controls covering program objectives; (3) strong support from managers throughout the agency, and at all levels; (4) risk-based assessments and an appropriate balance between the costs and benefits of controls; (5) management testing of controls in operation to assess whether they are adequately designed and operating effectively, and to assist in formulating corrective actions; and (6) management accountability for control breakdowns. Addressing the multitude of problems in financial reporting internal controls, including reconciling intragovernmental activity and balances, that have been identified to date will require a significant effort over a long time. Many of these problems have been around for years and have proven resistant to actions to resolve them. 
Continuous monitoring by top agency management and OMB along with oversight by the Congress will be critical to successfully resolving these material weaknesses and enhancing financial management. The ultimate success of efforts to reduce improper payments depends, in part, on each agency’s continuing diligence and commitment to meeting the requirements of IPIA and the related OMB guidance. Full and reasonable disclosure of the extent of the problems could be enhanced by modifying the act’s underlying criteria used to identify which programs and activities are susceptible to significant improper payments, and we asked the Congress to consider amending IPIA to do so. We also recommended that OMB’s implementing guidance be strengthened in several areas. The financial management workforce plays a critical role in government because the scale and complexity of federal activities requiring financial management and control are monumental. The federal government has always faced the challenge of sustaining the momentum of transformation because of the limited tenure of key administration officials. The current administration’s PMA has served as a driver for governmentwide financial management improvements. It has been clear from the outset that the current administration is serious about improved financial management. We have been fortunate that, since the passage of the CFO Act, all three administrations have been supportive of financial management reform initiatives. And, as I discussed earlier, we have seen a positive cultural shift in the way the federal government conducts business. Given the long-term nature of the comprehensive changes needed and challenges still remaining to fully realize the goals of the CFO Act, it is unlikely they will all occur before the end of the current administration’s term. 
Therefore, sustaining a commitment to transformation in future administrations will be critical to ensure that key management reforms, such as the CFO Act, are fully attained. Changing the way business is done in a large, diverse, and complex organization like the federal government is not an easy undertaking. According to a survey of federal CFOs, federal finance organizations of the future will have fewer people, with a greater percentage of analysts, as opposed to accounting technicians. However, today most functions within federal finance organizations are focused primarily on (1) establishing and administering financial management policy; (2) tracking, monitoring, and reconciling account balances; and (3) ensuring compliance with laws and regulations. While they recognize the need for change, according to the CFOs surveyed, many questions remain unanswered regarding how best to facilitate such changes. When it comes to world-class financial management, our study of nine leading private and public sector financial organizations found that leading financial organizations often had the same or similar core functions (i.e., budgeting, treasury management, general accounting, and payroll) as the federal government. However, the way these functions were put into operation varied depending on individual entity needs. Leading organizations reduced the number of resources required to perform routine financial management activities by (1) consolidating activities at a shared service center and (2) eliminating or streamlining duplicative or inefficient processes. Their goal was not only to reduce the cost of finance but also to organize finance to add value by reallocating finance resources to more productive and results-oriented activities like measuring financial performance, developing managerial cost information, and integrating financial systems. 
The federal financial workforce that supports the business needs of today is not well-positioned to support the needs of tomorrow. A JFMIP study indicated that a significant majority of the federal financial management workforce performs transaction support functions of a clerical and technical nature. These skills do not support the vision of tomorrow’s business, which will depend on an analytic financial management workforce providing decision support. A 2005 survey of senior-level federal CFO executives noted that respondents still believed that mid- and lower-level personnel lack the skills needed for modern financial management. The 2005 survey also indicated that the federal CFO community thought that overly complex civil service rules made it difficult to recruit entry-level talent and nearly impossible to hire middle managers from outside the government. Our work has shown that staffing shortages, particularly at key agencies such as DOD, DHS, and Treasury, can adversely affect financial management operations. For example, as part of our work on the U.S. government’s consolidated financial statements, we found that personnel at Treasury’s Financial Management Service had excessive workloads that required an extraordinary amount of effort and dedication to compile the consolidated financial statements and that there were not enough personnel with specialized financial reporting experience to help ensure reliable financial reporting by the reporting date.

Meeting the Challenge of Building the Financial Management Workforce. We have previously identified several factors that are critical to resolving financial management human capital issues. Part of the commitment to transformation is the establishment of skilled and sustained leadership through the creation of a chief management officer (CMO) at selected federal agencies. 
The CMO would serve as the strategic, enterprisewide integrator of efforts to transform agency business operations, including financial management. While we have called for the creation of such a position specifically at DOD and DHS, in July 2006, a major global consulting firm recommended that the concept of a chief operating officer be instituted in many federal agencies as the means to help achieve the transformation that many agencies have undertaken. Building a world-class financial workforce will require a workforce transformation strategy devised in partnership between CFOs and agency human resource departments, now established in law as Chief Human Capital Officers, working with OMB and OPM. Agency financial management leadership must identify current and future required competencies and compare them to an inventory of the skills, knowledge, and abilities of current employees. Then they must strategically manage to fill gaps and minimize overages through informed hiring, development, and separation strategies. This is similar to the approach we identified when we designated strategic human capital management as a high-risk area in 2001. Achieving a successful financial management vision of the future will be directly determined by the workforce that supports it. In our view, adequate succession planning to ensure that these positions and other key senior-level financial management positions are promptly filled with highly qualified staff will be a key success factor in transforming federal financial management. 
As you know, GAO is responsible for auditing the consolidated financial statements included in the Financial Report of the United States Government (Financial Report), but we have been unable to express an opinion on them for the 10th year in a row because the federal government could not demonstrate the reliability of significant portions of the financial statements, especially in connection with major financial management challenges that I discussed earlier regarding DOD. The lack of effective internal controls to adequately account for and reconcile intragovernmental activity and balances is another primary challenge that impedes our ability to provide an opinion on the consolidated financial statements. The third major impediment that prevents us from rendering an opinion on the consolidated financial statements is the federal government’s ineffective process for preparing the consolidated financial statements. As I previously discussed, addressing the first two impediments will be difficult challenges. Resolving the weaknesses in the systems, controls, and procedures for preparing the consolidated financial statements is also a formidable challenge. While further progress was demonstrated in fiscal year 2006, the federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with U.S. generally accepted accounting principles. Most of the issues we identified in fiscal year 2006 existed in fiscal year 2005, and many have existed for a number of years. In addition, Treasury could not provide the final fiscal year 2006 consolidated financial statements and supporting documentation in time for us to complete all of our planned auditing procedures. 
During our fiscal year 2006 audit, we found the following: Treasury showed progress by demonstrating that amounts in the Statement of Social Insurance were consistent with the underlying federal agencies’ audited financial statements and that the Balance Sheet and the Statement of Net Cost were consistent with federal agencies’ financial statements prior to eliminating intragovernmental activity and balances. However, Treasury’s process for compiling the consolidated financial statements did not ensure that the information in the remaining three 2006 principal financial statements and notes was fully consistent with the underlying information in federal agencies’ audited financial statements and other financial data. To make the fiscal years 2006 and 2005 consolidated financial statements balance, Treasury recorded net decreases of $11 billion and $4.1 billion, respectively, to net operating cost on the Statement of Operations and Changes in Net Position, which it labeled “Other - Unmatched transactions and balances.” An additional net $10.4 billion and $3.2 billion of unmatched transactions were recorded in the Statement of Net Cost for fiscal years 2006 and 2005, respectively. Treasury is unable to fully identify and quantify all components of these unreconciled activities. The federal government did not have an adequate process to fully identify and report items needed to reconcile the operating results, which for fiscal year 2006 showed a net operating cost of $449.5 billion, to the budget results, which for the same period showed a unified budget deficit of $247.7 billion. We also noted other deficiencies related to the adequacy of required disclosures and whether amounts reported are complete. Treasury continued to make progress in addressing certain other internal control weaknesses in its process for preparing the consolidated financial statements. 
However, internal control weaknesses continued to exist involving a lack of (1) appropriate documentation of certain policies and procedures for preparing the consolidated financial statements, (2) adequate supporting documentation for certain adjustments made to the consolidated financial statements, and (3) effective management reviews. As in previous years, Treasury did not have adequate systems and personnel to address the magnitude of the fiscal year 2006 financial reporting challenges it faced, such as (1) the Governmentwide Financial Report System (GFRS) undergoing further development and not yet being fully operational, and (2) weaknesses in Treasury’s process for preparing the consolidated financial statements noted above. One of the underlying causes of these weaknesses, as I discussed earlier, is the lack of sufficient personnel with specialized financial reporting experience to help ensure reliable financial reporting by the reporting date.

Meeting the Challenge of Strengthening Consolidated Financial Reporting. During fiscal year 2006, Treasury, in coordination with OMB, developed and began implementing corrective action plans and milestones for short-term and long-range solutions for certain internal control weaknesses we have previously reported regarding the process for preparing the consolidated financial statements. In April 2006, we reported in greater detail on these issues and provided recommendations to OMB and Treasury. Resolving some of these internal control weaknesses will require a strong commitment from Treasury and OMB as they execute and implement their corrective action plans. Overcoming current challenges will be difficult, but after a decade of reporting at the governmentwide level perhaps now is an appropriate time to step back and consider the need for further revisions to the current federal financial reporting model, which would affect both consolidated and agency financial reporting. 
While the current reporting model recognizes some of the unique needs of the federal government, a broad reconsideration of the federal financial reporting model could address the following types of questions. What kind of information is most relevant and useful for a sovereign nation? Do traditional financial statements convey information in a transparent manner? What is the role of the balance sheet in the federal government reporting model? How should items that are unique to the federal government, such as social insurance commitments and the power to tax, be reported? Engaging in a reevaluation of this nature could stimulate discussion that would bring about a new way of thinking about the federal government’s financial and performance reporting needs. To understand various perceptions and needs of stakeholders for federal financial reporting, a wide variety of stakeholders from the public and private sector should be consulted. Ultimately, the goal of such a reevaluation would be reporting enhancements that can help the Congress deliberate strategies to address the federal government’s challenges, including those of our growing long-term fiscal imbalance. More specifically, we continue to support several specific improvements to federal financial reporting. For example, the federal government’s financial reporting should be expanded to disclose the reasons for significant changes during the year in scheduled social insurance benefits and funding. It should also include a Statement of Fiscal Sustainability—providing a long-term look at the sustainability of current federal fiscal policy in the context of all major federal spending programs and tax policies. 
The reporting on fiscal sustainability should include additional information that will assist in understanding the sustainability of current social insurance and other federal programs, including key measures of fiscal sustainability and intergenerational equity, projected annual cash flows, and changes in fiscal sustainability during the reporting period. We believe that such reporting needs to reflect the significant commitments associated with the Social Security and Medicare programs while recognizing a liability for the net assets (principally investments in special U.S. Treasury securities) of the “trust funds.” We support the current efforts of the Federal Accounting Standards Advisory Board (FASAB) to begin a project on fiscal sustainability reporting. In addition, an easily understandable summary annual report should be prepared and published that includes, in a clear, concise, and transparent manner, key financial and performance information embodied in the Financial Report. Later in this statement, I offer other suggestions for improved reporting that will help in this regard. Successfully addressing the six primary challenges I just described will undoubtedly help strengthen the federal government’s financial and performance reporting and resolve many accountability and stewardship challenges. This will become increasingly important, because as I stated in our audit report included in the Financial Report, testified before the Congress, and emphasized in numerous speeches, the nation’s current fiscal path is unsustainable and tough choices by the President and the Congress are necessary to address the nation’s large and growing long-term fiscal imbalance. The federal government’s financial condition and fiscal outlook are worse than many may understand. We are currently experiencing strong economic growth and yet running large on-budget (operating) deficits that are largely unrelated to the Global War on Terrorism. 
Despite an increase in revenues in fiscal year 2006 of about $255 billion, the federal government reported that its costs exceeded its revenues by $450 billion (i.e., net operating cost) and that its cash outlays exceeded its cash receipts by $248 billion (i.e., unified budget deficit). Further, as of September 30, 2006, the U.S. government reported that it owed (i.e., liabilities) more than it owned (i.e., assets) by almost $9 trillion. In addition, the present value of the federal government’s major reported long-term “fiscal exposures”—liabilities (e.g., debt), contingencies (e.g., insurance), and social insurance and other commitments and promises (e.g., Social Security, Medicare)—rose from about $20 trillion to over $50 trillion in the last 6 years. The federal government faces large and growing structural deficits in the future due primarily to known demographic trends and rising health care costs. These structural deficits—which are virtually certain given the design of our current programs and policies—will mean escalating and ultimately unsustainable federal deficits and debt levels. Based on various measures—and using reasonable assumptions—the federal government’s current fiscal policy is unsustainable. In addition to considering the federal government’s current financial condition, it is critical to look at other measures of the long-term fiscal outlook of the federal government. An evaluation of the nation’s long-term fiscal outlook should include not only liabilities included in the Financial Report but also the implicit promises embedded in current policy and the timing of these longer-term obligations and commitments in relation to the resources available under various assumptions. Over the next few decades, the nation’s fiscal outlook will be shaped largely by known demographic trends and rising health care costs. 
As the baby-boom generation retires, federal spending on current retirement and health care programs—Social Security, Medicare, and Medicaid—will grow dramatically. A range of other federal fiscal commitments, some explicit and some representing implicit public expectations, also bind the nation’s fiscal future. Absent policy changes, a growing imbalance between expected federal spending and tax revenues will mean escalating and ultimately unsustainable federal deficits and debt levels. There are various ways to consider and assess the long-term fiscal outlook, including the Statement of Social Insurance, major reported long-term fiscal exposures, and long-term fiscal simulations. Statement of Social Insurance. The Statement of Social Insurance in the Financial Report displays the present value of projected revenues and expenditures for scheduled benefits of certain benefit programs that are referred to as social insurance (e.g., Social Security, Medicare). For Social Security and Medicare alone, projected expenditures for scheduled benefits for the next 75 years exceed earmarked revenues (e.g., dedicated payroll taxes, premiums, and existing government bonds in the trust funds) for the same period by approximately $39 trillion in present value terms. Stated differently, one would need approximately $39 trillion invested today to deliver on the currently promised benefits for the next 75 years. Table 1 shows a simplified version of the Statement of Social Insurance by its primary components. Major Reported Long-Term Fiscal Exposures. GAO developed the concept of “fiscal exposures” to provide a framework for considering the wide range of responsibilities, programs, and activities that explicitly or implicitly expose the federal government to future spending. The concept of fiscal exposures is meant to provide a broader perspective on long-term costs. 
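The present value figures cited in the Statement of Social Insurance rest on standard discounting arithmetic. The sketch below uses invented cash flows and an assumed discount rate (the actual figures come from 75-year actuarial projections, not from this toy example); it shows how a stream of future shortfalls is translated into a single amount one would need "invested today."

```python
# Hypothetical sketch of the discounting used to state long-term shortfalls
# "in present value terms." All figures are invented for illustration.

def present_value(cash_flows, discount_rate):
    """Discount a stream of future annual cash flows back to today.

    cash_flows[0] is assumed to occur one year from now, cash_flows[1]
    two years from now, and so on.
    """
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Invented example: a program that runs a $100 shortfall each year for
# 10 years, discounted at an assumed 3 percent annual rate.
shortfalls = [100.0] * 10
pv = present_value(shortfalls, discount_rate=0.03)

# The present value is less than the $1,000 undiscounted total because
# later shortfalls count for less in today's dollars.
print(round(pv, 2))
```

The same mechanics, applied to 75 years of projected Social Security and Medicare shortfalls at the programs' actuarial discount rates, are what yield the approximately $39 trillion figure cited above.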
Major reported long-term fiscal exposures in fiscal year 2006 with a present value totaling over $50 trillion consisted of $10 trillion of liabilities reported on the Balance Sheet, $1 trillion of other commitments and contingencies, and the $39 trillion of social insurance responsibilities, the last two of which are reported elsewhere in the Financial Report. This $50 trillion compares to about $20 trillion in fiscal year 2000. These large numbers are difficult to comprehend. Table 2 seeks to translate them into several figures and ratios that are more understandable. Long-Term Fiscal Simulations. Another way to assess the U.S. government’s long-term fiscal outlook and the sustainability of federal programs is to run simulations of future revenues and costs for all federal programs, based on a continuation of current or proposed policy. The simulations GAO has published since 1992 are designed to do that. As shown in figure 1, GAO’s long-term simulations—which are neither forecasts nor predictions—continue to show ever-increasing long-term deficits resulting in a federal debt level that ultimately spirals out of control. The timing of deficits and the resulting debt buildup varies depending on the assumptions used, but under either optimistic (“Baseline extended”) or more realistic assumptions, the federal government’s current fiscal policy is unsustainable. Over the long term, the nation’s growing fiscal imbalance stems primarily from the aging of the population and rising health care costs. Absent significant changes on the spending or revenue sides of the budget or both, these long-term deficits will encumber a growing share of federal resources and test the capacity of current and future generations to afford both today’s and tomorrow’s commitments. Continuing on this unsustainable path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our domestic tranquility and national security. 
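The debt-spiral mechanism behind such long-term simulations can be illustrated with a toy model. GAO's actual simulations are far more detailed; every parameter below (initial debt ratio, primary deficit, interest rate, growth rate) is invented for illustration and reflects none of GAO's assumptions.

```python
# Toy debt-dynamics model illustrating why persistent primary deficits
# compound into an unsustainable debt path when the interest rate on the
# debt exceeds the economy's growth rate.

def debt_path(debt_to_gdp, primary_deficit, interest_rate, gdp_growth, years):
    """Evolve the debt-to-GDP ratio year by year.

    Each year the ratio grows with interest on existing debt, shrinks as
    GDP grows, and rises by the primary (non-interest) deficit, all
    expressed as shares of GDP.
    """
    path = [debt_to_gdp]
    for _ in range(years):
        d = path[-1]
        d = d * (1 + interest_rate) / (1 + gdp_growth) + primary_deficit
        path.append(d)
    return path

# Invented example: debt at 37% of GDP, a 2%-of-GDP primary deficit,
# 5% interest, 3% GDP growth, simulated over 75 years.
path = debt_path(0.37, 0.02, interest_rate=0.05, gdp_growth=0.03, years=75)
print(path[-1] > path[0])  # True: the ratio rises every year
```

The key dynamic is that with the interest rate above economic growth and primary deficits that never close, the debt-to-GDP ratio compounds upward without bound, which is the "spirals out of control" pattern the simulations display.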
If, for example, as shown in figure 2, it is assumed that recent tax reductions are made permanent and discretionary spending keeps pace with the growth of our economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay little more than interest on debt held by the public and some Social Security benefits. Neither slowing the growth in discretionary spending nor allowing the tax provisions, including the tax cuts enacted in 2001 and 2003, to expire—nor both together—would eliminate the imbalance. At some point, action will need to be taken to change the nation’s fiscal course. The sooner appropriate actions are taken, the sooner the miracle of compounding will begin to work for the federal budget rather than against it. Conversely, the longer that action to deal with the nation’s long-term fiscal outlook is delayed, the greater the risk that the eventual changes will be disruptive and destabilizing. Acting sooner rather than later will give us more time to phase in gradual changes, while also providing more time for those likely to be most affected to make compensatory changes. The “fiscal gap” is a quantitative measure of long-term fiscal imbalance. Under GAO’s more realistic simulation, assuming debt held by the public remains at the current share of the economy (i.e., GDP), closing the fiscal gap would require spending cuts or tax increases equal to 8 percent of the entire economy each year over the next 75 years, or a total of about $61 trillion in present value terms. To put this in perspective, closing the gap would require an immediate and permanent increase in federal tax revenues of more than 40 percent or an equivalent reduction in federal program spending (i.e., in all spending except for interest on the debt held by the public, which cannot be directly controlled). 
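The "more than 40 percent" figure can be checked with back-of-the-envelope arithmetic. The 8-percent-of-GDP annual gap comes from GAO's simulation; the assumption that federal revenues run at roughly 18 percent of GDP is mine, for illustration only.

```python
# Back-of-the-envelope check of the "more than 40 percent" revenue figure.
# The 8%-of-GDP fiscal gap is from GAO's simulation; the assumption that
# federal revenues are roughly 18% of GDP is illustrative, not official.

fiscal_gap_share_of_gdp = 0.08
revenue_share_of_gdp = 0.18   # assumed, for illustration

required_revenue_increase = fiscal_gap_share_of_gdp / revenue_share_of_gdp
print(round(required_revenue_increase * 100))  # prints 44, i.e., "more than 40 percent"
```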
Although the long-term fiscal outlook is driven primarily by rising health care costs and known demographics, we cannot ignore other government programs and activities. There is a need to engage in a fundamental review, reprioritization, and reengineering of the base of government. Aligning the federal government to meet the challenges and capitalize on the opportunities of the 21st century will require a fundamental review of what the federal government does, how it does it, and how it is financed. Many of the federal government’s current policies, programs, functions, and activities are based on conditions that existed decades ago, are not results-based, and are not well aligned with 21st century realities. We need to address the growing costs of the major entitlement programs and also review and reexamine all other major programs, policies, and activities on both the spending and the revenue side of the budget. Programs that run through the tax code—sometimes referred to as tax expenditures—must be reexamined along with those that run through the spending side. As we move forward, the federal government needs to start making tough choices in setting priorities and linking resources and activities to results. Meeting our nation’s large, growing, and structural fiscal imbalance will require a multipronged approach: increasing transparency and enhancing the relevancy of key financial, performance, and budget reporting and estimates to highlight our long-term fiscal challenges; reinstituting and strengthening budget controls for both spending and tax policies to deal with both near-term and longer-term deficits; strengthening oversight of programs and activities, including creating approaches to better facilitate the discussion of integrated solutions to crosscutting issues; and reengineering and reprioritizing the federal government’s existing programs, policies, and activities to address 21st century challenges and capitalize on related opportunities. 
In my January 2007 testimony, I proposed a number of ideas for consideration to improve the transparency of long-term costs. In November 2006, I provided the congressional leadership with recommendations, based on the work of GAO, for consideration for the agenda of the 110th Congress. These recommendations focused on three areas: (1) targets for near-term oversight, (2) policies and programs that are in need of fundamental reform and reengineering, and (3) governance issues. One of the areas I pointed out that warranted congressional attention was the development of a portfolio of outcome-based key national indicators (e.g., economic, security, social, environmental) to help measure progress toward national outcomes, assess conditions and trends, and help communicate complex issues. The Congress could take a leadership role in highlighting the need for a U.S. national indicator system to inform strategic planning, enhance performance and accountability reporting, inform congressional oversight and decision making, and stimulate greater citizen engagement. In my view, this should include consideration of a public/private partnership to help make this key concept a reality sooner rather than later. In order to effectively address our long-term fiscal imbalance, fundamental reform of existing entitlement programs is essential. However, entitlement reform alone will not get the job done. We also need to reprioritize and constrain other federal government spending and generate more revenues—hopefully through a reformed tax system. GAO’s 21st Century Challenges: Reexamining the Base of the Federal Government contains a suggested list of specific federal activities for reexamination, illustrative reexamination questions, and perspectives on various strategies, processes, and approaches for congressional consideration stemming from our audit and evaluation work that can be used in reexamining the federal base. 
Answers to these questions may draw on the work of GAO and others; however, only elected officials can and should decide which issues to address as well as how and when to address them. Addressing these problems will require tough choices, and our fiscal clock is ticking. As a result, the time to start is now, to help save our future. In closing, given the federal government’s current financial condition and growing long-term fiscal imbalance, the need for the Congress and the President to have timely, reliable, and useful financial and performance information is greater than ever. Sound decisions on the current results and future direction of vital federal government programs and policies are more difficult without such information. Until the problems discussed in this testimony are effectively addressed, they will continue to have adverse implications for the federal government and the taxpayers. Since enactment of federal financial management reform legislation, we have seen continuous movement toward the ultimate goals of accountability laid out in the different financial management statutes. While early on some were skeptical, these laws have dramatically changed how financial management is carried out and the value placed on good financial management across government. Across government, financial management improvement initiatives are underway, and if effectively implemented, have the potential to greatly improve the quality of financial management information as well as the efficiency and effectiveness of agency operations. By the end of my term as Comptroller General, I would like to see the civilian CFO Act agencies routinely producing not only annual financial statements that can pass the scrutiny of a financial audit, but also quarterly financial statements and other meaningful financial and performance data to help guide decision makers on a day-to-day basis. 
For DOD, my expectations are not as high given the current status of DOD’s financial management practices, yet it is realistic for at least major portions of DOD’s financial information to become auditable by the end of my term. Moreover, progress on developing meaningful financial and performance reporting on the federal government will be a key area that I will continue to champion. I am determined to do whatever I can to help ensure that we are not the first generation to leave our children and grandchildren a legacy of failed fiscal stewardship and the hardships that would bring. Finally, I want to emphasize the value of sustained congressional interest in these issues, as demonstrated by this Subcommittee’s leadership. It will be key that going forward, the appropriations, budget, authorizing, and oversight committees hold agency top leadership accountable for resolving the remaining problems and that they support improvement efforts that address the challenges for the future I highlighted today. The federal government has made tremendous progress, and sustained congressional attention has been and will continue to be a critical factor to ensuring achievement of the goals and objectives of management reform legislation. Mr. Chairman, this completes my prepared statement and I want to thank you for the opportunity to participate in this hearing and for the strong support of this Subcommittee in addressing the need for financial management reform and accountability. I would be happy to respond to any questions you or other members of the Subcommittee may have at this time. For information about this statement, please contact Jeffrey C. Steinhoff, Managing Director, Financial Management and Assurance, at (202) 512-2600 or McCoy Williams, Director, Financial Management and Assurance, at (202) 512-9095 or [email protected]. 
Individuals who made key contributions to this testimony include Felicia Brooks, Robert Dacey, Kay Daly, Francine DelVecchio, Gary Engel, Susan Irving, Jay McTigue, Diane Morris, and Paula Rascona. Numerous other individuals made contributions to the GAO reports cited in this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The foundation laid by the Chief Financial Officers Act of 1990 and other management reform legislation provided a much needed statutory basis to improve the accountability of government programs and operations. Such reforms were intended to produce reliable, timely, and useful financial information to help manage day-to-day operations and exercise oversight and promote fiscal stewardship. This testimony, based on GAO's prior work, addresses (1) the progress made and challenges remaining to improve federal financial management practices, and (2) the serious challenges posed by the government's deteriorating long-range fiscal condition and the Comptroller General's views on a possible way forward. Since the enactment of key financial management reforms, the federal government has made substantial progress in improving financial management activities and practices. Federal financial systems requirements have been developed, and internal control has been strengthened. Nonetheless, the federal government still has a long way to go to address the six principal challenges to fully realizing strong federal financial management: (1) transforming financial management and business practices at DOD, (2) improving agency financial and performance reporting, (3) modernizing financial management systems, (4) addressing key remaining internal control weaknesses, (5) building a financial management workforce for the future, and (6) strengthening consolidated financial reporting. From a broad financial management perspective, the federal government's financial condition and fiscal outlook are worse than many understand. We are currently experiencing strong economic growth and yet running large on-budget (operating) deficits that are largely unrelated to the Global War on Terrorism. The federal government faces large and growing structural deficits in future years due primarily to known demographic trends and rising health care costs. 
If it is assumed that recent tax reductions are made permanent and discretionary spending keeps pace with the growth of our economy, GAO's long-term simulations suggest that by 2040, federal revenues may be adequate to pay little more than interest on debt held by the public and some Social Security benefits. Neither slowing the growth of discretionary spending nor allowing certain tax provisions to expire--nor both together--would eliminate the imbalance.
The overall transition known as “restructuring” in the electricity industry reflects a shift from a monopolistic to a more competitive industry. The electric utility industry was considered one of the nation’s most regulated industries, with states regulating utilities’ retail or intrastate activities and the federal government regulating utilities’ wholesale or interstate transactions. In the past, electricity service providers enjoyed a natural monopoly, providing electricity generated by their plants, transmitted over their power lines, and distributed to their customers. Two key factors led that monopolistic structure to move toward a more competitive marketplace. First, new technologies reduced the cost of generating electricity and the size of plant needed to generate it efficiently. Currently, there is a preference for small-scale production facilities that can be brought on-line more quickly and cheaply with fewer regulatory impediments. Second, federal changes were made in the industry’s regulation. Specifically, the enactment of the Public Utility Regulatory Policies Act (PURPA) of 1978 initiated the process for a transition or restructuring to a freer electric power market by requiring utilities to buy electricity produced by nonutility producers. Then in 1992, the Energy Policy Act (EPACT) was enacted, removing several regulatory barriers to entry into electricity generation and promoting further competition. Restructuring is underway for wholesale markets, which involve the sale of electricity for resale. It is also underway for some retail sales to end users, which include residential, commercial, industrial, and other consumers. Federally regulated wholesale power markets already provide market-based prices. 
States, however, vary greatly in their response to restructuring: some states have introduced competition to the retail markets in their states, others have begun to restructure but then delayed or suspended these efforts, and others still have taken no steps to restructure their markets. As the industry adapts to restructuring, many utilities are facing greater competition from nonutilities and other new entities such as power marketers. The introduction of competition has considerably expanded the number and types of business arrangements involving generation and transmission of electricity. In addition, proposals for new regulatory structures to oversee the new industry are emerging. FERC and EIA are the two leading entities for the collection, analysis, and evaluation of electric power information. FERC collects information to assure just and reasonable rates on the basis of costs. FERC also collects and obtains information from other federal and nonfederal sources to monitor and regulate competitive markets for wholesale electricity to similarly determine if these prices are just and reasonable. EIA is mandated to collect, assemble, evaluate, analyze, and disseminate energy data and energy information for the Congress, the federal government, the states, and the public. The Federal Power Act (FPA) of 1935, PURPA, and EPACT drive FERC’s information collection activities. FPA authorizes FERC to collect and record information to the extent it deems necessary and to prescribe rules and regulations concerning accounts, records, and memoranda. In general, FPA provides for federal oversight of interstate transmission and wholesale sales by public utilities. Forty-three years later, the Congress enacted PURPA in response to the unstable energy climate of the late 1970s. PURPA authorizes FERC to collect information on the basic cost and quality of fuels at electric generating plants. 
FERC uses such data to conduct fuel reviews and rate investigations and to track market changes and trends. In addition, PURPA requires public utilities to report on electric energy shortages and contingency plans to FERC and appropriate state agencies. In 1992, the Congress enacted EPACT. EPACT created a new category of power sellers called exempt wholesale generators that are not subject to regulation under the Public Utility Holding Company Act (PUHCA), which governs how utilities can be legally organized. These power sellers must apply to FERC for PUHCA exemption. Legislation created EIA and defined its information collection activities. In 1974, the Congress enacted the Federal Energy Administration Act that created the Federal Energy Administration. The act mandated the Federal Energy Administration to collect, assemble, evaluate, and analyze energy information for the federal government, state governments, and the public and provided it with information collection enforcement authority for gathering information from energy producing and consuming firms. Two years later, the Energy Conservation and Production Act established the Office of Energy Information and Analysis, mandating it to operate a comprehensive National Energy Information System; possess expertise in energy analysis and forecasting; coordinate information activities with federal agencies; promptly provide upon request any energy information to any duly established committee of the Congress; and make periodic reports on the energy situation and trends to the Congress. In 1977, by enacting the Department of Energy Organization Act, the Congress established EIA as the federal authority for energy information. This act gave EIA independence to collect energy data and report energy information, including all the provisions of its predecessor, and established an annual survey to gather and report detailed energy industry financial data. 
In 1992, EPACT required EIA to expand its data gathering and analysis in several areas, including energy consumption, alternative-fueled vehicles, greenhouse gas emissions, fossil fuel transportation rates and distribution patterns, electricity production from renewable energy sources, and foreign purchase and imports of uranium. Federal agencies’ information collection activities are subject to the Paperwork Reduction Act. The purpose of the Paperwork Reduction Act is to minimize the paperwork burden for all individuals and entities that must report information to the federal government. The Office of Management and Budget (OMB) oversees governmental initiatives to reduce the paperwork burden and improve the management of information resources. The Paperwork Reduction Act requires federal agencies to submit their data collection tools to OMB for review. OMB is also responsible for the implementation of the Government Paperwork Elimination Act, which requires federal agencies, by October 21, 2003, to allow individuals, or entities that interact with the agencies, the option of submitting information to agencies electronically, whenever practicable. The North American Electric Reliability Council (NERC) is one of the most important nonfederal entities that collect data from the electricity industry. NERC, formed as a result of a devastating outage in the northeast during November 1965, was established to promote the reliability of the interconnected electric power system. NERC membership is voluntary, consists of representatives from utilities across North America, and provides a forum for the electric utility industry to develop policies, standards, and guidelines designed to ensure reliability. One of its key functions is to collect information from its members, among other things, on power plant operations and outages. NERC reports information in an aggregated format to protect information its members consider sensitive. 
Federal agencies collect three types of electricity-related information for widely varying purposes in accordance with their different missions. Some agencies such as FERC, EIA, RUS, SEC, and EPA collect information on an ongoing, regular basis, using forms or form-like surveys. However, there is a lag between the reporting period, when the information is collected, and the time when an agency reports the information. As a result, the information usually does not reflect current market conditions. Restructuring has led to a greater need for a second type of information, focusing on current activities, for purposes of monitoring by FERC in particular. Third-party sources, such as Bloomberg’s Professional Services, provide current and historical information on regional electricity and gas markets, including spot and future prices, market commentary, plant outage information, and energy news. Investigations create a need for a third type of information, when an agency such as the Department of Justice gathers information mainly in conjunction with specific company criminal investigations. To meet their missions, agencies collect a wide variety of electricity-related information. FERC and EIA are the primary gatherers of such information while other agencies, such as the Federal Trade Commission, have gathered information only for occasional reports. As shown in figure 1, FERC, DOE’s EIA and Office of Fossil Energy, RUS, and EPA specifically collect information related to generation, transmission, and/or distribution functions of electric power. FERC, EIA, and RUS collect information related to all three of these functions. In addition, EIA collects end-user information such as residential, commercial, and industrial usage. Additionally, DOE’s Office of Fossil Energy is the only office that collects information related to electricity imports and exports. Finally, EPA collects emissions information related to the generation of power. 
The following graphic depicts federal agency information collections within these functions. FERC, an independent regulatory agency, was established in 1977 as a successor to the Federal Power Commission. In addition to regulating and overseeing the interstate transmission and interstate wholesale sales of natural gas and electricity, FERC regulates the interstate transmission of oil by pipeline; licenses and inspects private, municipal, and state hydroelectric projects; and approves site choices as well as decisions to abandon interstate pipelines and related facilities no longer in use. In responding to this mission, FERC stated that it chooses regulatory approaches that foster competitive markets whenever possible, assures access to reliable service at a reasonable price, and gives full and fair consideration to environmental and community impacts in assessing the public interest of energy projects. Among its other duties, it reviews the rates set by the four federal power marketing administrations. FERC does not have legislative authority over electricity generation siting, construction of transmission lines, intrastate transmission, or retail sales, all of which fall under state or local jurisdiction. FERC also has no direct authority over system reliability—that is, ensuring that consumers can obtain electricity from the system, when, and in the amount, they want. Furthermore, FERC’s jurisdiction extends primarily to investor-owned utilities. FERC generally does not have jurisdiction over federally owned utilities, publicly owned utilities, or most cooperatively owned utilities. In 2000, FERC created the Office of Markets, Tariffs, and Rates, which was until recently responsible for regulating and overseeing competitive energy markets. In 2002, FERC created the Office of Market Oversight and Investigation (OMOI), which is still under development, to actively monitor developing competitive electricity markets. 
FERC collects information from the electricity industry, among other energy industries. According to a 2002 FERC memorandum regarding current information collections, FERC has 19 information collection activities that apply specifically to the electricity industry. This information generally focuses on activities related to generation and fuel, transmission, energy sales and purchases, consumption and distribution, and financial information. In addition, it has three other information collection activities that relate to all three energy industries (electric, natural gas, and oil pipeline). These collection activities generally focus on information needed to conduct financial and compliance audits, preservation of records, and complaint procedures. Various legislative authorities authorize FERC’s information collection activities and compliance is mandatory. The Office of Markets, Tariffs, and Rates uses the information from these collection activities to provide historical context and assist it in regulating and overseeing the terms and conditions for energy transactions regulated under the traditional cost-of-service basis, and more recently, approval of electricity company mergers. Additionally, other offices, such as the Office of Administrative Litigation, which is responsible for litigating or resolving cases set for hearings, use the information as the basis for hearings. Traditionally, FERC primarily relied on standardized forms to routinely collect information, authorized by the statute and/or regulation, from entities within the electric sector. In the past, most of the information for these forms was submitted on paper, but FERC is currently moving toward electronic submissions for all of its information collection activities. FERC also has established reporting requirements where entities must make specific information available to it; however, these requirements are reported using a mix of standardized forms or formats. 
As with the forms, reporting requirements are submitted on paper and/or electronically. (See app. I for a summary of FERC’s forms.) OMOI uses FERC’s traditional information collection activities, mentioned above, to provide historical context to assist in understanding company activities and during investigations of specific companies. However, in light of evolving electricity markets, OMOI also subscribes to both commercial and proprietary information services to access information related to current market activities. Such services provide electricity market information such as prices on the spot market and futures contracts, plant outage information, and historical trend analysis. OMOI uses this information to oversee electricity markets and ensure market participants are not manipulating these markets. (See app. II for FERC’s third-party sources of current market information.) Two organizations within DOE are primarily responsible for collecting electricity-related information. These organizations, EIA and the Office of Fossil Energy (Fossil Energy), rely on forms to collect an enormous amount of information at regular intervals. EIA collects this information for a wide variety of statistical analyses and may assume responsibility for gathering the lesser amount of information for which Fossil Energy has been responsible. EIA is the principal source of comprehensive energy information for the Congress, the federal government, the states, and the public. According to EIA’s strategic plan, its mission is to provide high quality, policy-independent energy information to meet the requirements of government, industry, and the public in a manner that promotes sound policymaking, efficient markets, and public understanding. The plan further states that EIA’s sole purpose is to provide reliable and unbiased energy information. 
To meet its goal of providing high-quality energy information, the plan states that EIA will provide comprehensive information (data, analyses, and forecasts) for all energy types (including electricity), stages (production, conversion, distribution, supply, consumption, and price) and impacts (technical, economic, and environmental). EIA currently uses about 75 different forms to collect information on all aspects of energy, but only 9 of these forms focus on electricity. (See app. I for a summary of these forms.) All EIA forms are mandatory, with the exception of one part of one specific form as noted in the appendix. Information is collected annually or monthly. For its monthly surveys, EIA collects information from a sample of electricity entities, while the full universe is surveyed annually. In commenting specifically on its electric power information collection program, EIA notes that its information can be categorized into four broad information classes: physical systems, operational statistics, financial statistics, and organizational information. Physical system information provides the technical specifications for the generators, boilers, pollution control equipment, and transmission lines that make up the industry. Operational statistics provide the monthly and annual details of how the physical plant is operated to satisfy customer demand. Financial statistics consist of balance sheets, income statements, and supporting account information to determine the cost of producing electricity and providing related service. Organizational information describes the basic characteristics of the entities that comprise the electric power industry, including ownership and control, affiliations, and identification and geographical information. EIA and its customers use the information it collects for a variety of purposes. 
These include monitoring of market trends in supply, demand, and prices; analytical activities such as short- and long-term forecasting; and inputs to special studies, such as responses to congressional inquiries. EIA is also responsible for making sure its data are available to the public in easily accessible and user-friendly formats. Among its other responsibilities, Fossil Energy is responsible for the federal international electricity program, which consists of two elements: (1) granting presidential permits for the construction and operation of electric transmission lines that cross the U.S. international border and (2) authorizing exports of electric energy to foreign countries. Fossil Energy collects information on electric power imports and exports from both presidential permit and authorized export holders. The mandatory information is used in an annual report that summarizes the electricity trade between the United States and Mexico or Canada during each calendar year. A Fossil Energy official told us that EIA will eventually take over the responsibility of collecting information on imports and exports of electricity. EPA’s mission requires it to collect electricity-related information for regulatory purposes. In this regard, one of EPA’s most important initiatives is its Acid Rain Program implemented in 1995. The program specifies that all existing utility units serving generators with an output capacity of greater than 25 megawatts and all new utility units must report their emissions. The emissions that must be reported include sulfur dioxide, nitrogen oxide, and carbon dioxide. While not an emission, the unit heat input (the caloric value of the fuel burned) must also be reported. The program’s overall goal is to achieve significant environmental and public health benefits through reductions in emissions of sulfur dioxide and nitrogen oxide, the primary causes of acid rain. In most cases, utility units use a continuous emission monitoring system. 
Units report hourly emissions information to EPA on a quarterly basis. The information for the three types of emissions and the unit heat input is then recorded in the Emissions Tracking System, which serves as a repository of information on the utility industry. At the end of each calendar year, EPA uses this information to compare the tons of actual emissions reported with each company’s authorized emissions. If a company exceeds its limits, then it will be penalized in accordance with the rules of the program. The tracking system is EPA’s primary electricity-related database used for regulatory purposes. It represents a significant commitment of personnel with about 40 full-time-equivalent staff currently assigned to its maintenance and use. In addition to the three emissions included in the Emissions Tracking System, EPA has focused particular attention on mercury emitted by coal-fired electric utilities. An EPA report in February 1998 identified mercury emissions from coal-fired plants as the toxic air pollutant of greatest concern for public health from these sources. This report and collected data were used to call for additional monitoring of mercury emissions so that a regulatory control strategy could be developed. Then, in November 1998, the agency announced its decision to require coal-fired electricity generating plants to collect and report such information for 1 year. The agency collected detailed information on mercury during 1999. It obtained information on (1) every coal-fired boiler in the United States, (2) mercury in samples of coal used by boilers, and (3) actual mercury emissions from the stacks of a randomly selected group of coal-fired boilers. The information, which was used to estimate 1999 nationwide and plant-by-plant mercury emissions from coal-fired boilers, confirmed that coal-fired plants are the largest source of human-caused mercury emissions in the United States—about 43 tons of mercury each year. 
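The year-end compliance check described above amounts to a simple comparison of each unit's reported annual tonnage against its authorized level. The sketch below illustrates only that comparison, using hypothetical figures and function names; it is not EPA's actual system, and real penalty calculations follow the program's rules.

```python
def excess_emissions(reported_tons, authorized_tons):
    """Return the tons by which reported annual emissions exceed the
    authorized level (0 if the unit stayed within its limit)."""
    return max(0, reported_tons - authorized_tons)

# Hypothetical year-end sulfur dioxide figures for two units, in tons:
# (reported, authorized)
units = {"Unit A": (950, 1000), "Unit B": (1200, 1000)}

# Keep only the units that exceeded their authorized emissions
violations = {name: excess_emissions(reported, authorized)
              for name, (reported, authorized) in units.items()
              if excess_emissions(reported, authorized) > 0}
# In this example, only Unit B exceeds its authorized level, by 200 tons
```

Under the actual program, any exceedance identified by such a comparison would trigger penalties in accordance with the program's rules; the sketch stops at identifying the exceedance.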
Further, in December 2000, the agency announced its decision to propose regulations to control mercury emissions from coal- and oil-fired plants by December 2003. EPA has also developed the Emissions and Generation Resource Integrated Database, the first complete database of emissions and resource mix for virtually every power plant and company that generates electricity in the United States. The Emissions and Generation Resource Integrated Database does not collect original information but assembles information already collected by EPA’s Emissions Tracking System, EPA’s 1999 mercury study, FERC, and several EIA forms. Taking advantage of previously confidential information on nonutility generators, the Emissions and Generation Resource Integrated Database reports its information for all U.S. power plants, including nonutility plants. The information, which encompasses more than 4,600 power plants and nearly 2,000 generating companies, is used to provide plant-specific analyses of emissions. It can also be aggregated at various levels, for example, individual states and larger regions, to provide more comprehensive analyses of issues relating to air quality. RUS officials told us that RUS has a different responsibility from other agencies that also collect information on electricity. RUS is a lending agency whereas EIA is a statistical agency and FERC and EPA are regulatory agencies. For this reason, according to these officials, there are distinct differences in the nature of the information collected. Because RUS is a lending agency, it seeks information primarily to determine the financial status of the entities wanting loans. As part of this effort, officials are interested in obtaining information on the sale and purchase of electricity and especially in determining whether their borrowers are buying and selling power from each other. 
RUS officials told us that it provided about $4 billion in loans during 2002 and has a total of about $34 billion in outstanding funds plus new loans. Potential borrowers have to meet RUS’s criteria for serving rural consumers and also its criteria for financial viability. In reviewing new loan requests for generating plants, these officials use their database to identify the need for and viability of each new plant. They analyze the ability of the prospective borrower to function in competitive markets. They told us that 45 percent or more of the electricity sold by rural electricity cooperatives comes from outside sources and that, almost without exception, they depend on transmission from outsiders. Some loans to nonprofit cooperatives are for facilities that may become part of a transmission system operated by an independent system operator (ISO) or a regional transmission organization (RTO). RUS uses two main forms to collect the relevant information. Both forms state their purpose as being to review an applicant’s financial situation. These forms collect information about the financial condition, assets, and operations of rural cooperatives. 
SEC was charged with administering PUHCA, which defines a holding company as any company that directly or indirectly owns, controls, or holds with power to vote, 10 percent or more of the outstanding voting securities of a public-utility company. Intrastate holdings and holdings meeting certain corporate standards may be exempted from the requirements of the act. Under the act, SEC regulates public utility holding companies. As of October 31, 2002, there were 18 electricity-and-gas and 7 electricity-only registered holding companies. SEC collects information from exempted and registered public utility holding companies through its general filing requirements and a set of forms designed with the sole purpose of enforcing the act. The registered holding companies engaged, through subsidiaries, in the electric utility business are subject to more rigorous reviews for transactions that might affect their financial and corporate structure. The collection of information from such holding companies registered under PUHCA ensures that SEC has comprehensive information on holding companies conducting substantial activities in more than one state. Other agencies, including the Department of Justice, the Federal Trade Commission, and the Commodity Futures Trading Commission, do not collect information on an ongoing, regular basis. Both the Department of Justice and the Federal Trade Commission have responsibilities to enforce antitrust laws, among others. According to an official in the Department of Justice’s Antitrust Division, the Division gathers electricity-related information for an informal investigation that may evolve into a case or a formal investigation associated with a specific case. The impetus for these investigations may come from the trade press and other news sources reviewed by the Department of Justice, a complaining party (a customer or competitor, for example), a request by FERC, or congressional inquiries. 
Referring to the informal type of investigation, the official said that reviews of the trade press or other news sources sometimes suggest a potential problem. If a trend emerges, additional general information is gathered on the issue. This may lead to a finding that there is no further ground for concern or to the opening of a formal investigation. The official added that companies are obligated to file information about transactions related to mergers or acquisitions subject to premerger notification requirements. If further information is needed to assess the effects of a transaction on competition, the Department of Justice can make a second request for more detailed information. The Federal Trade Commission is also responsible for enforcing a variety of federal antitrust and consumer protection laws and seeks to ensure that the nation’s markets function competitively. According to testimony provided by one of the Federal Trade Commission’s Commissioners, the application of federal antitrust laws can help in the transition to competition by ensuring that mergers do not aggravate market power problems or shield incumbent companies from new competition. A Federal Trade Commission assistant general counsel stated, however, that the Federal Trade Commission’s current involvement with electricity markets is minimal and that it has no ongoing information collection or standardized forms to obtain information on electricity markets. He also noted that the agency’s activity was largely confined to two reports and some comments on other federal and state agencies’ proposed rulemakings. The first report focused on features of competition in electricity markets that would benefit consumers, and the second updated the first with a greater focus on retail competition.
In preparing the second report, the Federal Trade Commission issued a notice seeking comments and looked at 10 representative states, for which information was obtained from state Web sites and state regulatory commission personnel. The second report identified “trouble spots” in developing competitive markets and recommended steps for states to take in addressing problems. It also identified barriers to new suppliers’ entry into these markets and the conditions conducive to such entry, but it did not conclude that these conditions by themselves would keep new suppliers from entering the market. With its reports completed, the Federal Trade Commission has discontinued its information collection on electricity markets. In addition, the Federal Trade Commission’s role in reviewing mergers, including those involving the electricity industry, has declined. The assistant general counsel commented that the Federal Trade Commission shares with the Department of Justice and FERC the responsibility for reviewing information relating to mergers. According to the official, however, the Federal Trade Commission’s role in reviewing such information has decreased because the rate of mergers has diminished recently. The Commodity Futures Trading Commission, an independent agency created by the Congress in 1974, regulates commodity futures and option markets in the United States. The agency protects market participants against manipulation, abusive trade practices, and fraud. Initially, agency officials stated that the agency currently had essentially no role in collecting information on electricity.
An agency official said that, for a period starting in 1996 and ending in 2000, the agency received information on trading in electricity futures conducted through the New York Mercantile Exchange, but this trading was discontinued because its participants found that electricity futures failed to provide an adequate “hedge,” or protection, against intermittent price volatility. However, according to an agency official, the New York Mercantile Exchange has since introduced several new electricity contracts, and the Commodity Futures Trading Commission will obtain information on these contracts. Such information will include, for example, contract details on prices, trading volume (purchases and sales), and descriptions of large trades. Restructuring, which has led to increasingly complex market activities requiring greater oversight, has highlighted the importance of sharing information. Agencies are increasingly using the Internet and a mix of other methods to enhance their ability to share information with other agencies and the public. In the past, agencies provided paper copies of published reports through their public reference rooms and upon request. However, since the advent of the Internet, most federal agencies have used it to allow access to publicly available documents. For example, EIA regularly publishes reports providing electricity-related statistics and now uses its Web site to allow easy access to current and past reports. FERC also makes its publicly available information accessible through its Web site using its Federal Energy Regulatory Records and Information System, which contains over 20 years of documents submitted to and issued by FERC. Despite the increased use of the Internet, agencies also maintain public reference rooms where paper copies of documents are made available.
Although federal agencies make extensive amounts of information available on their Internet Web pages, they also share information through a combination of other methods, such as meetings, investigations, conferences, and workshops. Specifically, a FERC official stated that FERC currently holds quarterly meetings with the Federal Trade Commission and the Department of Justice to discuss overlapping issues, focusing specifically on antitrust and market manipulation practices. The official added that FERC has met with EIA to coordinate and share information on information collection issues. Another FERC official stated that FERC does not have formal protocols for interacting with other agencies such as SEC, the Commodity Futures Trading Commission, and the Federal Bureau of Investigation; however, FERC interacts with these agencies on an ad hoc basis to assist them with their information needs and uses “shared access letters” to request information from other agencies’ files. For example, FERC staff coordinated closely with the Department of Justice, SEC, the Commodity Futures Trading Commission, and the Department of Labor during their investigation of Enron. Recently, FERC cosponsored a technical conference with the Commodity Futures Trading Commission to discuss energy market credit issues, potential solutions to problems, and their implementation. Some agencies, such as Justice and the Federal Trade Commission, use formal approaches such as interagency agreements and established protocols to coordinate their work and share information. Of the eight federal agencies included in our review, we found that restructuring has significantly affected FERC, while other agencies were affected to a lesser extent. To respond to competitive markets, FERC has made important changes, for example, creating a new office to actively monitor markets to ensure they are competitive. These changes have affected its organizational structure and information collection activities.
However, FERC is limited in the information it is allowed to collect, primarily because of limitations in its authority. To diminish gaps in its information, FERC relies on information from third-party sources, some of which is suspect. Although less affected than FERC by restructuring, EIA has also made some changes to its information collection activities. For example, it has increased the number of entities it reports on and the amount of information collected and changed how it uses this information. Restructuring has affected other agencies’ collection of electricity information to a more limited extent but has raised other issues that affect how they share information. Over the past year, FERC has changed the way it performs market oversight from one that reacts to electricity market events to one that monitors markets on a day-to-day basis. This change has caused FERC to reassess the information it needs to monitor these markets. During 2002, FERC created a new office to actively monitor competitive electricity markets and undertook efforts to identify sources of market information and better understand its own information needs. Nonetheless, we found that FERC has gaps in the information it is allowed to collect, primarily because of limitations in authority. Consequently, FERC has increased its reliance on information from third-party sources in order to supplement the information it collects. However, this third-party information also has gaps, and we question the reliability of some of this information, as have others. Additionally, FERC plans to have RTOs and ISOs assist it by monitoring and routinely collecting information on electricity markets, but the formation of these organizations remains in question. In response to the evolving electricity markets, FERC realized that it needed to reorganize and created OMOI in fiscal year 2002 to monitor increasingly competitive electricity markets. 
OMOI’s mission is to guide the evolution and operation of energy markets to ensure effective regulation and consumer protection through understanding markets and their regulation, timely identification and remediation of market problems, and compliance with FERC rules and regulations. To carry out its monitoring mission, OMOI uses its Market Monitoring Center, which was patterned after the market operation centers or rooms of ISOs and major energy trading companies. The Market Monitoring Center relies on computers and various software packages to make large amounts of information on electricity available in a usable format. The center uses both commercial and proprietary information services to access current market activities. Electricity market information provided by these services includes spot market and futures contract prices, plant outages, and historical information for trend analysis. FERC also subscribes to another new service provider that offers current information on the status and output of some generating units. OMOI also uses historical information from FERC’s traditional data collection activities to assist in its work. For example, during investigations, FERC’s forms for routine information collection provide historical baseline information that may be critical in determining possible market manipulation and/or unjustified prices. Appendix II contains information on the commercial and proprietary information services FERC uses and descriptions of the types of information provided. In fiscal year 2002, FERC completed studies to take stock of the agency’s current and future market information needs. As part of this effort, FERC formed teams to identify information that FERC currently collects and additional information that it might need.
The study on current information needs identified 19 active information collection and reporting requirements for the electric energy sector and three that relate to all three energy sectors (electric, natural gas, and oil pipeline). The study on future information needs identified a core body of information FERC must know to adequately understand how it might exercise its oversight authority and information needs to accommodate a range of regulatory approaches. The core body of information includes eight categories and the specific data elements, descriptions, and potential sources of this information. The categories are demand for electric power, supply, operations and congestion management, market participants, transmission transactional information, market design and rules, and traditional regulatory functions. FERC’s intention was to make the information catalogues a “wish list” of every conceivable type of information FERC might ever want or need. According to OMOI, it is using the information from these two studies as a baseline to assess FERC’s overall market information needs. OMOI hired an energy industry analyst to continue with the information assessment project. The project’s mission statement focuses on information needs both in the near term and long term. The near-term objective is to ensure FERC has the information most necessary to perform its duties in restructured energy markets. FERC’s current information collection activities do not provide sufficient information to fully monitor electricity markets. First, the historical information FERC collects has deteriorated in quality, in part, because of declines in power plant information reporting. Specifically, FERC has found that some of the data fields that companies are required to fill out are left blank in some cases. To improve data quality, FERC officials stated that FERC recently improved its error checking capability for one of its recently developed electronic reports. 
In addition, some companies have aggregated sales transaction data on the forms in a way that makes it impossible to determine specific prices and quantities sold. Further, FERC’s coverage of power plant operational information has diminished because some plants formerly owned by utilities are now owned by nonutilities that are not required to report to FERC. Prior to restructuring, FERC specifically used the information reported on power plant fuel costs and quality as a factor in determining electricity rates. Under restructuring, FERC uses power plant information to understand power production and available capacity in specific markets and to understand what is normal or anomalous. According to FERC, power outages could be used as a strategy to reduce supply and thereby raise market prices. In June 2002, we reported that California power supplier behavior described in other studies we reviewed was consistent with the exercise of market power, because the prices charged did not reflect the marginal costs of generating additional megawatt-hours of electricity. Rather, the behavior reflected an ability to charge higher prices by waiting to commit the generation to a time when buyers were willing to pay more. Second, according to FERC and as we previously reported, FERC generally has no jurisdiction over power sales by federally owned entities, publicly owned utilities, and most cooperatively owned utilities. These nonjurisdictional utilities own 27 percent of the U.S. electric transmission system and are generally smaller than investor-owned utilities; however, they serve large areas of the country and meet about 25 percent of the nation’s demand for electricity. FERC officials note that they have little data and information on these areas of the country.
However, according to FERC officials, information about the operations of these nonjurisdictional entities is important for understanding their impact on generation and transmission activities in a given market. For example, the Tennessee Valley Authority operates a large power system and serves many nonjurisdictional entities across a large geographical area in the southeastern United States that lies between several FERC jurisdictional entities. According to FERC, the lack of detailed information about the operations of the Tennessee Valley Authority system limits its ability to assess the performance of the markets surrounding this network. Similarly, FERC officials noted that they also need information on electricity imports from neighboring countries, particularly Canada, because imported power participates in and affects prices in U.S. electricity markets. Third, according to FERC officials, they have limited up-to-the-minute market information needed to monitor electricity markets. FERC does not collect price information, for example, on up-to-the-minute electricity prices, fuel costs, and spot and futures contract prices. In June 2002, we reported that the Market Monitoring Center did not include detailed information about energy prices on “exempt” commercial markets, including the Intercontinental Exchange, a “multilateral” electronic trader that invites and matches buy and sell orders for other customers. According to FERC, it now has access to the Intercontinental Exchange but no longer has access to other Internet-based trading systems, such as UBS Warburg and Dynegydirect, both “bilateral” electronic traders, because they have ceased operations. Such systems have provided, and continue to provide, an important market for buying and selling both physical energy (electricity and gas products) and energy derivatives.
In commenting on a draft of this report, FERC stated that because its authority over the trade of electricity-based derivatives is ambiguous, its ability to collect information on this part of the market is limited. Additionally, FERC officials said that they have limited operational information, such as power plant outages and the availability of capacity on transmission lines. Price and transaction information, as well as operational information, is important for FERC to be able to detect changes in the market, determine the legitimacy of market outcomes, and, if needed, take corrective action. Finally, FERC officials told us that FERC cannot access other nonfederal information it needs to assess the reliability of the power grid and monitor overall electricity market performance. Specifically, NERC collects current electricity market information such as the operations of power plants, flows on key transmission lines, transmission between two parties, and system frequency (that is, a measure of how well the system is balancing electricity demand and supply), as well as other reliability information. FERC officials pointed out that because market performance and electricity system reliability are mutually dependent, such reliability information would help them determine whether market participants are behaving in an anticompetitive manner. While NERC officials agreed that this information might be valuable to FERC in determining whether power plant outages are justifiable, they stated that NERC is prohibited from disseminating such information without obtaining the companies’ permission, which companies are reluctant to grant due to the business-sensitive nature of the information. Further, NERC officials told us that the quality of their database is deteriorating because companies are increasingly concerned about sharing detailed information, for fear that competitors may gain an undue advantage.
In particular, many new market entrants to the electricity generating industry have not joined NERC or provided NERC with information about their plant operations. In commenting on a draft of this report, FERC stated that language in proposed legislation creating FERC jurisdiction over a designated electric reliability organization should assist in addressing issues related to access to NERC information. As we previously reported, FERC lacks authority to gather all the information it needs from all segments of wholesale electricity markets primarily because it derives much of its legislative authority from mandates that were enacted over 75 years ago—when the industry was structured as regulated monopolies and rates were based on the cost of service. Further, we reported, FERC lacks regulatory authority over all entities in wholesale electricity markets and is therefore unable to gather all of the information it needs to understand markets across the nation. Specifically, section 309 of the Federal Power Act provides FERC with the authority to prescribe the forms of all reports to be filed with it and the information to be reported. This authority does not generally extend to nonjurisdictional entities such as the power marketing administrations, other nonutilities, and NERC. For example, FERC has identified problems in getting data on individual power plant operations that it needs in order to evaluate the functioning of the transmission system. Information on nonjurisdictional entities is important because they also participate in the same electricity markets as jurisdictional entities and directly influence market activities, including prices. Senior FERC officials told us that, in general, FERC’s authority to collect information from nonjurisdictional market participants is predicated on developing a specific legal argument that the information supports a specific investigation, rather than for more general monitoring of market performance. 
Furthermore, regarding entities within FERC’s jurisdiction, FERC does not have specific authority to collect up-to-the-minute detailed information on market activities. While long-standing general authority may enable FERC to collect the information it needs, the lack of specific authority for obtaining this information may lead to challenges from market participants. In this same vein, FERC officials added that FERC also faces challenges related to the Paperwork Reduction Act in terms of the long lead time and the level of effort necessary to obtain OMB’s approval for additional information collections. Additionally, FERC’s legislative framework does not allow it to levy a meaningful range of penalties against companies that choose to intentionally underreport or misreport required information. Although the Federal Power Act allows FERC to levy criminal fines and civil penalties against market participants, these penalties are too small to discourage the underreporting or misreporting of information. Thus, FERC’s traditional legislative authority may no longer be in sync with today’s developing competitive electricity markets. In competitive energy markets, adequate and reliable information is important to FERC’s ability to fulfill its regulatory mandate and ensure that market participants are not engaging in anticompetitive behavior. In commenting on a draft of this report, FERC stated that market transparency provisions in proposed legislation prohibit the filing of false information and increase FERC’s criminal penalty authority for noncompliance. FERC increasingly relies on third-party information to help offset its limited authority to collect all of the information it needs to monitor electricity markets. OMOI subscribes to several energy-related services to increase its access to current markets and make key decisions related to market performance. (See app.
II for a complete listing and description of the information that third parties provide to FERC to assist its monitoring of electricity markets.) While these third-party sources fill some of FERC’s information gaps, they do not fully cover the information that FERC needs but lacks. For example, while Genscape measures power plant operations for some power plants, it does not have full coverage of the electricity system. Moreover, OMOI does not have access to a third-party source for price or quantity information on most bilateral transactions of wholesale electricity. In addition, FERC and others have raised concerns about the quality of the published price information these third parties provide. Specifically, FERC reported that published prices are subject to manipulation and cannot be independently validated. FERC surveyed reporting firms for both natural gas and electricity and found that these firms lacked formal verification or corroboration procedures and sufficient internal controls to ensure that information reported to them was reliable. FERC also found that these entities relied instead on prices or bid/ask quotes reported by traders and other market participants. As a result, FERC reported that this lack of verification allowed an opportunity for entities to deliberately misreport information in order to manipulate prices and/or volumes in electricity markets. In at least one recent instance, FERC used such third-party information as the basis for a key decision regarding California’s electricity market; the information, however, later turned out to be inaccurate. Specifically, in 2002, FERC instructed an administrative law judge, who was considering a request for refunds related to the western electricity crisis, to use a methodology that relied on third-party data for natural gas prices. The methodology, developed by FERC staff, was intended to set a proxy for the market prices that would have been produced had the western market been competitive.
The methodology estimated the cost of producing electricity for key generators based on operating costs, including fuel. Using this methodology, the judge ordered refunds of about $1.8 billion to the state of California. Subsequent to the order, FERC found, in August 2002, that the natural gas prices underlying the methodology had been subject to erroneous reporting and manipulation. In March 2003, FERC presented an alternative methodology for determining refunds, which is expected to substantially increase the previous award. In commenting on a draft of this report, FERC stated that it is working on options, based on staff recommendations and through a docket proceeding, to improve third-party data. FERC added that market transparency provisions in proposed legislation allow for establishing an electronic system that provides information about prices in electricity markets, in addition to the prohibition against filing false information and the increased criminal penalty authority noted in the previous section. In addition to the third-party information, FERC plans to rely extensively on RTOs and ISOs to assist in its monitoring efforts. FERC plans to use the market monitors, created as part of ISOs, to perform up-to-the-minute market monitoring activities and routinely collect information on their electricity markets. FERC officials stated that the market monitors have a better ability to understand and observe market changes, can react more quickly to changing market conditions, and can take stronger corrective action than FERC. In addition, as part of the rules sanctioning these entities, FERC officials said they expect to have access to all the data collected by the market monitors, which FERC views as considerable. According to FERC, it currently obtains timely information from some existing RTO and ISO monitors to help support its market oversight processes.
However, FERC officials said that, relative to the Paperwork Reduction Act, they are not sure whether the market monitors will be able to collect information on FERC’s behalf that FERC itself has not been authorized to collect. In commenting on a draft of this report, FERC stated that it is mindful of the potential burden imposed by additional information collections. FERC added that it has been inventive in developing ways to monitor markets, particularly restructured markets with RTOs and ISOs, using data generated as an integral part of market operations. Further, as we previously reported, several of the market monitors rely on different methods to evaluate market power, and there is a lack of uniformity in what information is collected, how it is analyzed, and what is reported, making cross-market comparisons difficult at this time. More importantly, FERC’s effort to expand the number and/or market coverage of RTOs, as well as to standardize electricity market rules, has met with resistance from the Congress, state commissions, and others. At present, according to FERC, two organizations have been approved as RTOs, while five others have been conditionally approved. Overall, even if these additional RTOs are fully approved, FERC’s coverage will not extend to markets outside of its jurisdiction. Thus, FERC’s reliance on RTOs to help it diminish data gaps, particularly in the next several years, will likely provide only limited help. In commenting on a draft of this report, FERC stated that it believes market transparency provisions in proposed legislation will address issues related to jurisdictional entities that do not participate in RTOs. Among the other agencies, EIA has been the most affected by restructuring, while the remaining agencies have been affected only slightly.
At EIA, restructuring has led to changes in the number of entities from which EIA collects data, the volume of data collected on electricity markets, and the way in which EIA uses the data to fulfill its mission of examining the energy sector. EIA officials recognized that restructuring could affect them and examined the potential implications in two reports. According to a senior EIA official, the first and most important effect of restructuring on EIA was its revision of its forms to require the same information from utilities and nonutilities. Historically, nonutilities were exempt from many of EIA’s reporting requirements. Adding these new entities has expanded EIA’s coverage by about 2,000 new sources of information and has nearly doubled the size of its database. The second effect of restructuring on EIA is an increase in the volume of information that it collects and provides, because restructuring has significantly expanded the role of wholesale markets in providing electricity. For example, EIA now posts electricity prices for several of the largest markets on its Web site and reports more detailed information about the aggregate activities of these markets in its publications. The third effect of restructuring on EIA has been to significantly alter the way that EIA examines energy sectors, and electricity in particular. To meet one of its missions of examining and forecasting energy consumption and use, EIA has had to revise its energy models to accommodate restructuring because of changes in the way that electricity is supplied and distributed. For example, in March 2003, EIA reported that it had reviewed and revised how it collects, estimates, and reports fuel use for facilities producing electricity. According to EIA, the review addressed inconsistent reporting of fuels for electric power by combined heat and power plants and changes in the electric power marketplace that had been inconsistently represented in various EIA survey forms and publications.
EIA regards these efforts as complex and substantial and expects them to continue as the electricity sector evolves. EIA has also encountered challenges, such as maintaining the quality of its information. The Director of EIA’s Electric Power Division said that the Secretary of Energy has made the quality of information one of the department’s top priorities. However, maintaining this quality is a challenge for EIA because restructuring has substantially increased the number of sources of information (especially nonutilities) at the same time that EIA has experienced substantial budget cuts. The Director estimated that there has been a 50 percent increase in the overall volume of data. In addition, the Director said that, while omission of information by companies responding to EIA’s data collection efforts is not a common problem, in the past some companies failed to answer a question about the delivered fuel price on EIA’s Form 423. The Director added that the companies’ decision not to disclose information about fuel prices could be attributed to the sensitive nature of this particular item. Restructuring has had little direct effect on SEC’s overall information collection activities. According to SEC officials, SEC continues to carry out its oversight of securities laws and its administration of PUHCA. However, the Congress is considering repealing or modifying PUHCA because the emergence of nonutilities means that utilities are no longer the sole source of electric energy. FERC and SEC officials acknowledge that since nonutilities are not covered by PUHCA, registered holding companies may engage in nonutility activities that are not regulated by the act. SEC has stated that it supports the repeal of PUHCA as long as repeal is accomplished in a way that gives FERC and state regulators sufficient authority to protect utility consumers.
FERC has stated that PUHCA, as it currently exists, may actually impede competitive markets and appropriate competitive market structures. Recent events, such as the collapse of Enron Corporation, have accelerated reforms affecting SEC that aim at improving the quality and reliability of financial information. SEC plays a vital role in ensuring that meaningful and intelligible information is disclosed to investors. Such disclosures are particularly important as the corporate structures of new and old electricity market participants continue to change. The Sarbanes-Oxley Act of 2002 established the legal framework to address some of the concerns related to corporate disclosure, accountability, and transparency.

Restructuring has not had significant effects on the collection of electricity information by the other agencies included in our review. Some agencies, such as the Federal Trade Commission and the Commodity Futures Trading Commission, may become more involved in collecting electricity information as competitive markets develop. For example, the number of electricity industry mergers has slowed, but should these mergers increase, the Department of Justice may need to increase its information collections for merger investigations accordingly. In addition, a Commodity Futures Trading Commission official initially told us that electricity futures trading had been discontinued because market participants found that electricity futures failed to provide an adequate hedge against intermittent price volatility. However, since our initial discussion, the New York Mercantile Exchange has introduced several new electricity contracts, and the Commodity Futures Trading Commission has resumed collecting information on these trades. Despite the more limited impacts of electricity restructuring on many of these agencies to date, some jurisdictional issues have been raised about their respective roles in helping to oversee electricity markets more generally.
Events such as the collapse of Enron Corporation bring to light the importance of clarifying jurisdiction across the federal government as restructuring progresses. As noted in a recent Senate Governmental Affairs report and memorandum, and other congressional hearings, both FERC and SEC have been questioned about their lack of diligence in following through on Enron’s activities—even though they had indications of improper conduct. The report commented that effective coordination between agencies prevents companies from exploiting the lack of oversight in areas where neither agency may have taken full responsibility—as Enron did with FERC and SEC in the case of its investments in wind farms. Officials at both FERC and SEC told us that they had performed their jobs and had no reason to check with the other agency about Enron’s actions. However, Enron took advantage of jurisdictional gaps between the two agencies that enabled it to earn tens of millions of dollars above what it would have otherwise earned from its wind farms. FERC and the Commodity Futures Trading Commission provide a second example of problems resulting from jurisdictional uncertainties. The Senate memorandum (noted previously) on FERC pointed out that FERC did not initially determine whether it had jurisdiction over on-line trading platforms such as Enron Online, although it was FERC’s expectation that these electronic trading platforms would become a dominant way to trade both electricity and gas. Furthermore, this memorandum concluded that both FERC and the Commodity Futures Trading Commission had some regulatory responsibility for on-line trading. Until Enron’s collapse, however, the two agencies did not participate in meaningful discussions to identify and coordinate their respective roles. 
Effective coordination would have helped to clarify the jurisdictional boundaries between FERC and the Commodity Futures Trading Commission regarding energy trading activities and products, including on-line trading, and to define the two agencies' respective monitoring responsibilities in these developing markets. Both agencies have recently taken steps to improve their coordination. Because these jurisdictional issues remain unresolved, however, it is unclear whether these problems are limited to a few examples or are potentially more widespread.

Restructuring has made the issue of confidentiality concerning electricity information more prominent. On the one hand, the need to access key information in evaluating the benefits and risks of restructuring is now greater. On the other hand, the sensitivity of this information, according to the companies asked to provide it, is also greater because of fears that other companies could use it to seek competitive advantages. This dilemma has led to controversy about the electricity information that is to be made publicly available and shared with other federal agencies. Both EIA and FERC have procedures restricting access to information and have modified these procedures as appropriate. For example, EIA faced considerable protest over its proposal to restrict access to the information that it collects but updated its procedures to resolve some of the concerns raised. At the same time, public disclosure laws and confidentiality pledges to protect information also affect information sharing and the collection of other key information at both federal and nonfederal levels. For example, the information NERC collects from the electricity industry remains unavailable to FERC and other federal agencies because of its sensitivity. In addition, the quality of the information being submitted to NERC has declined as companies have become increasingly concerned about providing it.
In addition to the confidentiality issues, the events of September 11, 2001, have heightened national security concerns about protecting the nation's energy infrastructure. In 2001, EIA faced a major controversy over the confidentiality of electricity information, which it was able to resolve. The controversy pitted certain companies, which feared potential competitive harm from the release of sensitive information, against agency and public interest in maintaining access to electricity data. Federal agencies and a private sector group provided extensive comments on EIA's proposal to broaden the information it considered confidential. EPA, for example, objected to EIA's proposed confidential treatment of fuel consumption, fuel quality, fuel type, thermal output, and retail sales. EPA officials noted that EPA makes extensive use of these data elements in monitoring emissions. In general, EPA maintained that EIA's proposal went far beyond what was reasonably necessary to protect companies from the release of sensitive data. The American Public Power Association, which represents the nation's 2,000 nonprofit, publicly owned electric utilities, described itself as "deeply troubled" by EIA's proposal. It stated that EIA had provided no evidence that the public availability of specific data items would harm the filing companies and no evidence on how EIA balanced the public's need for information against any potential harm to these companies. By contrast, the Edison Electric Institute, which represents shareholder-owned electric companies, cited potential harm to companies; for example, information in the hands of a competitor could allow that competitor to unfairly undercut another company's bid strategy. In response to these disagreements over confidentiality, EIA issued a policy statement that made two general changes to its procedures. First, it reported that some data elements that were not considered confidential in the past would now be treated as confidential.
Second, it reported that some data collected from unregulated companies that were formerly treated as confidential would now be made publicly available. Discussing the eventual resolution of the controversy, the Director of EIA's Electric Power Division told us that EIA adopted two strategies to balance public access against potential competitive harm. These strategies involve (1) requiring essentially the same information from all companies, including utilities and nonutilities, and (2) identifying appropriate time frames for retaining and releasing sensitive data. The details of the data remaining confidential are presented in appendix III.

FERC recognizes the need of utilities to compete in the electric market and understands their desire to keep confidential some of the information it collects through its forms. However, FERC's policy requires respondents who request confidentiality to show that the potential harm outweighs the need for public access to the information. According to FERC, the courts have clearly established, through a considerable body of case law, that the company (in this case, the electric utility) bears the obligation to prove that release of information would cause harm. FERC officials told us that Freedom of Information Act requirements raise concerns among utilities about FERC's ability to protect commercially sensitive information. The act requires FERC to disclose information to the public unless specific exemption categories are met, which, FERC officials told us, is often difficult to do. According to FERC officials, while FERC may be willing to share exempted information with state regulatory bodies, states have similar public disclosure laws that do not always guarantee their ability to protect this information. FERC officials added that RTOs will also face similar challenges in sharing commercially sensitive information.
While RTOs could benefit from sharing information about entrants from other markets who are interested in entering their own markets, protecting the confidentiality of this information will be an issue. In commenting on a draft of this report, FERC stated that proposed legislative language provides a clear confidentiality standard, exempting "from disclosure information FERC determines would, if disclosed, be detrimental to the operation of an effective market or jeopardize system security." Finally, FERC also faces challenges in creating ways to obtain and share information from Canada and Mexico, since those countries also affect U.S. electricity markets.

NERC is hesitant to share information with FERC that its members feel would cause them competitive harm if released in the public domain. According to a NERC official, companies are increasingly reluctant to provide commercially sensitive information, causing a decline in information quality. In response, NERC has pledged not to divulge information on a company-specific basis and will release it only in aggregate form, in hopes of getting the information it needs. NERC collects current electric market information, such as flows on key transmission lines, transmission between two parties, and system frequency (an indicator of how well the system is balanced), that FERC is interested in obtaining, but, according to both FERC and NERC officials, confidentiality pledges inhibit the sharing of this information.

Since September 11, 2001, the federal government has taken steps to protect the nation's critical infrastructures, including the energy infrastructure. FERC has taken steps to remove from the public domain information it considers critical to protecting the nation's power grid. Specifically, it has removed information such as oversized maps detailing the specifications of existing and proposed energy facilities that were once publicly available from its Internet site, public reference rooms, and databases.
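NERC's practice, described above, of releasing member data only in aggregate form can be sketched programmatically. The sketch below adds a small-cell suppression rule, a common disclosure-avoidance convention assumed here for illustration; the three-contributor threshold and the data values are not drawn from NERC's actual procedures.

```python
# Illustrative aggregation with a small-cell suppression rule: an
# aggregate is released only if enough companies contribute to it that
# no single company's figure can be inferred. The three-contributor
# threshold is an assumption for this sketch, not a NERC rule.

MIN_CONTRIBUTORS = 3

def aggregate_for_release(company_values):
    """Sum company-specific values, withholding thin aggregates."""
    if len(company_values) < MIN_CONTRIBUTORS:
        return None  # withhold: too few contributors to protect identities
    return sum(company_values.values())

region_a = {"Co1": 120.0, "Co2": 95.5, "Co3": 210.0}
region_b = {"Co4": 310.0}  # a single reporter would be identifiable

print(aggregate_for_release(region_a))  # releasable total
print(aggregate_for_release(region_b))  # suppressed
```

The design choice here mirrors the tension the report describes: the aggregate preserves some analytic value for regulators while the suppression rule honors the pledge not to divulge company-specific figures.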
For example, FERC removed the information it collects on Form 715, Annual Transmission Planning and Evaluation Report, from the public domain. Additionally, EIA removed power plant latitude and longitude information from the public domain. While steps have been taken to better protect information, federal officials at both FERC and EPA raised concerns about the increasing difficulty of accessing information on power plant locations and related data.

Given FERC's predominant role in overseeing evolving electricity markets, FERC needs information on a regular basis regarding reliability, supply and demand, transmission, purchases and sales of electricity commodities, and market participants; much of this needed information has not previously been collected. Consequently, FERC is currently missing some of these key pieces of information or is relying on third parties, such as energy news services, for related information to assist in meeting its market monitoring and oversight responsibilities. Without access to this key information, FERC will not be able to fully understand the performance of specific electricity markets across the country. In addition, FERC will be less prepared to identify potential market manipulation that may affect competitive markets. FERC's existing authority is not adequate to collect all the information it needs, resulting in these gaps in key information. Moreover, legislation does not allow FERC to levy meaningful criminal fines and civil penalties against market participants to ensure that companies report accurate and reliable information, further diminishing its ability to identify potential market manipulation. For these reasons, the Congress may need to make decisions regarding the scope of information collection at FERC and other agencies.
Given that effective oversight of evolving electricity markets requires the acquisition of, and access to, timely, reliable, and complete information, we recommend that the Chairman, FERC, (1) determine what information FERC needs, (2) describe the limitations resulting from not having this information, and (3) ask the Congress for sufficient authority to meet FERC's information collection needs and responsibilities. Additionally, we recommend that FERC consider the cost and potential reporting burden associated with additional information collection, since market participants will incur additional costs and burden hours, and, where possible, explore creative ways to obtain information.

We provided a draft of this report to FERC and DOE for their review and comment. In its written comments, FERC generally agreed with the report's conclusions, specifically that its authority to collect information has not kept pace with the changing electricity market and that its ability to penalize noncompliance is severely limited. Regarding our recommendation that FERC take action to resolve its information gaps, FERC commented that it is conducting an internal information assessment, the results of which will be provided at the end of 2003. This assessment should provide a first step toward implementing our recommendations. However, in a related point, FERC also noted that whatever information gaps exist on the supply side of the electricity market, much greater deficiencies exist on the demand side, which is largely beyond its jurisdiction but also important to understanding the entire market. FERC also noted that it must be mindful of the potential burden imposed by additional information collections, and it has been inventive in developing ways to monitor markets, particularly those operating under its restructuring rules.
FERC also provided several small corrections to the draft report language and added other clarifications that we incorporated into the draft where appropriate. The complete text of FERC's comments is included in appendix IV. In its written comments, DOE agreed that the report generally characterizes the current state of electricity data collection and dissemination at EIA accurately and that it provides a balanced set of recommendations on improving the timeliness of data dissemination in the electricity industry's restructured environment. DOE also commented on our characterization of EIA's mission and how EIA's information is used, and provided further clarification on the coverage of EIA and RUS information collections and on EIA's resolution of data quality issues on its Form 423. We incorporated EIA's suggested information in these areas, along with previously provided technical corrections, into the draft where appropriate. The complete text of DOE's written comments is included in appendix V.

To determine what electricity information is collected, used, and shared by key federal agencies in meeting their primary responsibilities, we first identified federal agencies using specific forms and form-like surveys for collecting electricity information. These agencies included FERC, EIA and Fossil Energy within DOE, RUS, SEC, and EPA. We obtained these forms and form-like surveys and analyzed their contents, as summarized in appendix I. We also identified third-party sources of information used by federal agencies, including the 13 companies identified in appendix II. We analyzed this third-party information through a review of Web-based materials and interviewed officials at Genscape, Edison Electric Institute, and NERC.
We also identified federal agencies that collect, or have collected, electricity information for investigations and interviewed officials at these agencies, including the Department of Justice, the Federal Trade Commission, and the Commodity Futures Trading Commission. For all federal agencies included in our review, we obtained information on their missions by examining mission statements on their Web sites. To understand how federal agencies use and share electricity information, we interviewed officials at the federal agencies mentioned above. To determine the effect of restructuring on federal agencies' collection, use, and sharing of this information, we focused primarily on FERC because it bears the main responsibility for monitoring electricity markets, is undergoing major organizational changes caused by restructuring, and has shown serious deficiencies in responding to restructuring. Within FERC, we met with officials from OMOI and from its Office of Markets, Tariffs, and Rates. To understand the gaps in FERC's electricity information resulting from restructuring, we interviewed officials at FERC and NERC, reviewed information from third-party sources, and identified limitations in federal authority contributing to these gaps. Although restructuring has affected other federal agencies to a lesser extent, we identified the relevant effects, if any, at these other agencies by interviewing officials and reviewing pertinent documents. Among these other agencies, EIA has been the most affected by restructuring. We examined specific impacts at EIA, including increases in the number of entities from which EIA collects data and in the volume of information collected. We also examined jurisdictional issues raised about FERC and SEC, and about FERC and the Commodity Futures Trading Commission.
To understand how restructuring has affected the way in which federal agencies share this information, we examined concerns about confidentiality, particularly as they related to EIA's development of its current confidentiality policy and FERC's lack of access to NERC information because of NERC's concerns about the potential sensitivity of the information. In addressing the second objective, we also relied on a broad range of our previously issued reports on electricity restructuring and FERC's oversight of electricity rates. We conducted our work from June 2002 to May 2003 in accordance with generally accepted government auditing standards.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, the Chairman of the Federal Energy Regulatory Commission, the Secretary of the Department of Energy, the Administrator of the Environmental Protection Agency, the Secretary of the Department of Agriculture, the Chairman of the Securities and Exchange Commission, the Attorney General of the United States, the Chairman of the Federal Trade Commission, the Chairman of the Commodity Futures Trading Commission, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix VI.

Used to monitor markets to ensure that rates are just and reasonable, and services are offered in a nondiscriminatory manner.
Generation, transmission, distribution, and sales of electric energy from major electric utilities and licensees subject to FERC jurisdiction. FERC is authorized to collect and record data to the extent it deems necessary and to prescribe rules and regulations concerning accounts, records, and memoranda. FERC may prescribe a system of accounts for jurisdictional companies and, after notice and opportunity for hearing, may determine the accounts in which particular outlays and receipts will be entered, changed, or credited. Due … of each year for the previous calendar year. The Department of Energy's (DOE) Energy Information Administration publishes Form 1 data in aggregate form. FERC's Chief Accountant uses the data in its audit program and for continuous review of the financial conditions of regulated companies. The Office of Markets, Tariffs, and Rates uses the data in rate proceedings and supply programs. The offices of Policy and General Counsel use the data in their programs. Data from schedules are used to compute annual charges assessed against public utilities. State commissions use the data to help satisfy their reporting requirements for public utilities and licensees subject to state jurisdiction. Same as above. Same as above. Same as above. Same as above. FPA sections 205 and 206, as amended by section 208 of the Public Utility Regulatory Policies Act (PURPA) (Public Law 95-617). FERC is authorized to collect basic cost and quality of fuel data at electric generating plants. Collects information on the cost and quality of fossil fuels delivered to electric generating plants. Cost, price, and quality of fuels for generating plants. FPA section 3 and sections 201 and 210 of PURPA. These statutes authorize FERC to encourage cogeneration and small power production and to prescribe such rules as necessary in order to carry out these statutory directives.
Filed by owners or operators of small power production or cogeneration facilities seeking status as a qualifying facility eligible for benefits under PURPA, including exemption from certain corporate, accounting, reporting, and rate regulation requirements; certain state laws; and, where applicable, regulation under FPA. Information related to the facility's ownership and technical specifications. Title II, section 211 of PURPA, which amended part III, section 305, of FPA. Section 305 defines monitoring and regulatory operations concerning interlocking directorate positions held by public utility personnel and possible conflicts of interest. Collects information from individual public utility directors and officers who hold interlocking directorates. Information on public utilities' interlocking directorates for possible conflicts of interest. Publicly available. No later than 45 days after the end of the report month. FERC uses the data to conduct authorized fuel reviews and rate investigations and to monitor changes and trends in the electric wholesale market. Other government agencies use the data to track the supply, disposition, and fuel prices on a regional and national basis and to conduct environmental assessments. Others use the data to assess market competitiveness. As needed. FERC uses the information to determine whether a facility meets the necessary requirements and is entitled to various PURPA benefits. Publicly available. On or before April 30th for each preceding year. FERC collects this information to monitor public utilities' interlocking directorates for possible conflicts of interest. FPA section 305, as amended by section 211 of PURPA. Information on the 20 largest purchasers of electric energy. FPA section 205(f), as amended by section 208 of PURPA. This section authorizes the interrogatory established in Form 580 to take place not less frequently than every 2 years.
Filed by jurisdictional public utilities or public utility holding companies engaged in the generation, transmission, and sale of electric power to report their 20 largest purchasers of electric energy. Lists customers and their business addresses if they were 1 of the top 20 largest purchasers of electric energy, measured in kilowatt hours sold, for purposes other than resale, during any of the 3 preceding calendar years. Collects information from jurisdictional public utilities that own or operate power plants generating 50 megawatts or greater capacity. Information on fuel cost and cost recovery practices under fuel adjustment clauses in cost-based rates. Publicly available. On March 1 of the year following the reporting period. Used to identify large purchasers of electric energy and possible conflicts of interest. Publicly available. Filed biennially on June 1st for the preceding calendar period. Used to review public utilities' fuel purchase and cost recovery practices under fuel adjustment clauses in cost-based rates. Used to evaluate fuel costs in individual rate filings, to supplement periodic utility audits, and to monitor changes and trends in the electric wholesale market. Used by DOE's EIA to study various aspects of coal, oil, and gas transportation rates. Used by electric market participants and the public to assess the electric marketplace during the transition to a competitive marketplace. FPA sections 202, 207, 210, 211, 212, and 213, as amended, and sections 4, 304, 309, and 311 of the same act. Collects information from any public utility or group of public utilities operating as a control area that has a peak load greater than 200 megawatts based on energy for load. The information collected allows FERC to analyze power system operations in the course of its regulatory functions.
The purpose of these analyses is to estimate the effect of changes in power system operations that result from the installation of a new generating unit or plant, transmission facilities, and energy transfers between systems and/or new points of interconnections. The analyses also serve to correlate rates and changes; assess reliability and other operating attributes in regulatory proceedings; monitor market trends and behaviors; and determine the competitive impacts of proposed mergers, acquisitions, and dispositions. Generating plants included in the reporting control area; control area monthly peak demand; control area net energy for load and peak demand sources by month; adjacent control area interconnections; control area scheduled and actual interchange; planning area demand and forecast summer and winter peak demand and annual net energy for load; and control area hourly system lambda data. Publicly available. On or before June 1 of each year for the preceding calendar year. Used to monitor control area planning hourly demand, forecast summer and winter peak demand, and annual net energy for load. FPA section 213(b), as amended by the Energy Policy Act (Public Law 102-486). Section 213(b) requires FERC to collect annually from transmitting utilities sufficient information about their transmission systems to inform potential transmission customers, state regulatory authorities, and the public of available transmission capacity and constraints. Provides information to potential transmission customers, FERC, state regulatory authorities, and the public of potential transmission capacity and known constraints. Potential transmission capacity and known constraints. FPA section 205(c). Provides contract and power sales data per Order 2001, issued on April 25, 2002.
Public utilities are required to electronically file Electric Quarterly Reports summarizing the contractual terms and conditions in their agreements for all jurisdictional services (including market-based power sales, cost-based power sales, and transmission service) and transaction information for short-term and long-term market-based power sales and cost-based power sales during the most recent calendar quarter. Lists all contracts in effect and all power sales made during the previous quarter. Data are now restricted and designated as critical energy infrastructure information. On or before April 1 of each year for the preceding calendar year. FERC uses the information to facilitate and resolve transmission disputes brought before it. State and federal regulatory agencies use the information as part of their oversight functions. Potential transmission customers use the information to determine transmission availability to or from wholesale electric power purchasers and sellers. Publicly available. For the period from January 1 through March 31, file by April 30; for the period from April 1 through June 30, file by July 31; for the period from July 1 through September 30, file by October 31; and for the period from October 1 through December 31, file by January 31. Information is available to the public in a variety of formats. It is used as an electronic repository of all jurisdictional contracts, to fulfill the FPA requirement to have all rates on file, and to provide price data for market oversight purposes. Federal Energy Administration Act (Public Law 93-275) and the DOE Organization Act (Public Law 95-91). These two laws require EIA to carry out a centralized, comprehensive, and unified energy information program. EIA is mandated to collect, evaluate, assemble, analyze, and disseminate information on energy resource reserves, production, demand, technology, and related economic and statistical information.
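The quarterly filing schedule above, under which each quarter's Electric Quarterly Report is due at the end of the month following the quarter, can be expressed programmatically. This sketch encodes only the deadlines stated in the text; it is not an official filing calendar, and the function name is an assumption for this example.

```python
from datetime import date

# Filing deadlines for the Electric Quarterly Report as stated in the
# text: Q1 (Jan 1-Mar 31) files by Apr 30, Q2 by Jul 31, Q3 by Oct 31,
# and Q4 by Jan 31 of the following year.

def eqr_deadline(txn_date):
    """Return the filing deadline for the quarter containing txn_date."""
    quarter = (txn_date.month - 1) // 3 + 1
    if quarter == 1:
        return date(txn_date.year, 4, 30)
    if quarter == 2:
        return date(txn_date.year, 7, 31)
    if quarter == 3:
        return date(txn_date.year, 10, 31)
    return date(txn_date.year + 1, 1, 31)  # Q4 files in the next year

print(eqr_deadline(date(2003, 2, 15)))  # 2003-04-30
print(eqr_deadline(date(2003, 11, 3)))  # 2004-01-31
```

A schedule encoded this way makes the fourth-quarter case explicit: its deadline falls in the calendar year after the reporting period, which is easy to miss when the deadlines are read as prose.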
Collects information on regional electricity supply and demand projections for a 5-year advance period and provides information on the transmission system and supporting facilities. This information is used to assess the adequacy of energy resources to meet near- and longer-term domestic demands. The information reported includes (1) peak demand and energy for the preceding year and 5 future years, (2) existing and planned generating capacity and the same for demand, (3) scheduled capacity purchases and sales, (4) bulk electric transmission system maps, and (5) existing and proposed transmission lines. Publicly available, except for information on plant location (longitude and latitude) and tested heat rate. Each NERC Regional Council should file by April 1, and, after review, NERC should file the Form EIA-411 by June 30. Used to monitor the current status and trends of the electric power industry and to evaluate the future of the industry. Primary publication—Electric Power Annual. EIA is required to provide company-specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. Same as above. Collects information on accounting, plant statistics, and transmission data.
The information reported includes (1) identification, (2) electric balance sheet, (3) electric income statement, (4) electric plant, (5) taxes, tax equivalents, contributions, and services during year, (6) sales of electricity for resale, (7) electric operation and maintenance expenses, (8) purchased power and power exchanges, (9) electric generating plant statistics, (10) existing transmission lines, and (11) transmission lines added within the last year. Same as above. Collects information for DOE to monitor electric utility system emergencies. The information reported includes the type of emergency, cause of incident, and actions taken. Publicly available, except for Schedule 9, lines 9 through 34, for unregulated entities. Accounting data (Schedules 1 through 8) of the Form EIA-412 must be submitted to EIA within 4 months following the end of the financial reporting year. All reports, including Schedules 9 through 11, for the given calendar year must be submitted by April 30. Used to compile statistics on the financial status of the industry and to develop EIA forecasting models. Primary publications—Electric Power Annual, DOE/EIA-0348; State Energy Price and Expenditure Report, DOE/EIA-0376; and Annual Energy Outlook, DOE/EIA-0383. Same as above. Publicly available, except for the information reported on Schedule 1, lines 4, 5, 6, 7, and 8. As needed. As needed. DOE uses the information as the basis for determining appropriate federal action to relieve an electrical energy supply emergency.
Primary publication—Electric Power Monthly, DOE/EIA-0226. Same as above. Collects information on cost and quality of fossil fuels delivered to U.S. electric plants. Delivered price of fuel by fossil fuel type and contract, contract type and end date, quality of fuel (heat content, sulfur and ash content), and volume delivered. Publicly available, except for fuel cost. Filing to be completed within 45 days of the close of the business month. Monthly/ annually. With the exception of a handful of state agency reports, the FERC-423 and the EIA-423 are the only timely public sources of information of the price of fuel delivered to electric generating plants. Public agencies and private analysts seeking to understand the current and historical fuel components of power prices and generating plant operating costs use the data widely. Data from this form and the FERC 423 appear in the EIA publications—Electric Power Monthly, Electric Power Annual, Monthly Energy Review, and Annual Energy Review. EIA is required to provide company-specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. Same as above. Collects information on the design and operations of organic-fueled or combustible, renewable, steam- electric plants, regardless of ownership status, which have a total existing or planned generator rating of 10 megawatts and above (excluding nuclear power plants). 
The information reported includes (1) identification, (2) plant configuration, (3) plant information (a) annual byproduct disposition and useful thermal output, (b) financial information, (4) boiler information (a) fuel consumption and quality, (b) air emission standards, (c) design parameters, (d) nitrogen oxide emission controls, (5) generator information, (6) cooling system information (a) annual operations, (b) design parameters, (7) flue gas particulate collector information, (8) flue gas desulfurization unit information (a) annual operations, (b) design parameters, and (9) stack and flue information— design parameters. Publicly available, except for information relating to plant locations (longitude and latitude). To be submitted no later than April 30 following the close of the reporting year. Data from this form appear in the EIA publications—Electric Power Annual, Annual Energy Review, and Carbon Dioxide Emissions from the Generation of Electric Power in the United States. EIA is required to provide company-specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. Same as above. Collects information on the retail sales and revenue from approximately 400 utilities and other energy service providers that have sales to end-user customers. Retail sales of electricity by end-user category, revenue, megawatt hours, and numbers of customers. Same as above. Collects information on the status of existing and planned power plants in the United States, including those scheduled for initial commercial operation within 5 years of filing this report. 
Also tracks planned upgrades to existing power plants. Generating unit name, ownership, operator, location, cogeneration status, and industry category if a cogenerator, prime mover type, nameplate and summer net generating capacity, initial commercial operating and retirement date, current unit status, tested heat rate, fuel sources, fuel delivery transportation mode, and FERC qualifying facility information for cogenerators. Publicly available, excluding energy service provider’s revenues, megawatt hours sold, and number of customers. Filing should be completed by the 10th working day, following the close of the business month. The EIA-826 is the only timely source of information on the price and volume of power sold to retail customers in the United States. Data from this form appear in the EIA publications—Electric Power Monthly, Electric Power Annual, Monthly Energy Review, and Annual Energy Review. EIA is required to provide company-specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. Publicly available, except for latitude and longitude of plant location and tested heat rate. On or before February 15 of the reporting calendar year. The EIA-860 is the primary source of information on the inventory of power plants in the United States. As such, it is widely used by public and private analysts interested in such topics as adequacy of power supplies and air pollution emissions. Data from this form appear in the EIA publications—Electric Power Annual and Annual Energy Review. 
EIA is required to provide company-specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. Same as above. Collects annual data from the universe of U.S. utilities and nonutility power producers on retail power sales and energy distribution. Collects information on system peak, net generation, energy balance, demand-side management, and the sales and distribution of electricity in the United States. Publicly available. By April 30, following the calendar year. The EIA-861 is the primary source of data for public and private analysts seeking information on electric power sales, revenues, and average prices. Data from this form appear in the EIA publications—Electric Power Monthly, Monthly Energy Review, Electric Power Annual, Annual Energy Outlook, Annual Energy Review, and Financial Statistics for Major U.S. Publicly Owned Electric Utilities. EIA is required to provide company-specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. EIA-906—Power Plant Report Same as above. Collects information on electric power generation, useful thermal output, fuel consumption, the heat content of fuels, and stocks of fossil fuels from electric power plants in the United States. 
Data on electric power generation, fuel consumption, useful thermal output, fuel heat contents, and stocks. Publicly available, excluding information on stocks at end of reporting period. For monthly respondents, submission is to be completed by the 10th working day, following the close of the month. For annual respondents, submission is to be completed by the last working day of January following the end of year. Monthly/ annually. The EIA-906 is the primary source of information on power plant generation, fuel consumption, and fuel stocks. Data are widely used by industry, state government agencies, trade associations, and federal agencies for energy analyses and policy- making decisions. Data from this form appear in the EIA publications—Electric Power Monthly, Electric Power Annual, Monthly Energy Review, Annual Energy, and Renewable Energy Annual. EIA is required to provide company- specific data to the Department of Justice, or to any other federal agency when requested for official use, which may include enforcement of federal law. The information may also be made available, upon request, to another component of DOE, to any committee of Congress, the General Accounting Office, or to congressional agencies authorized by law to receive such information. A court of competent jurisdiction may obtain this information in response to an order. According to EIA’s Electric Power Annual 2001 report, beginning with data collected from the year 2000, the Forms EIA 860A and 860B are obsolete. The infrastructure data collected on those forms are now collected on the Form EIA-860 and the monthly and annual versions of the Form EIA-906. Appendix II: Third-Party Data Sources Description FriedWire is an energy information provider, specializing in Web-based data collection and integration. Its Traffic Report is a real-time visual monitoring system covering electric power grid operations in North America. 
Its Powersurge is a real- time monitoring system for Northeastern and Canadian electric power markets. Its WestDesk provides similar information for western electric power markets. Its Analyst Edge is an on-line energy database created to support the needs of energy market analysts. Its Data Feed Service and Energy Data Warehouse provide updates of energy market information and historical information. Genscape provides current information related to generation and transmission of some fossil and nuclear power plants in the United States. Genscape guarantees accuracy of 90 percent or better based on its direct, physical monitoring of power plant outputs. Electric Power Research Institute has developed an on-line, Web-based display of power market transactions and includes information on schedules and congestion. The data are useful for transmission planning. Bloomberg’s PowerLines is a trade press publication providing electricity news. Bloomberg’s Professional Services provides current and historical data on regional electricity and gas markets, including spot and future prices, market commentary, plant outage information, and energy news. NERC provides real-time information on transmission constraints in the northeast. Its Flow Impacts Study Tool provides information about the real-time flow and expected flow for the next 36 hours for specific transactions. Open Access Technology International provides information on electricity transmission useful for scheduling and meeting electricity deliveries. EarthSat provides weather forecasts and historical weather data for selected cities. Energy Argus provides news concerning electricity and gas operations and prices. InterContinental Exchange provides information on over-the-counter energy transactions. PowerWorld Corporation’s Simulator is an interactive package designed to simulate high voltage power system operation. 
It gives an analyst a comprehensive look at issues surrounding electrical power flows in a transmission grid. Resource Data International Data Resources via Platt’s: (1) PowerDat (2) GasDat (3) NewGen (4) PowerMap (1) Historical data related to electric industry. (2) Historical data related to gas industry. (3) Database consists of new proposed generation. (4) Tool to generate maps, including transmission lines, gas pipelines, and generation. Through publications such as Megawatt Daily and Gas Daily, Platt’s provides daily energy news related to electric and gas issues. Cambridge Energy Research Associates provides various services related to regional electric, gas, and transmission issues. These include, for example, its North American Electric Power Advisory Service, which focuses on the future of the power sector and the forces affecting the market, prices, and emerging trends and technology. Other services include its North American Natural Gas Advisory Service, its Electric Transmission Advisory Service, and its Western North America Energy Advisory Service. In addition to the individual named above, Angelia Kelly, Dennis Carroll, Jose Martinez-Fabre, Jon Ludwigson, Jonathan McMurray, Frank Rusco, and Barbara Timmerman made key contributions to this report.
|
The ongoing transition (or restructuring) of electricity markets from regulated monopolies to competitive markets is one of the largest single industrial reorganizations in the history of the world. While information is becoming more critical for understanding how well restructuring is working, there are troubling indications that some market participants deliberately misreported information to manipulate prices. GAO was asked to describe (1) the electricity information collected, used, and shared by key federal agencies in meeting their primary responsibilities and (2) the effect of restructuring on these federal agencies' collection, use, and sharing of this information. Federal agencies collect, use, and share a wide variety of electricity-related information to carry out their respective missions. Federal agencies have three principal sources of information: (1) routine formal data collection instruments sent to industry participants to report on operations and other industry-related activities, (2) third parties such as energy news services that package federally collected information as well as collect original information some of which reflects current market conditions, and (3) individual companies under investigation. Agencies use the information that they collect to carry out their respective missions--ranging from Federal Energy Regulatory Commission's (FERC) monitoring of electricity markets to Energy Information Administration's dissemination of information about the electricity sector and Environmental Protection Agency's pollution monitoring. Agencies share electricity-related information through a variety of means, such as using the Internet to distribute published reports and access their databases, interagency meetings, and other means. In addition, most federally collected information is made publicly available, although it is sometimes subject to delayed release or released in aggregated form in order to protect business-sensitive information. 
Restructuring has substantially changed the collection, use, and sharing of electricity information at some agencies and has exposed gaps in the federal government's collection of this information. Restructuring has affected FERC dramatically by changing how FERC performs its mission of assuring just and reasonable prices and by shifting its focus from periodic review of cost information to monitoring current market conditions. To monitor these conditions, FERC needs to access market information on wholesale transactions; however, no federal agency, including FERC, has access to complete and timely information on electricity markets and market participants, exposing gaps in key information. Such information gaps exist primarily because FERC is limited in its authority to collect information for full and effective market oversight and it lacks specific authority to collect current information which may lead to market participants challenging these collection activities. For example, FERC authority does not generally extend to non-jurisdictional entities such as the power marketing administrations, other non-utilities, and North American Electric Reliability Council. As long as these information gaps persist, FERC will be unable to oversee electricity markets in a comprehensive manner. Restructuring's effects on the sharing of electricity information, coupled with recent national security concerns, have highlighted the sensitive nature of some information that federal agencies collect or need. Because of the importance of having timely, reliable, and complete information, we are recommending that FERC take action to resolve its information gaps. As part of this action, we are recommending that FERC present its findings to the Congress because information-related issues--raised by restructuring--may require Congressional action to ultimately resolve.
|
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the President’s budget, provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance, and reduce costs in the future. NASA’s mission encompasses human exploration and development of space, the advancement and communication of scientific knowledge, and research and development of aeronautics and space technologies. Its activities span a broad range of complex and technical endeavors—from investigating the composition, evaluation, and resources of Mars; to working with its international partners to complete and operate the International Space Station; to providing satellite and aircraft observations of earth for scientific and weather forecasting purposes; to developing new technologies designed to improve air flight safety. 
This section discusses our analysis of NASA’s performance in achieving its selected key outcomes and the strategies the agency has in place to achieve unmet performance goals and measures in the future. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which the agency provided assurance that the performance information it is reporting is credible. The performance report indicated that NASA made progress toward achieving its key outcome of expanding the scientific knowledge of the Earth system. NASA reported that all performance targets for this outcome were met, except two. However, NASA provided reasonable explanations for not meeting them in fiscal year 2000. For example, NASA reported that a lack of science quality spectroradiometer ocean color data prevented one of the targets from being fully achieved, and difficulties with an international partner prevented the other target, “Launch the NASA-National Center for Space Studies Jason-1 mission,” from being achieved. The NASA Advisory Council provided an independent evaluation of NASA’s fiscal year 2000 performance and the evaluation was included in the performance report. Its evaluation of performance related to this outcome was positive, indicating the failure to launch the Jason-1 spacecraft as the most significant shortfall. However, the Council concluded that many of the performance targets across all of NASA’s Enterprises were too vague and did not sufficiently relate to the actual programs being implemented. The Council did not identify which targets it viewed as vague in its report, but it recommended that the targets be better written and that NASA communicate to the public the reason the metric or program is important. In response to the Council’s recommendation, NASA added statements to performance targets that explain why the performance results are meaningful. 
Regarding data credibility, NASA disclosed the methods used for verifying and validating the data and the data sources for each performance target associated with this outcome in the fiscal year 2000 report. This is an important step toward providing confidence that performance results are credible. However, for many performance targets, NASA did not discuss limitations in the data and steps it would take to correct them. A recent NASA Office of Inspector General (OIG) report states that beginning with the fiscal year 2002 final performance plan, NASA will discuss anticipated data limitations. Also, in some cases, NASA did not clearly address how the data was validated. For example, the verification and validation narrative for the performance target, “Continue the ocean color time with 60 percent global coverage every four days—a 35 percent improvement over fiscal year 1999,” reads as follows. “The two-day coverage is required to account for the losses due to the tilt maneuver of the sensor and interorbit gaps. When clouds are taken into consideration, the coverage is reduced to 50-60 percent. The Sea-viewing Wide Field-of-view Sensor Project has increased the Global Area Coverage data beyond that expected by eight percent by collecting data to higher latitudes than planned on all orbits. However, pole-to-pole coverage each day is not possible since data at low Sun angles are not scientifically useful for ocean color research. The Moderate Resolution Imaging Spectroradiometer instrument aboard Terra is beginning to supply additional data to meet to meet the ocean color requirement.” “Salinities in the ocean are typically 33-32 PSU. In 1998, they got +/- 1 PSU, so the target for a 10x improvement would be 0.1 PSU. Due to pointing error on the plane L-Band, radiometer accuracy is adversely affected. From a satellite sensor this error is much reduced. 
The airborne results coupled with theoretical studies now show that a monthly average of sea surface salinity, at a resolution of 1degree latitude x 1 degree longitude can be produced with an accuracy of <0.1-0.2 PSU, which will meet the target. Important to this analysis is getting the sea surface temperature right, and also getting sea surface roughness from scatterometers. The monthly data product is suitable for ocean circulation studies. It will also be possible to produce a weekly data product, more for meteorological use.” Regarding its plans for achieving the two unmet targets for this outcome, NASA established reasonable strategies and time frames. It reported that progress was significant in one of the performance targets (footnote 5) and that full achievement of that target, anticipated in fiscal year 2001, is dependent on the “availability of valid moderate resolution imaging spectroradiometer data.” The report noted that the planned delivery of new processing software in November 2000 was expected to improve data quality to a level sufficient for an initial merging of SeaWiFs and MODIS oceans products. Achievement of the other target, “Launch the NASA- National Center for Space Studies Jason-1 mission,” is also anticipated in fiscal year 2001. Lastly, NASA credits the contribution of the other agencies for the successful achievement of performance targets related to this outcome. For example, for the achieved target, “Demonstrate the utility of spaceborne data for flood plain mapping with the Federal Emergency Management Agency (FEMA),” the report credits FEMA, the Army Corps of Engineers, and NASA for conducting cooperative demonstration projects to evaluate NASA and commercially provided digital topographic and image-based information products to re-map flood plains. The performance report indicated that the agency made some progress toward achieving its key outcome of expanding the commercial development of space. 
Over half of the performance targets that we assessed for this outcome were reported as having been met. For example, NASA reported that it achieved its targets to (1) promote privatization and commercialization of space shuttle payload operations through the transition of payload management functions by fiscal year 2000 and (2) establish up to two new commercial space centers. NASA provided clear and reasonable explanations for targets that were not met. For example, NASA reported that its performance target to promote privatization of space shuttle operations and reduce civil service requirements for operations by 20 percent in fiscal 2000 was not met following the agency’s decision to hire additional staff to ensure that safety would not be compromised for space shuttle missions. In August 2000, we reported that several internal NASA studies had shown that the agency’s space shuttle program’s workforce had been affected negatively by NASA’s downsizing, much of which occurred after 1995. NASA reported that its performance target to complete small payload focused technologies and select concepts for flight demonstration of a reusable first stage was not met because the agency decided to terminate this activity once it was clear that the cost objectives could not be met. Furthermore, the performance targets for the X-33 and X-34 programs— which sought to develop and demonstrate technologies needed for future reusable spacecraft in partnership with private industry—were not met since they were not competitively selected for additional funding. The NASA Advisory Council had concerns about this outcome. In particular, the Council noted that efforts planned under the new Space Launch Initiative appear to be elusive, at best. The Council also stated that this highlights the time-lapse between definition and evaluation of specific (and critical) performance targets, but the Council did not provide further elaboration. 
NASA did not provide strategies and time frames for achieving the unmet target of pursuing the commercial marketing of space shuttle payloads by working to allow the space flight operations contractor to target two reimbursable flights, one in fiscal year 2001 and one in fiscal year 2002. NASA stated the target remains feasible, but no reimbursable flights in fiscal 2001 and fiscal year 2002 are planned due to policy limitations impeding the marketing process. Regarding strategic human capital management, NASA set one related target to promote privatization of space shuttle operations and reduce civil service resource requirements for operations by 20 percent (from the fiscal year 1996 full-time equivalent levels) in fiscal year 2000. However, this target was not met since NASA had decided to end its downsizing efforts. The performance report indicated that NASA’s progress toward achieving the key outcome of deploying and operating the space station safely and cost-effectively was limited with respect to achievement of the agency’s planned performance targets. Since the key outcome is not included in the report as a specific goal or objective, we based our assessment of it on a related objective in the report. The related objective is to deploy and operate the space station to advance scientific, exploration, engineering, and commercial objectives. All of the performance targets for this objective are associated with a NASA launch, except one, and none was achieved as planned for fiscal year 2000. The explanations for not achieving these targets were reasonable. For the launch-dependent targets, NASA reported that a schedule slip caused by Russian Proton failures and Service Module launch delays slowed down the entire space station assembly sequence. This, in turn, prevented NASA from achieving the launch-dependent targets in fiscal year 2000. 
NASA also reported that nonachievement for the one remaining target, “Complete the production of the X-38 first space flight test article in preparation for a Shuttle test flight in 2001,” was due to budget reductions. The NASA Advisory Council’s report indicated that the ISS program had a “productive year” after the schedule slip caused by the Russian Proton rocket failures, but it did not provide further elaboration. In the fiscal year 2000 performance report, NASA identified reports that we and NASA’s OIG issued in fiscal year 2000 that addressed space station cost overruns and other issues. From the reports, NASA briefly summarized some of the concerns and corrective actions it agreed to take in relation to these issues, including space station cost growth. However, the agency did not set performance measures that more directly address cost growth, despite drastic increases in space station costs over the past several years and recent agency projections of potential cost overruns in excess of $4 billion. Furthermore, the issue of space station growth has been a long-standing problem. Although the NASA Advisory Council did not comment on the issue of cost-control measures for the space station in its evaluation of NASA’s performance, we continue to believe—as we have reported in the past—that NASA should develop performance measures that directly address space station cost-control issues, including risk mitigation and contingency planning activities. NASA’s strategies and time frames for achieving the unmet targets were reasonable. For example, as of April 2001, NASA had launched four of the missions that were delayed in fiscal year 2000. The other two unachieved missions are also anticipated for launch in 200l. For the one target that was affected by budget reductions, “Complete the production of the X-38 first space flight test article in preparation for a Shuttle test flight in 2001,” the report stated that it would be achieved in fiscal year 2001. 
The shuttle test flight that was planned for September 2001 was extended to mid-2002. For the selected key outcomes, this section describes major improvements or remaining weaknesses in NASA’s fiscal year 2000 performance report in comparison with its fiscal year 1999 report. It also discusses the degree to which the agency’s fiscal year 2000 report addresses concerns and recommendations by the Congress, GAO, NASA’s OIG, and others. NASA’s portrayal of its verification and validation efforts applicable to all outcomes is an improvement over the fiscal year 1999 report, and it provides greater confidence that the performance results are credible. In our review of NASA’s fiscal year 1999 report, we criticized NASA for not describing procedures used to verify and validate performance information and addressing data limitation issues in the data. Unlike the prior year report, the fiscal year 2000 report provides a description of the methods used for verifying and validating the data as well as the data sources for each performance target. However, NASA can further improve the usefulness of the performance information by highlighting the limitations in the data and including steps it will take to correct them. This can be done by adding a separate data limitations narrative for each performance target and by using terminology such as “none” where there are no limitations in the performance data. Additionally, for a few performance targets, the reliability and validity of the performance information would be strengthened if the related verification and validation narratives were conveyed in a manner that would more clearly (1) demonstrate actual validation of the performance and (2) identify the validation methods. This is particularly true for the Earth Science Enterprise. This can be done by writing the narratives in a more convincing tone and in less technical language to enhance understanding of the validation approach. 
NASA’s OIG report commended NASA for a significant improvement in the reporting of actual performance for fiscal year 2000. The OIG report found weaknesses in the accuracy and reliability of reported performance for 4 of 23 selected performance targets it reviewed in the fiscal year 2000 performance report. (NASA’s performance report includes a total of 211 performance targets.) The OIG report indicated that based on OIG recommendations, NASA management made the necessary corrections or clarifications before issuing the fiscal year 2000 report. For the most part, the performance targets continue to be output measures. Explanations are added to the performance targets as to why the performance results are meaningful. Generally, these explanations help the reader to better understand the linkage between the targets and results. In our review of NASA’s fiscal year 1999 performance report, we noted that the continued use of output measures burdens the agency by requiring it to continuously demonstrate the linkages between program efforts and results and to make improvements to strengthen such linkages. Moreover, in its evaluation of NASA’s fiscal year 2000 performance, the NASA Advisory Council asked NASA to portray its performance in a way that is more understandable to the public. As previously mentioned, NASA’s fiscal year 2000 report adds explanations of why the performance results are meaningful. NASA acknowledges that developing annual outcome-related performance metrics for multiyear research and development programs is particularly challenging since these programs may not be mature enough to deliver outcome results for several years. The report notes that the stated objectives of programs within the agency are long-term in character. However, NASA also states in the report that it would continue to strive to meet the challenge of developing science and technology metrics that are outcome-oriented and useful in demonstrating how these outcomes benefit the public.
Lastly, in our review of the fiscal year 1999 report, we suggested that NASA document in its performance plans and reports the rationale for newly established performance targets to clarify the reasons for such targets. We had noted that while many of NASA’s performance targets were new each year, there was no stated basis for the changes. In its fiscal year 2000 report, NASA provides the rationale that in many cases, new targets are developed in response to program changes or as a result of experience gained in the performance planning process. NASA also includes charts at the end of each Enterprise and Crosscutting Process section that provide a trend assessment when a fiscal year 2000 target has a corresponding fiscal year 1999 target. Newly developed targets that have no corresponding fiscal year 1999 target to facilitate an assessment are characterized as “new target.” GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. NASA’s performance report does not fully explain NASA’s progress in resolving human capital challenges. The report states that NASA has begun to focus on workforce renewal and revitalization, but it does not elaborate on strategies for undertaking this effort or address human capital challenges in other key areas. In addition, the report does not address the challenge of information security. Doing so is important for NASA. In 1999, we reported that the agency lacked an effective agencywide security program and that tests we conducted at one of NASA’s 10 field centers found that mission-critical information systems were vulnerable to unauthorized access. In addition, we have identified three major management challenges facing NASA: (1) correcting contract management weaknesses, (2) controlling International Space Station costs, and (3) effectively implementing the faster-better-cheaper approach to space exploration projects. 
We found that NASA’s report addresses the problems of contract management and implementing the faster-better-cheaper approach. With respect to contract management, it is important to note that until NASA’s Integrated Financial Management System—which is central to providing effective management and oversight over its procurement dollars—is operational, performance assessments relying on cost data may be incomplete, and full costing will be only partially implemented. As we discussed under outcomes, NASA did not address the challenge of controlling space station costs. As we reported in January 2001, the International Space Station program continues to face cost-control challenges. As with contract management, until NASA’s Integrated Financial Management System is operational, NASA may lack the cost information needed to control space station costs. As agreed, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from the Office of Management and Budget (OMB) for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of NASA’s operations and programs, our identification of best practices concerning performance planning and reporting, and our observations on NASA’s other GPRA-related efforts. We also discussed our review with NASA officials and with NASA’s OIG. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Committee on Governmental Affairs as important mission areas for NASA and do not reflect the outcomes for all of NASA’s programs or activities. The major management challenges confronting NASA, including the governmentwide high-risk areas of strategic human capital management and information security, were identified in our January 2001 performance and accountability series and high-risk update, and by NASA’s OIG in December 2000.
We did not independently verify the information contained in the performance report, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of NASA’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. In written comments on a draft of our report, NASA said that it had no issues with the report. NASA stated that as it develops its next performance plan, it is looking into decreasing its use of output metrics so as to focus more on outcomes. NASA also stated that it is reviewing its coverage of such areas as the International Space Station and information security in the performance plan. NASA’s comments are reproduced in appendix II. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the NASA Administrator; and the Director, Office of Management and Budget. Copies will also be made available to others on request. If you or your staff have any questions, please call me at (202) 512-4841. Key contributors to this report were Richard J. Herley, Shirley B. Johnson, Charles W. Malphurs, Cristina T. Chaplain, John de Ferrari, Diane G. Handley, and Fannie M. Bivins. The following table identifies the major management challenges confronting NASA, including the governmentwide high-risk areas of strategic human capital management and information security. The first column of the table lists the management challenges that we and/or NASA’s Office of Inspector General (OIG) have identified. The second column discusses the progress NASA made in resolving these challenges, as discussed in its fiscal year 2000 performance report.
We found that NASA’s performance report discusses the agency’s progress in resolving many of its challenges but does not discuss progress in resolving the governmentwide challenge of information security. In addition, the report does not address the major management challenge: The Need to Control International Space Station Development and Support Costs.
GAO reviewed the National Aeronautics and Space Administration's (NASA) fiscal year 2000 performance report to assess the agency's progress in achieving selected key outcomes important to NASA's mission. The selected key outcomes are to (1) expand scientific knowledge of the Earth system, (2) expand the commercial development of space, and (3) deploy and operate the International Space Station safely and cost effectively. NASA reported mixed progress in achieving these key outcomes. In general, NASA's strategies for achieving unmet performance targets for these outcomes are clear and reasonable. NASA achieved most targets related to expanding knowledge of the Earth system. However, its progress in other areas was more limited. NASA has made improvements in its fiscal year 2000 performance report in comparison to its fiscal year 1999 performance report. Specifically, NASA describes its verification and validation efforts and discloses its data sources for each performance target. NASA's report partially addressed the governmentwide high-risk area of strategic human capital management but not the area of information security. GAO has previously found that NASA lacks an effective agencywide security program. NASA's report only addressed two of the three critical management challenges: (1) correcting weaknesses in contract management and (2) effectively implementing the faster, better, cheaper approach to space exploration projects. It did not address the challenge of controlling space station costs.
The Coal Act established beneficiary eligibility requirements, a standard for covered benefits, and separate boards of trustees to oversee the CBF and the 1992 Benefit Plan. For both funds, the act requires coal companies to pay premiums for beneficiaries and their dependents, but the annual premium amount, the method for adjusting the premium each year, and other financing arrangements are quite different for each fund. Since the Funds were established in 1993, coal companies have challenged several provisions of the law. Court decisions in favor of former employers have reduced the premium contributions paid by the companies to the CBF. Although the CBF’s financing was originally expected to be adequate, the CBF has incurred an annual operating deficit in each year since 1997, prompting Congress to make special appropriations in 1999 and 2000 to maintain its solvency. In contrast, the 1992 Benefit Plan has had an annual operating deficit in only one year (2000) since its inception. The Coal Act limited coverage under the Funds to retired coal miners, their spouses and dependents who were eligible for benefits under former UMWA retiree benefit plans. There were approximately 115,000 beneficiaries in 1993. The number of beneficiaries has declined each year as individuals died and dependent minors reached 22 years of age and no longer qualified for coverage (the current population is declining by approximately 9 percent per year). In 2001, the Funds provided health benefits to about 61,000 beneficiaries. Approximately 70 percent of beneficiaries are female and the median age is over 78. Most of the Funds’ beneficiaries are eligible for Medicare (89 percent) and others are from 55 to 64 years of age and nearing eligibility (7 percent). Most of the Funds’ beneficiaries (62 percent) live in rural or nonmetropolitan urban areas. 
More than three quarters of the beneficiaries live in five states: West Virginia (32 percent), Pennsylvania (19 percent), Kentucky (12 percent), Virginia (8 percent), and Ohio (6 percent). In 2000, the median income of the Funds’ beneficiaries ($17,100) was similar to the median income of all Medicare beneficiaries ($18,000). The Coal Act specified that “to the maximum extent feasible,” the Funds’ coverage be “substantially the same as” the coverage provided under the UMWA retiree health plans they replaced, provided that premium income is sufficient to cover payment rates to providers. Thus, the Funds’ benefit packages reflect the outcome of prior agreements between UMWA and coal companies. The benefits include coverage for inpatient and outpatient hospital care, physician services, prescription drugs, home health services, SNF care, mental health care, and durable medical equipment such as ventilators and wheelchairs. All of the Funds’ beneficiaries receive the same package of benefits regardless of their entitlement status (retiree, spouse, or dependent) or their eligibility for Medicare. For Medicare-eligible beneficiaries, the Funds pay Medicare’s required cost sharing (coinsurance, copayments, and deductibles) in addition to the cost of services included in the Funds’ benefit packages but not covered by Medicare, such as outpatient prescription drugs. Except for required copayments, the Funds pay the entire cost of covered services provided to beneficiaries who are not eligible for Medicare. There are separate boards of trustees for the CBF and for the 1992 Benefit Plan. The Coal Act stipulates that the CBF board consist of one individual designated by the Bituminous Coal Operators Association (BCOA) to represent employers in the coal mining industry, one individual jointly designated by the three employers with the greatest number of assigned beneficiaries, two individuals designated by UMWA, and three persons selected by the other board members. 
UMWA and BCOA each appoint two members to the board of the 1992 Benefit Plan. Some individuals serve as trustees for both the CBF and the 1992 Benefit Plan. The Coal Act established the Funds’ initial and ongoing financing structures. Both funds receive annual revenues from coal company premiums and Medicare payments. However, the CBF also received an initial transfer of assets from the 1950 UMWA Pension Plan, and has received some of the accumulated interest from the Abandoned Mine Reclamation fund (AML) since 1996. Together, these revenues pay for health care expenses and the associated administrative costs of the health plans, which include the cost of third-party contracts for claims processing and utilization review, general overhead, and legal representation in lawsuits brought by and against the Funds. The Coal Act requires certain coal and other companies to pay premiums on behalf of beneficiaries who are covered by the 1992 Benefit Plan or the CBF. However, the 1992 Benefit Plan and the CBF differ in how the annual premium amount is determined and the extent to which coal companies are responsible for beneficiaries. For the 1992 Benefit Plan, the Coal Act allows the premiums to be adjusted annually to cover changes in the cost of providing benefits. The trustees have historically set the premiums so that revenues will meet projected annual expenditures. Thus, premium adjustments reflect changes in medical prices or beneficiaries’ use of medical services. For 2002, the annual premium was about $4,437, about 63 percent higher than the CBF annual premium. The Coal Act assigns financial responsibility for paying premiums to each eligible retiree’s most recent coal industry employer.
If an employer has gone out of business, or the premium cannot otherwise be collected, the cost of affected 1992 Benefit Plan beneficiaries is shared by other coal companies that were signatories to a prior agreement between the industry and UMWA and that have either current or potentially eligible beneficiaries under the 1992 Benefit Plan. For the CBF, the Coal Act specifies a method for determining the premium to be paid by a company for each of its retirees and eligible dependents, and how the premium is updated. The premium is based on the cost of providing benefits under the UMWA’s retiree health plan during the period from July 1, 1991, through June 30, 1992. It is increased each year by the percentage change in general medical prices as measured by the medical component of the consumer price index. In 2002, the annual premium was about $2,725. The Social Security Administration (SSA) was charged with determining which company is financially responsible for each CBF beneficiary. In some cases, SSA was not able to assign a beneficiary to a responsible company. This occurred, for example, when a beneficiary’s former employer had gone out of business. In 2001, about 71 percent of CBF beneficiaries were assigned to companies that were responsible for paying premiums on their behalf. The CBF did not receive premium payments from coal companies or their successors for the 29 percent of beneficiaries who were unassigned. The Coal Act allows for transfers of accumulated interest from the AML, a federal fund financed by levies on coal extraction, to cover the projected costs of the CBF’s unassigned beneficiaries. Since 1996, transfers of interest from the AML to the CBF have helped to pay for costs associated with unassigned beneficiaries. In 1999 and 2000, Congress made special appropriations to keep the CBF solvent. The AML moneys have not been used to support the 1992 Benefit Plan.
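The contrast between the two premium formulas can be illustrated with a minimal sketch. All dollar amounts and rates below are hypothetical; only the structure of each calculation follows the act as described above.

```python
def plan_1992_premium(projected_expenditures, beneficiaries):
    """1992 Benefit Plan approach: trustees set premiums so that
    revenues meet projected annual expenditures, which reflect both
    medical prices and beneficiaries' use of services."""
    return projected_expenditures / beneficiaries

def cbf_premium(base_period_cost, medical_cpi_changes):
    """CBF approach: the 1991-92 base-period cost per beneficiary,
    escalated each year only by the percentage change in the medical
    component of the consumer price index (no utilization adjustment)."""
    premium = base_period_cost
    for pct_change in medical_cpi_changes:
        premium *= 1 + pct_change / 100
    return premium

# Hypothetical illustration: a $2,000 base premium indexed by two years
# of 3.6 percent medical inflation.
indexed = cbf_premium(2_000, [3.6, 3.6])
```

Because the CBF formula ignores changes in utilization, its premium falls behind actual per capita costs whenever use of services grows, which is the gap described later in this report.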
The Funds are participants in a Medicare demonstration project that places them at financial risk for the cost of Medicare-covered services delivered to eligible beneficiaries. The extent of the Funds’ financial risk varies by type of service. The Funds assume partial risk for the cost of Medicare’s part A benefits, which include coverage for inpatient hospital services and skilled nursing facility care. Annual spending for these services is compared to an expenditure target. If spending is less than the targeted amount, the difference is shared between the Funds and Medicare according to a predetermined formula. The same formula specifies how the cost of any spending in excess of the targeted amount is to be shared. The Funds assume full financial risk for the cost of Medicare-covered part B benefits, which cover physician, hospital outpatient, and certain other services. Medicare pays the Funds a fixed monthly payment per beneficiary, known as a capitation payment, that is projected to cover the cost of these services. If the Funds’ spending on these services for eligible beneficiaries is less than Medicare’s capitation payments, the Funds may retain the difference. However, the Funds are financially responsible for any spending in excess of Medicare’s capitation payments. In recent years, the Funds spent less on Medicare-covered services than the combined total of the annual expenditure target and capitation payments from Medicare. In 1999, for example, this difference amounted to approximately $16 million, of which $4.4 million was retained by the Funds and $11.6 million was retained by Medicare. The Funds can use these retained moneys to help pay for services and items not covered by Medicare, such as outpatient prescription drugs. On July 1, 2001, the Centers for Medicare and Medicaid Services (CMS) renewed the demonstration project for an additional 3 years.
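The two risk arrangements described above can be sketched as follows. The 50/50 part A split is purely illustrative, since the report states only that a predetermined formula applies; all dollar figures are hypothetical.

```python
def part_a_settlement(actual, target, funds_share=0.5):
    """Partial risk for part A: the gap between the expenditure target
    and actual spending is split between the Funds and Medicare. The
    same formula applies whether spending comes in under the target
    (savings) or over it (losses). The 0.5 share is hypothetical."""
    return (target - actual) * funds_share  # positive = Funds' share of savings

def part_b_settlement(actual, capitation_payments):
    """Full risk for part B: the Funds keep any amount under Medicare's
    capitation payments and absorb any excess."""
    return capitation_payments - actual

# Hypothetical annual figures, in millions of dollars:
funds_result = part_a_settlement(actual=92.0, target=100.0) + \
               part_b_settlement(actual=47.0, capitation_payments=50.0)
```

The asymmetry is the key design point: under part A both savings and overruns are shared, while under part B the Funds bear the entire variance around the capitation payments.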
At the same time, CMS agreed to include a new component in the demonstration project that will provide the Funds with additional revenue to help cover the cost of outpatient prescription drugs. Under the terms of the new demonstration component, Medicare will pay the Funds an amount equal to 27 percent of their expenditures on outpatient prescription drugs for Medicare-eligible beneficiaries. CMS estimates that the new demonstration component will result in an additional $135 million in Medicare payments to the Funds during the 3-year period. Court decisions in several lawsuits brought by coal companies have reduced the premium revenues available to the CBF and contributed to the financing challenge it faces. The cost of legal representation has also increased the CBF’s annual administrative costs. Since 1992, companies have filed over 50 lawsuits challenging specific aspects of the Coal Act’s implementation. One lawsuit challenged SSA’s calculation of the initial premium rate. As a result of the court decision in that case, premiums charged to companies were reduced by approximately 10 percent. In other lawsuits, companies have challenged some of SSA’s beneficiary assignment decisions. The effect of one Supreme Court decision was to reduce companies’ financial responsibilities, thereby increasing the number of unassigned beneficiaries. Another case changed the status of several thousand beneficiaries from assigned to unassigned. The CBF will receive no further premiums from coal companies for the living beneficiaries who are now unassigned as a result of these cases, and transfers from the AML will have to increase to cover the health care costs of these additional unassigned beneficiaries. Furthermore, the CBF will need to refund the premiums it previously collected on behalf of any affected beneficiaries.
The rise in health care expenditures during the 1990s, which prompted many private employers to reduce the health insurance benefits they provided to their employees or to require larger contributions from beneficiaries, also affected the expenditures of the Funds. From 1994 through 2000, the per capita cost of the CBF’s beneficiaries rose by 53 percent, an average annual increase of 7.3 percent, and the per capita costs of the 1992 Benefit Plan beneficiaries, who tend to be younger than CBF beneficiaries, increased by 28 percent, an average annual increase of 4.2 percent. Part of the rise in cost was due to higher medical prices. However, overall increases in the use of medical services and increases in the use of outpatient prescription drugs and other expensive services also pushed up per beneficiary costs. Although Medicare per capita costs rose by 26 percent during this period, in part due to rising utilization, the trend may have been magnified in the CBF because it serves a closed, and therefore aging, population. Per capita costs would be expected to grow faster among CBF beneficiaries relative to Medicare beneficiaries because older individuals tend to use more medical services than younger individuals and because the cost of outpatient prescription drugs, which are not covered by Medicare, has risen faster than other components of health care spending during this period. Unlike premiums in the 1992 Benefit Plan, CBF premiums have not kept pace with increases in the cost of services not covered by Medicare. The CBF premium update adjustment specified in the Coal Act reflects only changes in medical prices, which rose at an average annual rate of 3.6 percent from 1994 to 2000, while per capita spending increased at twice that rate.
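The reported average annual increases follow from the cumulative figures by compound growth arithmetic, as this short check shows (1994 through 2000 spans six annual increases):

```python
def avg_annual_growth(total_pct_increase, years):
    """Compound average annual rate implied by a cumulative percentage
    increase over a period: (1 + total) ** (1 / years) - 1."""
    return ((1 + total_pct_increase / 100) ** (1 / years) - 1) * 100

cbf_rate = avg_annual_growth(53, 6)    # CBF: ~7.3 percent per year
plan_rate = avg_annual_growth(28, 6)   # 1992 Benefit Plan: ~4.2 percent per year
```

The 7.3 percent CBF figure is also roughly double the 3.6 percent average annual medical price inflation cited above, which is the gap the statutory premium update cannot close.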
To date, Medicare payments have been sufficient to cover the cost of providing Medicare-covered services in both the CBF and the 1992 Benefit Plan because annual updates to Medicare’s payments reflect underlying changes in both prices and use of services. Similarly, AML funding for the non-Medicare costs of the CBF’s unassigned beneficiaries is based on projected costs and takes into account expected changes in both utilization and prices. In four areas—premium contributions, annual deductible, the cap on beneficiary out-of-pocket expenses, and coverage for SNF care—the Funds’ benefits are more generous than those benefits typically offered to retirees and workers by major manufacturing companies or to unionized hourly workforces in other companies. In addition, most aspects of the Funds’ outpatient prescription drug coverage are more generous than the coverage provided by other benefit plans. However, many features of the Funds’ health plans are similar to those offered in the comparison plans. In particular, the Funds’ coverage for hospital and physician services, which account for the majority of health care spending, is comparable to the coverage provided by the other plans. (Table 1 compares selected benefits of the Funds’ plans with those in plans offered to workers in manufacturing companies and to unionized hourly workers.) Eligibility requirements for retiree health plan coverage by the Funds are similar to those of other manufacturing employers. The Funds’ beneficiaries can qualify for retiree health benefits at age 62 with 5 years of service, or at age 55 with 10 years of service. Most retiree plans require a similar combination of minimum age and years of service to qualify for retiree health benefits. Retiree premium contribution. The Funds’ beneficiaries do not pay a premium beyond that required for Medicare part B, the optional part of Medicare. 
According to a study by Hewitt Associates, 61 percent of unionized companies require retired unionized hourly workers to pay a health insurance premium. A related study found that more than 92 percent of major manufacturing companies require retirees from salaried jobs to pay a health insurance premium. Deductible. The Funds’ beneficiaries are not responsible for an annual deductible. Beginning with the first covered service used, the Funds pay all but the copayment. In contrast, the average annual deductible for workers in large manufacturing companies is more than $260 for individuals and more than $615 for families. Cap on beneficiary out-of-pocket expenses. The Funds’ beneficiaries are responsible for copayments on each service used, up to an annual amount of $100 per family, excluding prescription drugs. Additional out-of-pocket expenses for covered prescription drugs are capped at $50 per family per year. The total cap of $150 is substantially less than the median cap of over $1,750 in plans offered to other unionized hourly workers. SNF coverage. The Funds’ beneficiaries are eligible for SNF care with no cost-sharing requirement and no limit on the number of covered days. In contrast, most employer-sponsored retiree plans do not offer SNF care. Those that do typically restrict the number of days covered, require cost sharing, or both. Outpatient prescription drug benefit. The Funds’ beneficiaries pay a $5 copayment per prescription and their annual out-of-pocket costs for covered prescription drugs are capped at $50. In contrast, many plans offered by manufacturing companies do not have deductibles but require beneficiaries to pay higher cost-sharing requirements with no cap on out-of-pocket costs. (See table 2.)
Some plans require beneficiaries to pay 20 percent of the cost of each prescription while others use multitiered copayment schedules that may, for example, require $5 for generic drugs, $10 to $15 for brand name drugs included in the health plan’s formulary, and $20 or more for nonformulary brand name drugs. Furthermore, 14 of the 17 companies we contacted that cover prescription drugs do not cap retirees’ out-of-pocket costs for outpatient prescription drugs. However, the Funds’ prescription drug benefit is more restrictive than those of some other retiree benefit plans, in that it generally limits coverage to generic versions of prescription drugs when generic versions are available. The Funds pay the entire cost of a drug, with the exception of the copayment, if a beneficiary uses a generic version of a prescription drug when one is available, unless his or her physician submits a written justification specifying that a particular brand is necessary. If the request is approved, the beneficiary is not charged an additional amount for the brand name product. Typically, about 40 such requests are received each month and about 30 percent of them are approved. Without approval, the Funds’ beneficiaries who use brand name drugs instead of generic equivalents, or who use off-formulary brand names instead of ones included on the formulary, must pay the full difference in price between the preferred and nonpreferred drug. This amount does not count toward the beneficiary’s $50 annual cap on prescription drug expenditures. Only 5 of the 17 companies we contacted that cover prescription drugs have similar mandatory generic drug use policies. The average annual health care cost of the Funds’ beneficiaries is approximately 29 percent higher than the average cost of demographically similar Medicare retirees with employer-provided insurance. The Funds’ beneficiaries also tend to use more health care services than Medicare beneficiaries of the same age and sex. 
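A minimal sketch of the mandatory-generic cost sharing described above; the drug prices are hypothetical, and the logic reflects only the policy as summarized in this report:

```python
def funds_drug_cost(copay, brand_price, generic_price,
                    generic_available, chose_brand, brand_approved):
    """Beneficiary cost per prescription under the Funds' policy: a
    flat copay, plus the full brand/generic price difference when a
    beneficiary takes a brand name drug without an approved physician
    justification."""
    cost = copay
    if generic_available and chose_brand and not brand_approved:
        # Per the report, the price difference does not count toward
        # the $50 annual cap on prescription drug expenditures.
        cost += brand_price - generic_price
    return cost

# Hypothetical prices: a $60 brand with a $20 generic equivalent and
# the Funds' $5 copay.
unapproved_brand = funds_drug_cost(5, 60, 20, True, True, False)
approved_brand = funds_drug_cost(5, 60, 20, True, True, True)
```

Under these assumed prices the beneficiary pays $45 for an unapproved brand but only the $5 copay once a physician's written justification is approved, which is the incentive structure the mandatory-generic policy creates.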
The Funds’ beneficiaries appear to be in relatively poorer health, which may explain the differences in cost and service use. In 1999, the Funds spent an average of $9,732 on each beneficiary who was eligible for Medicare. This was $2,163, or 29 percent, higher than the estimated average health care cost of Medicare beneficiaries who live in the same counties where the Funds’ beneficiaries live, have similar demographic characteristics, and have employer-provided supplemental insurance. (See figure 1.) Approximately $1,345 (62 percent) of the $2,163 estimated cost differential is associated with increased use of Medicare-covered services while the remaining $818 (38 percent) is associated with additional benefits, such as prescription drug coverage, that are covered by the Funds. The beneficiaries of the Funds who are eligible for Medicare generally use more health care services than do similar Medicare beneficiaries nationwide. In 1999, the beneficiaries of the Funds had 22 percent more physician office visits, 51 percent more days in SNFs, 91 percent more days in the hospital, and 55 percent more days in hospice care than the national average for Medicare beneficiaries of the same age and sex. However, the Funds’ beneficiaries’ use of home health care was substantially below the average home health utilization rate among demographically similar Medicare beneficiaries. The health status of the Funds’ beneficiaries may explain some of the observed differences in health care costs and utilization. In 1999, the average beneficiary in the Funds reported his or her health status as fair or good. That same year, the average Medicare beneficiary with similar demographic characteristics reported his or her health status as good or very good. Several studies have found that individuals who report poorer health tend to use substantially more services than individuals who report better health.
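The decomposition of the 1999 cost differential reported above is straightforward arithmetic, reproduced here as a check (all figures are from the report; percentages rounded as in the text):

```python
# Average 1999 Funds cost per Medicare-eligible beneficiary and the
# excess over demographically similar Medicare retirees with
# employer-provided supplemental insurance.
funds_cost = 9_732
differential = 2_163
comparison_cost = funds_cost - differential

# Portion of the differential tied to greater use of Medicare-covered
# services; the remainder reflects additional Funds benefits such as
# outpatient prescription drug coverage.
use_of_medicare_services = 1_345
additional_benefits = differential - use_of_medicare_services

pct_higher = round(differential / comparison_cost * 100)        # ~29 percent
pct_use = round(use_of_medicare_services / differential * 100)  # ~62 percent
```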
Thus, it is likely that some of the higher costs and utilization associated with the Funds’ beneficiaries is a result of their relatively poorer health. The Funds’ low cost-sharing requirements provide few financial barriers to care, which may also contribute to the cost differential. However, we cannot determine how much of the cost and utilization difference is attributable to health status differences, local practice patterns, or differences in benefit packages and cost sharing arrangements. The Funds’ trustees have stated that they are firmly committed to preserving the “benefits that were promised and guaranteed” to the retired miners and therefore their cost control efforts largely focus on making the Funds a more efficient manager and prudent purchaser of health care services. While many private employers have responded to rising health care costs by requiring their beneficiaries to contribute more to the cost of health insurance, either through higher premiums or increased copayments and deductibles, the trustees have chosen to make relatively few changes that would affect the Funds’ beneficiaries’ out-of-pocket expenses. According to the Funds’ representatives, the trustees have tried to deliver services more efficiently and negotiate lower prices from providers and suppliers. The Funds’ efficiency initiatives include a disease and case management program and the management of medical service use through prepayment claims and utilization review. Beneficiaries with health conditions such as diabetes or congestive heart failure receive care coordinated by the Funds’ disease management program. To help prevent unnecessary spending, the third-party administrator that processes the Funds’ claims reviews billing patterns to identify potential billing abuses or inappropriate payments and has also instituted other program integrity safeguards. 
The Funds’ efforts at being a prudent purchaser of care include a competitive bidding program for durable medical equipment suppliers, a range of initiatives designed to help control spending for prescription drugs, and arrangements with hospital and physician providers to accept Medicare rates as payments in full for all beneficiaries, including those who are not eligible for Medicare. The Funds have solicited competitive bids for durable medical equipment in an effort to obtain better pricing and have reduced the number of suppliers nationwide from several hundred to six. The Funds’ PBM, which administers the prescription drug benefit, has established a formulary, mandated the use of generic drugs when available, implemented a preferred product program, negotiated discounts, and initiated mail order pharmacy services. The Funds claim that these cost control efforts collectively have achieved millions of dollars in savings per year. The Funds’ officials have tried to maintain the established level of benefits and cost sharing for their beneficiaries even while health care costs have risen. For example, neither the copayments nor the cap on out-of-pocket expenditures for the Funds’ beneficiaries have been adjusted for inflation or otherwise modified since they were established. The Funds’ beneficiaries face no cost sharing after they reach their annual $100 cap on out-of-pocket expenses for covered services ($150 including outpatient prescription drugs). In contrast, other employers have reduced coverage for prescription drugs or other benefits, shifted retirees into managed care plans, or stopped offering retiree health benefits altogether in response to recent health care cost increases. From 1994 through 2000, the per capita health care costs of the CBF’s beneficiaries increased by 53 percent while those of the 1992 Benefit Plan’s beneficiaries increased by 28 percent. The Funds’ officials have taken steps to help control the cost growth.
The Funds’ officials contend, however, that statutory requirements pertaining to coverage impede their ability to require beneficiaries to pay more for their health care. To cover rising health care costs, the 1992 Benefit Plan has increased the premiums charged to coal companies. This option is not available to the CBF because the Coal Act ties annual premium updates to a formula that accounts for inflation, but not to changes in the use of health care services. Consequently, Congress has had to provide the CBF with additional money in recent years to close the gap between its costs and revenues. These annual shortfalls are expected to continue into the future as the CBF’s beneficiaries grow older and require more medical services. In written comments on a draft of this report, the Funds emphasized the importance of the history of the Coal Act in understanding the Funds’ operations, provided additional detail on the health status of their beneficiary population, and stressed the breadth and success of their cost control efforts. The Funds also pointed out technical issues that we have incorporated, where appropriate. The Funds’ officials stressed that comparisons of their plans and beneficiaries with other plans and populations are misleading without a full appreciation of the history behind the 1992 Coal Act and the characteristics of their beneficiary population. Specifically, they emphasized that coal miners traded lower pensions for better health care benefits in their labor contracts. The Funds’ comments cited the 1990 Coal Commission Report conclusion that “retired miners are entitled to the health care benefits that were promised and guaranteed them and that such commitments must be honored.” They noted that the Funds’ beneficiaries have already contributed significantly to their health care benefits through the shifting of assets from their pension plans. 
The Funds stated that any comparisons of benefits with other groups are inappropriate because the plans’ benefits are a culmination of their history. The Funds also said that cost comparisons are misleading because their population is sicker than comparably aged men and women. Finally, the Funds emphasized their record of success in implementing a wide range of managed care and cost containment programs and claimed that these initiatives have realized substantial savings for the Funds and for Medicare and the U.S. Treasury. We acknowledge that the retired coal miners traded lower pensions for the promise of future health care benefits, and that this may be an important consideration when interpreting our benefit comparisons with packages offered by other manufacturing companies and companies with significant numbers of unionized workers. Our analysis finds that the Funds’ plans are generally comparable, but more generous in some dimensions and less so in others. Our cost comparison adjusts for all the demographic information used by Medicare to calculate the average cost per beneficiary, and acknowledges the differences in self-reported health status. Finally, as we have noted in the report, the Funds’ officials have adopted numerous cost-cutting initiatives and have a history of achieving savings against their Medicare targets. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the UMWA Health and Retirement Funds and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-7119 or James C. Cosgrove, assistant director, at (202) 512-7029. Other major contributors to this report include Jim S. Hahn and Richard M. Lipinski. 
More than 100,000 retired coal miners and their spouses and dependents in 1992 faced a potential decrease in their employment-related health insurance coverage or loss of such coverage altogether. Some former employers had stopped mining coal or gone out of business and were no longer contributing to the United Mine Workers of America (UMWA) retiree benefit funds. To ensure that these individuals would continue to receive the health benefits specified in previous collective bargaining agreements reached with coal companies, often gained in exchange for lower pensions, Congress enacted the Coal Industry Retiree Health Benefit Act of 1992 (Coal Act). The Coal Act replaced the existing UMWA benefit funds with the Combined Benefit Fund (CBF) and the 1992 Benefit Plan. These funds' benefits require less cost sharing by beneficiaries and provide more extensive coverage than benefit packages offered by the major manufacturing companies and companies with unionized workforces. However, the extent of coverage is generally comparable. The cost of health care for the funds' beneficiaries in 1999 was about 29 percent higher than for demographically similar Medicare beneficiaries with employer-sponsored insurance. The funds' officials have attempted to control costs largely through approaches that do not reduce or limit benefits for beneficiaries, do not increase beneficiary cost-sharing requirements, or have only a minimal impact on beneficiaries.
SBA officials estimate that, nationally, there are more than 160,000 individuals living with spina bifida. The proportion of those individuals who are children of Vietnam veterans is unknown, due to a lack of data on the prevalence of spina bifida in this population. As of October 2013, there were 1,228 beneficiaries enrolled for coverage under the spina bifida program, ranging from 13 to 50 years of age; the majority of beneficiaries were adults aged 35 through 45. Spina bifida is a complex congenital disorder that affects multiple body systems. People with spina bifida experience a variety of health problems, including difficulty with lower body mobility, lack of bowel and bladder control, hydrocephalus (a condition in which fluid builds up in the brain), and learning disabilities. As a result, these individuals require care from providers in a variety of specialties, such as orthopedics, urology, neurosurgery, and psychiatry. Individuals with spina bifida face additional health concerns as they age, including a higher risk for obesity and obesity-related illnesses, depression, early osteoporosis, and pressure ulcers. Many of these health concerns are linked to the diminished mobility that comes with physical disability. In addition to physical disability, adults with spina bifida often face difficulty with executive function. Executive function is defined as a set of mental processes that helps connect past experience with present action, and is used to perform activities such as planning, organizing, strategizing, paying attention to and remembering details, and managing time and space. Difficulty with these mental processes can inhibit the independence of individuals with spina bifida, and their ability to manage their own care. 
Although studies have found that adults with spina bifida would benefit from access to coordinated care provided in multidisciplinary clinics because of their complex health needs, the availability of such care for adults is extremely limited. VA’s Veterans Benefits Administration (VBA) determines eligibility for spina bifida benefits, including both health care and other benefits. Once a beneficiary is deemed eligible for benefits, VBA assigns a disability rating using criteria outlined in regulation, which entitles beneficiaries to monthly monetary payments similar to those provided to disabled veterans, as well as vocational rehabilitation and education services provided through VBA. Regardless of disability rating, beneficiaries are then automatically enrolled for health care benefits in VHA’s spina bifida program, which is operated by VHA’s Chief Business Office for Purchased Care (CBOPC). VA is required to provide certain health care benefits—including home care, hospital care, nursing home care, outpatient care, preventive care, habilitative and rehabilitative care, case management, and respite care—for spina bifida beneficiaries. Regulations for the spina bifida program provide additional details on covered services and supplies and outline preauthorization requirements for certain services. VHA recently sought to clarify the scope of its authority to provide home care and custodial care services under the spina bifida program. In June 2013, VA’s General Counsel issued an opinion confirming that VHA is required to provide coverage for these services, as needed, in the beneficiary’s home or other place of residence (such as a residential group home or assisted-living facility). As of April 2014, VHA was in the process of drafting a proposed rule incorporating this clarification into regulations. VHA provides information and updates on covered health care services to enrolled spina bifida beneficiaries.
However, VHA has conducted limited outreach with key stakeholder groups that have a relationship with potentially eligible individuals who are not already enrolled. VHA provides information on the available health care benefits to beneficiaries who are enrolled in its spina bifida program using three primary methods: (1) the initial mailing of information upon program enrollment, (2) the program website, and (3) contact with beneficiaries regarding updates to covered services. Initial contact by mail. After beneficiaries are enrolled in the spina bifida program, VHA’s CBOPC staff mail beneficiaries (1) a program identification card (similar to an insurance card from a private insurer) that providers can use to bill VHA directly, and (2) a copy of the program handbook, which contains information on services covered and services that are generally excluded from coverage, as well as contact information and the website address for the program. VHA officials told us they enroll, on average, about one or two new beneficiaries per month. Program website. The spina bifida program website includes links to the program handbook, the policy manual—which provides additional details on coverage and exclusions for specific services—and other program documents such as guidance on how to submit claims. VHA officials told us they consider the program website to be the primary means of outreach with beneficiaries. Phone or mail contact to provide updates on covered services. VHA has recently shared changes to its spina bifida program with beneficiaries through telephone calls, and VHA officials told us they plan to share updates made between handbook printings through mailings to beneficiaries.
Specifically, beginning in November 2013, VHA’s call center placed phone calls to enrolled spina bifida beneficiaries to gauge their interest in obtaining case management services. According to VHA, as of March 2014, the call center had successfully made contact with 579 beneficiaries (47 percent of those enrolled), and 129 of them expressed interest in obtaining case management services. In addition to asking about case management services, VHA officials told us that during these calls, they also provided clarification that the program covers home care and custodial care services. After the program regulations are updated to reflect the June 2013 opinion from VA’s General Counsel regarding coverage of these services, the program handbook will be updated and mailed out to enrolled beneficiaries, according to VHA officials. Officials told us that they also plan to follow up with a letter in May 2014 to provide clarification on the program’s coverage for home care and custodial care services, as well as case management. VHA has conducted limited outreach with key stakeholder organizations, and representatives of these organizations told us that this has contributed to lack of awareness among some individuals who may be eligible to receive health care benefits under the spina bifida program. VHA’s outreach with stakeholder organizations has been limited primarily to providing materials on the program to veteran service organizations for distribution at conferences. For example, VHA officials told us they provided materials for conferences held by four different veteran service organizations in the summer of 2013. However, VHA has not conducted outreach with VVA and SBA—the two key stakeholder organizations that we contacted—in recent years.
Specifically, representatives from VVA—an organization that represents veterans who served in Vietnam and advocates on their behalf regarding a variety of issues, including health care—told us that VHA has not reached out to them regarding its spina bifida program, and has not provided them any materials regarding available health care benefits to distribute to their membership. VHA officials told us that they have met with VVA officials to explain updates to covered services, but stated that their outreach efforts with stakeholder organizations are driven by requests from the organizations, and they were not certain if VVA had requested further outreach. VHA’s most recent coordination with SBA—an organization with direct contact with individuals with spina bifida, their families, and the providers who treat them—was in 2009. VHA officials told us that they coordinated with SBA that year by staffing a resource education booth at SBA’s annual conference. VHA officials also told us that limits on travel spending have prevented them from attending or participating in the conference in recent years. Limited outreach with key stakeholder organizations has contributed to the lack of awareness about available health care benefits among some individuals who may be eligible to enroll in the spina bifida program, according to representatives from these organizations. Specifically, representatives from VVA told us that through VVA’s Agent Orange Education Campaign they identified new individuals who were potentially eligible for VHA’s spina bifida program and its benefits, but were not aware of them. In addition, representatives from SBA told us they are regularly contacted by individuals who have questions about VHA’s spina bifida program benefits and do not know where to go for more information. Although SBA includes information on its website about VHA’s spina bifida program benefits, when we reviewed this page in April 2014, we found that it contained outdated information.
Federal internal control standards state that agency management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. According to VHA officials, the goal of the spina bifida program is to provide for the special needs of Vietnam and certain other veterans’ birth children who have been diagnosed with spina bifida. Key stakeholder organizations are well-positioned to provide information to these individuals on available benefits because of their established relationships with veterans and individuals with spina bifida and their health care providers. However, VHA has not leveraged these organizations’ relationships with potentially eligible individuals to further VHA’s goal of providing for the special needs of beneficiaries with spina bifida. Representatives from both VVA and SBA stated that they would be willing to coordinate with VHA on efforts that could improve awareness and understanding of VHA’s spina bifida program. For example, VVA officials told us that they could help VHA promote the health care benefits available through the spina bifida program, including providing information on the benefits through their weekly emails to subscribers. VVA officials also suggested that outreach with providers or provider organizations would be beneficial because it would increase awareness of the connection between military service and certain health issues, such as spina bifida, among individuals who are the “first line” in interacting with patients. Similarly, according to an SBA representative, SBA could facilitate the connection between VHA and health care providers who serve individuals with spina bifida.
These providers could help identify spina bifida patients who have a Vietnam-era veteran parent, and provide those individuals with information about VHA’s health care benefits for which they may be eligible. VHA uses an automated system, augmented by administrative and clinical reviews, to process spina bifida claims. The claims process begins when a provider or beneficiary submits an electronic or paper claim to VHA. Upon receipt, the claim’s information is to be entered into an automated claims processing system maintained by VHA’s CBOPC. If necessary, staff members conduct administrative or clinical reviews before denial or payment decisions are made. VHA also has a process for reconsideration of denied claims. Automated claims processing. CBOPC’s automated claims processing system uses business rules to compare the information in the claim against the spina bifida program’s policies and regulations for eligibility and covered services. For example, VHA officials told us the system checks the date of service against the date the beneficiary was determined to be eligible for the spina bifida program. Administrative and clinical review. If the automated system cannot complete the processing of a claim because it detects an error or needs additional documentation or approval to continue with processing, VHA officials told us a CBOPC staff member conducts an administrative or clinical review. In an administrative review, a staff member examines a claim to ensure that required documentation (e.g., required preauthorization for mental health services) has been received. In a clinical review, the claim is reviewed by a clinical nurse reviewer to ensure that the medical documentation included is sufficient for processing the claim. For example, a clinical nurse reviewer could examine documentation included with a claim for specialized durable medical equipment to ensure it is sufficient to support the request, and request additional documentation if necessary. Payment or denial.
Once the automated system has completed its checks, and any needed administrative or clinical reviews are completed, the claim is either routed for payment or it is denied. For a denied claim, the automated system or reviewer assigns a denial reason code to the claim, which provides a brief description of the cause of the denial (e.g., missing documentation or the service billed was not a covered service). This information is included in the explanation of benefits document mailed to the provider and beneficiary. Resubmission or request for reconsideration. A provider or beneficiary who disagrees with the amount of payment or the decision to deny the claim can resubmit the claim for reprocessing, or submit a request for reconsideration of the claim to CBOPC within 1 year of the original decision. Resubmission of claims for reprocessing is separate from the claims reconsideration process. A resubmitted claim must include the original explanation of benefits and any other relevant documentation or corrections for consideration, and resubmitted claims are sent through the claims processing system as new claims. VHA officials told us that requests for reconsideration are processed outside of the automated claims processing system, and are reviewed by CBOPC staff with consultation from clinical nurse reviewers as needed. A request for reconsideration that is subsequently denied can be submitted for a second review within 90 days. Each claim’s explanation of benefits document provides contact information for CBOPC and the mailing address for requests for reconsideration. VHA updates the business rules used by its automated claims processing system in accordance with changes to the applicable laws and regulations that govern coverage of specific services. VHA officials told us that these updates are not made until the rulemaking process is complete and applicable rules have been published in the Federal Register, a process that can take years. 
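The adjudication flow just described (automated business-rule checks, escalation to administrative or clinical review, then payment or denial with a reason code) can be sketched roughly in Python. This is an illustrative model only; the field names and rules below are hypothetical simplifications, not VHA's actual system:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    eligibility_date: str  # ISO date the beneficiary became eligible (hypothetical field)
    service_date: str      # ISO date of service (hypothetical field)
    service_covered: bool  # service appears on the covered-services list
    needs_preauth: bool    # service requires preauthorization
    has_preauth: bool      # required preauthorization documentation attached

def adjudicate(claim: Claim) -> str:
    """Apply simplified business rules; return 'paid', 'review', or a denial reason."""
    # Automated rule: the date of service is checked against the date the
    # beneficiary was determined to be eligible for the program.
    if claim.service_date < claim.eligibility_date:
        return "denied: service predates eligibility"
    # Automated rule: only covered services are payable.
    if not claim.service_covered:
        return "denied: service not covered"
    # Missing documentation is escalated to administrative or clinical
    # review rather than denied outright by the automated system.
    if claim.needs_preauth and not claim.has_preauth:
        return "review"
    return "paid"

# A covered service provided after the eligibility date, with no
# preauthorization requirement, is routed for payment.
print(adjudicate(Claim("2008-10-01", "2013-05-02", True, False, False)))  # paid
```

In a real system each denial reason would map to the reason code that appears on the explanation of benefits document described above.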
In the interim, officials said affected claims— such as those for home care services provided in an assisted-living facility, which VHA recently clarified as covered—would be processed manually to ensure they are not incorrectly denied by the automated system. From fiscal years 2009 through 2013, total payments for spina bifida claims increased by 43 percent—from about $19.4 million to about $27.8 million. (See table 1.) The number of beneficiaries who had claims paid increased by 10 percent, from 803 to 883, and the number of paid claims increased by 45 percent, from 58,560 to 84,702. VHA officials told us they attribute the growth in the spina bifida program to an increasing number of claims and payments in the years following the 2008 legislative expansion of health care coverage under the program, as well as increasing health care costs for beneficiaries as they age and their health care needs become more varied and complex. Officials told us that, in the future, they expect spending on health care services to continue to increase due to new services being offered, such as custodial care. However, they do not expect significant increases in the number of beneficiaries because there are few new children with spina bifida born to Vietnam-era veterans and currently proposed expansions of coverage to new eligibility groups are not likely to add significantly to program enrollment. In fiscal years 2009 through 2013, the percentages of paid and denied claims remained steady, with paid claims representing about 90 percent of all claims submitted each fiscal year. For example, out of 95,149 claims submitted in fiscal year 2013, 84,702 (89 percent) were paid. (See fig. 1.) About two-thirds of the total number of paid claims in each year from fiscal years 2009 through 2013 were for outpatient services, the largest of the six categories tracked by VHA. For example, in fiscal year 2013, 63 percent of the total number of paid claims were for outpatient services. (See fig. 
2.) Outpatient services represented nearly 50 percent of total payments made that year. In contrast, inpatient claims represented less than 1 percent of the total number of paid claims in fiscal year 2013, but 31 percent of total claims payments that fiscal year. In our analysis of VHA’s outpatient procedure code data, we determined that home care services, physician visits, physical therapy services, and catheters and other incontinence supplies for spina bifida beneficiaries were commonly reimbursed outpatient services in fiscal year 2013. Studies we reviewed and experts we interviewed confirmed that these services and supplies were consistent with the health care needs of adults with spina bifida. Home care services. Although the specific health care needs of adults with spina bifida can vary widely based on the severity of their condition, officials from SBA told us that home-based services are important because of the challenges these individuals face with executive functioning and limited mobility. Since many adults with spina bifida rely on wheelchairs and have varying degrees of mobility, traveling to medical appointments can be challenging. Physician visits. Due to the various health problems associated with spina bifida (including musculoskeletal, neurological, and urological health care needs), medical care for adults with spina bifida involves visits to numerous physicians—both for primary and specialty care. Physical therapy services. A study noted that adults with spina bifida commonly report chronic pain as a result of the body mechanics involved in wheelchair propulsion. Another study noted that physical disability, and the reduced physical activity that results from it, is a risk factor for early onset osteoporosis in adults with spina bifida. Physical therapy services can help alleviate pain and maintain physical functioning for adults with spina bifida. Catheters and other incontinence supplies.
According to SBA officials, a common health care issue for individuals with spina bifida is neurogenic bladder and bowel, in which the nerves in this area of the body do not function properly, leading to ongoing issues with incontinence and urinary tract infections, and potentially renal failure in older populations. Studies we reviewed, as well as SBA’s spina bifida treatment guidelines, noted the need for continence management programs for individuals with spina bifida, including daily intermittent catheterization to improve renal outcomes. Most denied claims were denied for administrative reasons, such as duplicate claims, untimely filing (e.g., claims submitted more than 1 year after the date of service or hospital discharge), or the need for additional documentation. Specifically, in fiscal year 2013, about 90 percent of denied claims were denied for administrative reasons. (VHA officials told us that a claim is considered denied if all procedure codes included in the claim are denied; if some procedure codes are paid and some are denied, the claim is considered a paid claim. Therefore, the number of denied claims does not reflect all procedure codes that were denied in a given year.) Few claims were denied because the service was not covered or the beneficiary was ineligible for coverage (less than 3 percent of all denied claims in fiscal year 2013). For example, one denied claim we reviewed was for an eye exam—eye exams and glasses are excluded from coverage per spina bifida program policy. Another denied claim we reviewed was for durable medical equipment. A beneficiary requested payment for a device that, via remote control, could automatically open a door in the home. This claim was denied because VHA does not provide payment for durable medical equipment that is used for housing modification. Few denied claims were submitted for reconsideration. Of the 136 requests for reconsideration in fiscal years 2009 through 2013, 50 (37 percent) were subsequently paid. Specifically, in fiscal year 2013, there were 35 requests for reconsideration, 15 of which were subsequently paid.
VHA conducts annual audits of spina bifida program claims, and VHA officials told us these audits and associated audit follow-up activities are the primary means of oversight for the claims process. Auditors from the VHA CBOPC’s Department of Audits & Internal Controls conduct audits of the spina bifida claims process annually. During the audit, auditors examine a statistically valid sample of paid claims from the previous quarter. The purpose of these audits is to identify whether claims were processed and paid accurately according to spina bifida program policy. VHA officials told us these audits involve auditors retracing all the steps in the claims approval and payment process to determine whether all claims-related decisions were correct. For example, auditors may review comments from clinical nurse reviewers regarding preauthorization determinations and re-run automated decision-making associated with selected claims through a testing program. Any inaccurate processing identified by the auditors that results in an improper payment (an over- or under-payment on a claim) is recorded as an error and assigned to a category, such as duplicate payments, lack of supporting documentation, or non-compliance with policies and procedures. For example, the fiscal year 2014 audit (of the fourth quarter of fiscal year 2013 claims) had a claims processing accuracy rate—the percentage of claims processed correctly—of 98.4 percent and a proper payment accuracy rate—the percentage of total payments made correctly—of 99.9 percent. Specifically, the audit identified three claims with errors—two claims that should have been denied as duplicates but were not (resulting in overpayments), and one claim where a data entry error resulted in an incorrect payment amount. After the audit is complete, auditors meet with relevant staff to determine the underlying causes of improper payments and other inaccuracies.
An audit representative also presents the audit’s findings to an audit review committee, which includes senior CBOPC officials who discuss the root causes of findings and decide on any necessary follow-up activities. Auditors then complete a report that documents the audit’s findings, including corrective actions and recommendations. The audit report’s corrective actions directly address the errors found in the audit. For example, VHA collects overpayments or pays providers for underpayments. Audit report recommendations suggest additional actions such as training or additional resources that should be made available to increase processing accuracy moving forward. There is currently no written guidance on how CBOPC staff are to document the status of audit follow-up activities—corrective actions and recommendations outlined in audit reports—to ensure their completion. VHA officials told us that, beginning with the fiscal year 2014 audit, staff from CBOPC’s Quality/Corrective Action Program are responsible for overseeing the status of audit follow-up activities, including working with relevant staff responsible for implementing any corrective actions. According to VHA officials, these staff are responsible for determining the extent of documentation necessary for audit follow-up activities. Officials also stated that these staff store any documentation in non-networked files. This can render audit follow-up documentation inaccessible to other VHA officials who may need it. Further, although staff maintain information on the status of actions taken to implement audit findings and the individuals responsible for implementing them, they do not maintain information on estimated or actual completion dates for audit follow-up activities. There also is no documentation of interactions with staff or interim steps taken to ensure that follow-up activities are completed as planned. 
For example, officials told us that for one of the identified actions, there would be monthly follow-ups until the action is complete; however, there is no documentation to indicate that this interim follow-up is taking place. According to VHA officials, one reason for the lack of written guidance on completing and documenting audit follow-up activities is that their audit follow-up process is new; they anticipate having written guidance drafted by August 2014. Federal internal control standards state that internal controls should be documented, and that all documentation should be properly managed and maintained, and readily available for examination. These standards also state that agencies should have policies and procedures for ensuring that the findings of audits and other reviews are promptly resolved. VHA's lack of written guidance for audit follow-up activities places VHA at increased risk that these internal control activities may not be performed, may be performed inconsistently, or may not be continued when knowledgeable employees leave. This can lead to unreliable monitoring of the spina bifida claims process, including the inability to ensure that all necessary audit follow-up activities are completed. The legislation that created VA's spina bifida benefits charged VHA with serving the needs of a very vulnerable population. Given the lack of data on the prevalence of spina bifida in the children of veterans, and concerns from stakeholder organizations that potentially eligible individuals may not be aware of available benefits under the spina bifida program, stakeholder organizations are uniquely positioned to assist VHA in communicating information on spina bifida benefits. By not conducting outreach with key stakeholders, VHA may be missing important opportunities to increase awareness among potentially eligible individuals, and ultimately to help these individuals obtain the benefits to which they may be entitled. 
In addition, without written guidance for audit follow-up activities related to the spina bifida claims process, VHA cannot be assured that these activities have been successfully completed or that the corrective actions and recommendations outlined in audit reports have been appropriately implemented. The lack of written guidance puts VHA at increased risk that these activities may be inconsistently performed—or not performed at all—if there are personnel changes. By developing written guidance for documenting these activities in a manner consistent with federal internal control standards, VHA would have greater assurance that audit follow-up activities are consistently completed, thereby helping to ensure that spina bifida beneficiaries’ health care claims continue to be accurately processed. To improve awareness of the spina bifida program’s health care benefits among potentially eligible individuals and to help them obtain the benefits to which they may be entitled, we recommend that the Acting Secretary of Veterans Affairs direct the Acting Under Secretary for Health to conduct outreach with key stakeholder groups regarding the program and its benefits. To help ensure continued accurate claims processing, we recommend that the Acting Secretary of Veterans Affairs direct the Acting Under Secretary for Health to develop written guidance, consistent with federal internal control standards, for completing and documenting the status of follow-up activities for the spina bifida program’s claims audits. We provided a draft of this report to VA for comment. In its written comments, reproduced in appendix I, VA generally agreed with our conclusions and concurred with our recommendations. In addition, VA provided information on its plans for implementing each recommendation, with an estimated completion date of December 2014. 
We are sending copies of this report to appropriate congressional committees, the Acting Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; George Bogart; Vikki L. Porter; Julie T. Stewart; and Malissa G. Winograd made key contributions to this report.
VA provides health care benefits to children diagnosed with spina bifida—a birth defect that can cause physical and neurological issues—born to Vietnam and certain other veterans. Legislation requires the provision of certain health care benefits—including home care, hospital care, outpatient care, and case management—for spina bifida beneficiaries. VHA administers the Spina Bifida Health Care Program for enrolled beneficiaries by processing and paying claims for covered services from private sector providers. GAO was asked to evaluate VHA's administration of spina bifida health care benefits. In this report, for the spina bifida program, GAO examined: (1) the extent to which VHA conducts outreach about available benefits, (2) what is known about health care claims that have been processed, and (3) what oversight, if any, VHA conducts of the claims process. GAO reviewed the spina bifida program handbook and claims audit reports, analyzed data on submitted, paid, and denied claims from fiscal years 2009 through 2013, and interviewed VHA officials and representatives from key stakeholder organizations. The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) provides information and updates on covered health care services to beneficiaries enrolled in its spina bifida program, but has conducted limited outreach with key stakeholder organizations. VHA provides information on health care benefits to enrolled beneficiaries through the program website, for example. However, VHA has conducted limited outreach with key stakeholder organizations—such as the Spina Bifida Association—that have relationships with individuals who are potentially eligible for the spina bifida program and its benefits but are not enrolled. Representatives of these organizations told GAO this has contributed to a lack of awareness of eligibility and available benefits. 
Without this outreach, VHA may miss important opportunities to help potentially eligible individuals obtain health care benefits to which they may be entitled. For the spina bifida program, both total claims payments and the total number of claims paid increased by more than 40 percent from fiscal year 2009 through fiscal year 2013. VHA officials attributed the increase to a 2008 legislative expansion of health care coverage under the program, and growing health care costs for beneficiaries as they age. During this 5-year period, paid claims represented about 90 percent of all claims submitted each fiscal year. VHA primarily uses claims audits to oversee its spina bifida claims process. Auditors review a sample of claims and prepare a report with the audit's findings and any necessary follow-up activities. However, VHA does not have written guidance on how staff are to document the status of these follow-up activities to ensure their completion. Without such written guidance, VHA cannot be assured that these activities have been successfully completed or that any recommendations outlined in audit reports have been appropriately implemented. GAO recommends that VA conduct outreach with key stakeholder groups regarding the spina bifida program and its benefits, and develop written guidance for completing and documenting the status of follow-up activities related to claims audits. VA concurred with GAO's recommendations.
On April 1, 2004, the President approved GPOI, a 5-year program to help address significant gaps in international peace operations, including a shortage of capable peacekeepers, limited national capabilities to train and sustain peacekeeping proficiencies, and a lack of mechanisms to help countries deploy peacekeepers and provide logistics support for them in the field. To support the development of peacekeeping capabilities of GPOI countries, the program incorporates and expands on the pre-existing Africa Contingency Operations Training and Assistance (ACOTA) program and the Enhanced International Peacekeeping Capabilities (EIPC) program. In 2004, the United States established GPOI as a $660 million, 5-year program with seven objectives to increase and maintain the capacity, capability, and effectiveness of peace operations worldwide. These objectives are to train and, when appropriate, equip 75,000 military peacekeepers by 2010; support efforts at the Center of Excellence for Stability Police Units (COESPU) in Italy to increase the capabilities and interoperability of stability police to participate in peace operations; develop a program to procure and store peace operations equipment to facilitate the equipment's quick mobilization for peace operations; develop a transportation and logistics support system to deploy and sustain peacekeeping in the field; enhance the capacity of regional and subregional organizations to plan, train for, and execute peace operations; provide a worldwide clearinghouse function for GPOI-related activities in Africa and globally; and conduct activities that support and assist partners in achieving self-sufficiency and maintaining the proficiencies gained from GPOI. State's Bureau of Political-Military Affairs, in coordination with DOD's Office of the Secretary of Defense and the Joint Staff, is responsible for providing policy guidance; allocating resources; and coordinating GPOI programs, events, and activities. 
All GPOI allocations and program activities must be approved by the GPOI Coordination Committee (GCC), the formal decision-making body co-chaired by State's Bureau of Political-Military Affairs and the Office of the Secretary of Defense. Participants in the GCC include the Joint Staff and, as required, other program implementers. GPOI implementers include the U.S. Combatant Commands, State's regional bureaus, the Office of the Secretary of Defense's regional offices, and U.S. diplomatic posts. The regional combatant commands are the lead implementers of GPOI activities throughout the world, with the exception of Africa, where State's Bureau of African Affairs leads implementation of GPOI activities. Within the African Affairs Bureau, ACOTA is the lead implementer for the training and equipment portion of GPOI activities in Africa. State has designated 52 countries as partner countries eligible to receive funding for GPOI activities—38 for military peacekeepers, 3 for stability police, and 11 for both military peacekeepers and stability police, as of April 2008. As figure 1 shows, the majority are located in Africa (22 countries) and the remainder are in Asia, South and Central America, Europe, and the Near East and Central Asia. (See app. II for a list of all GPOI partners.) State has allocated $374 million, from fiscal year 2005 through fiscal year 2008, for GPOI activities worldwide, of which it has expended about $152 million for activities in four major categories: training, training equipment, deployment assistance, and skills and infrastructure. As displayed in figure 2, the majority—about $98 million—has been spent in Africa, followed by about $30 million in Asia and $12 million in South and Central America. In Africa, the majority has been spent on training and training equipment together, followed by deployment assistance of equipment and transportation for deployed peacekeeping missions. 
In Asia, the majority has been spent on skills and infrastructure followed by training. In South and Central America, the majority has been spent on training equipment followed by activities for building skills and infrastructure. (App. II identifies the GPOI partner countries in these geographic regions.) Training of military peacekeepers under GPOI can be provided by contractors, U.S. military active duty personnel, or by trainers from neighboring countries in the region, and is focused on providing battalion-level training for peacekeeping missions. U.S. contractors provide the majority of training in Africa and, when available, U.S. military active duty personnel serve as mentors to African trainees. In Asia, U.S. military personnel provide the majority of training but use contractors to provide some of the training for military officers. In Central America, training is provided by other countries and by U.S. military personnel. The United States has funded the training of a few individuals in the Near East and Europe. U.S. military personnel may serve as mentors to trainees in these regions. Training has not yet occurred in Central Asia. GPOI training of stability police is provided at COESPU, the international training center for peace operations located in Vicenza, Italy, where the Italian Carabinieri train instructors of stability police units. State and DOD have made some progress in achieving GPOI goals in three principal areas: training and equipping peacekeepers, providing equipment and transportation for deployed missions, and building peacekeeping skills and infrastructure, but challenges remain in meeting these goals. Table 1 summarizes the status of GPOI activities for the three principal goals and seven objectives. 
First, State and DOD have trained about 40,000 military peacekeepers, predominantly in Africa, and supported the training of over 1,300 stability police, but it is unlikely that GPOI will meet its goal of training 75,000 military peacekeepers by 2010 due to the time it takes to expend program funds, and State and DOD have encountered delays in delivering nonlethal training equipment. Second, State has provided equipment to deployed missions in Lebanon, Somalia, Sudan, and Haiti; supports an equipment depot in Sierra Leone; and initiated a process for peacekeeping countries to request donor assistance for their transportation and logistics needs, but some efforts have been delayed. Third, State and DOD have trained more than 2,700 military peacekeeping instructors and conducted other activities. However, State faces delays in completing activities to build skills and infrastructure in Africa by 2010. In addition, compared with other regions, State has targeted a smaller share of Africa's resources to building peacekeeping skills and infrastructure than to training and equipping peacekeepers. This is due in part to the needs and capabilities of the region and a focus on training peacekeepers in this region for current missions. The following sections provide more information about the progress made in these areas. The majority—92 percent—of military peacekeepers trained under GPOI are from African partner countries, while the remainder have been trained in Asia, Central America, and Europe. In addition, State has supported the training of over 1,300 stability police instructors at COESPU, providing about one-quarter of the school's budget. However, State is not likely to train 75,000 military peacekeepers by 2010 and has not provided support for all requested staff positions at COESPU. Further, State has provided about $31 million of training equipment to military peacekeepers in 27 countries, predominantly in Africa. 
However, State has faced challenges in delivering training equipment to GPOI partner countries in a timely manner and accounting for equipment delivery. State and DOD have trained about 40,000 military peacekeepers as of April 2008—36,968 in Africa; 1,805 in Asia; 455 in Central and South America; and 289 in Europe (see app. IV for details on the number trained by region and country). State is not likely to complete the training of 75,000 military peacekeepers by the target date of 2010. As figure 3 shows, the actual number of troops trained is lower than State's projections. State expects to reach its goal once it has spent all GPOI training funds, but this will likely not occur until after 2010 due to the time it takes to expend training funds. In commenting on a draft of this report, State asserted that it now expects GPOI to train 75,000 peacekeepers by July 2010 based on new training rates. We were unable to validate State's new projections since, as recently as May 2008, program officials from the GPOI office in the Bureau of Political-Military Affairs and its GPOI evaluation team indicated that slow expenditure rates related to training rates would delay their efforts to reach the 2010 training goal. State has spent approximately $56 million to train military peacekeepers, as of April 2008. Figure 4 shows the expenditures of GPOI funds for training military peacekeepers by region. The majority of the funds, about $39 million, have been spent in Africa. In addition to these funds, some of the combatant commands have spent additional DOD funds to support the State-funded GPOI training. For example, U.S. Pacific Command officials identified that they spent an estimated $8 million of additional DOD funds to develop courses for peacekeeping training and support multinational training exercises held in Mongolia and Bangladesh. 
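The regional training counts above can be totaled to gauge progress toward the 75,000-peacekeeper goal. A small sketch using the figures cited in this section (the goal-date projection itself depends on expenditure rates not modeled here):

```python
# Regional counts of military peacekeepers trained under GPOI as of
# April 2008, taken from the report text.
trained_by_region = {
    "Africa": 36_968,
    "Asia": 1_805,
    "Central and South America": 455,
    "Europe": 289,
}

GOAL = 75_000  # training goal to be reached by 2010

total_trained = sum(trained_by_region.values())
pct_of_goal = 100.0 * total_trained / GOAL
africa_share = 100.0 * trained_by_region["Africa"] / total_trained

print(total_trained)           # 39517, i.e., "about 40,000"
print(round(pct_of_goal, 1))   # 52.7
# The report cites 92 percent of trainees as African; the share computed
# from these regional figures is slightly higher, which suggests the
# report's percentage rests on slightly different underlying totals.
print(round(africa_share, 1))  # 93.5
```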
Based on current projections, COESPU has indicated that it is likely to meet its goal of training 3,000 stability police instructors by 2010. As of April 2008, State had expended $9 million of $15 million obligated for COESPU’s operations, directly supporting about one-quarter of COESPU’s budget. In 2005, the Italians requested assistance from the United States in filling six staff positions at COESPU in the areas of management, training, research, and publications. Since 2005, the United States has provided a military officer to serve in the deputy director position, but support has not been provided for the other requested staff positions at COESPU. According to a February 2008 State document and COESPU and U.S. officials we met with in Italy, the United States planned to provide support to fill a total of five staff positions at COESPU: deputy director, head of the training department for high-level courses, manager of research for stability police training doctrine, evaluator of course outcomes, and Web site and magazine manager. In January 2008, COESPU and U.S. officials we met with in Italy stated that these positions would help COESPU track the activities of its graduates, dispatch mobile training teams, and expand the number of students in each class. In May 2008, State officials in Washington, D.C., indicated that they plan to fund the position for an evaluator of course outcomes in the near future. In addition, we found that State does not always use staff at U.S. missions in partner countries to facilitate U.S. support to COESPU. For example, an embassy official in Senegal stated that when COESPU sent a questionnaire to Senegalese officials inquiring about deployments and training activities of COESPU graduates, State did not instruct the embassy to follow up and help obtain a response. State has provided about $31 million in nonlethal training equipment to military peacekeepers in 27 countries, predominantly in Africa. 
The equipment provided includes individual and unit equipment for military units training for peacekeeping missions, as well as equipment for COESPU to train stability police instructors. State has encountered delays in the purchase and delivery of this equipment, often resulting in State’s inability to provide equipment concurrently with training sessions. Further, State officials have been unable to fully account for training equipment delivered in Africa. The equipment provided includes individual equipment such as boots, first aid kits, and uniforms; and unit equipment such as radios, tents, and toolkits. (See app. V for more information on the types of training equipment provided in each region.) As figure 5 shows, the majority of the equipment was provided to partner countries in Africa. State also has provided individual training equipment directly to COESPU for students attending the school. This equipment included nonlethal items such as riot batons and shields. In addition, officials from some of the combatant commands stated that they use other sources of funds to provide additional equipment to military peacekeepers. For example, U.S. Central Command officials identified an estimated $14 million in funds from DOD accounts to provide items such as body armor, water purification units, vehicles, and uniform equipment for a peacekeeping brigade in Kazakhstan in fiscal years 2006 and 2007. State and DOD have encountered problems in providing training equipment to partner countries in a timely manner. The procurement of equipment through the Defense Security Cooperation Agency, which is responsible for a large amount of equipment for GPOI, has encountered delays due to the procurement priorities for U.S. military forces, the time needed to identify the specific equipment needs for each country, and manufacturing backlogs. 
For example, a 2007 State program evaluation found that only two of several hundred training equipment items procured through the Defense Security Cooperation Agency for Central America with fiscal year 2005 funds had arrived in country by the end of 2007, and the delivery dates for the remaining equipment were unknown. Contractors and agency and host country officials in the countries we visited in Africa stated that training equipment often is not provided concurrently with GPOI training, due to the delays in procurement and delivery. In addition, U.S. officials in Guatemala stated they had to delay training when equipment was not delivered in time. State also has encountered problems in accounting for the delivery and transfer of equipment to partner countries. Specifically, State officials in Washington, D.C., have been unable to fully account for training equipment delivered to Africa. State has used a contractor to purchase a total of approximately $19 million of equipment for African partner countries but, as of December 2007, could not account for the equipment's delivery. State officials responsible for implementing the program in Africa said that they instituted a new system in mid-2007 to account for the equipment delivered to partner countries. These officials said that the difficulties with accounting for equipment deliveries have been due to the fact that the previous system was poorly organized. In June 2008, these officials stated they had completed an inventory identifying the equipment items ordered and delivered using GPOI funds and were now able to fully account for the entire inventory of equipment purchased. State has provided equipment to deployed missions and recently established a system to facilitate donor assistance for transport and logistic support to peacekeeping countries deploying to missions. However, State has encountered delays in delivering equipment to missions, similar to the delays in delivering equipment for training. 
State has provided equipment to deployed missions in a number of ways. As figure 6 shows, the majority of this support has been provided to Africa. In Sierra Leone, since 2005, State has spent over $9 million in equipment and operational support for an equipment depot used for peacekeeping missions and election support by the Economic Community of West African States (ECOWAS). As of April 2008, State also had provided $18 million of nonlethal equipment for six countries deploying to missions in Haiti, Lebanon, Somalia, and Sudan. For example, State provided field kitchens, field medical clinics, water purification units, and generators to peacekeepers deploying to Somalia. This equipment helped support the deployment of at least 4,600 peacekeepers, according to State. Although State's goal is to provide equipment to countries deployed to peacekeeping missions in a timely manner, as of April 2008, $9 million of equipment obligated since 2005 for countries deployed to missions in Somalia and Sudan had not been provided by State. For example, State obligated $9 million in fiscal year 2005 to support Nigeria, Kenya, and the African Union in the peacekeeping mission to Sudan, but this equipment was not provided until 2007, according to State reporting, and $3.6 million remains to be expended. In another example, State documents indicate that $5.6 million in fiscal year 2006 funds obligated for the purchase of equipment to support peacekeepers deployed from Rwanda, Ghana, Burundi, and Nigeria have not yet been expended. To facilitate donor support for transportation and logistical needs of countries deploying peacekeepers, State established an electronic communication system in the fall of 2007. Requests made by countries seeking assistance with transportation and equipment for peacekeeping missions can be sent by e-mail to G8 and other countries that could provide such assistance. 
As of April 2008, five potential donor G8 countries have designated a contact person to receive such requests, according to State. Although the GPOI strategy committed to initiating the process and establishing an electronic system by 2006, State did not establish the system until 2007. In April 2008, the first request for assistance for one country's deployment to the African Union mission in Somalia was communicated by State to donors through the system, according to State. State and DOD have conducted a number of activities to enhance peacekeeping skills and infrastructure to develop the ability of countries to conduct training for their own peacekeeping missions and to improve the capabilities of regional organizations to plan, train for, and execute peacekeeping missions. Although African partners receive the majority of GPOI funds, State has targeted a smaller share of resources, comparatively, for activities to build peacekeeping skills and infrastructure among African peacekeepers, in part due to the needs and capabilities of the region and its focus on training and equipping peacekeepers to serve in current missions. State and DOD have conducted a range of activities to build peacekeeping skills and infrastructure among partner countries. (See app. VI for information on the status of these activities in each region.) These activities include the following: Enhancing the ability of countries to conduct their own peacekeeping training. State and DOD have trained 2,384 military peacekeeping instructors in African countries, 266 in Asian countries, 43 in Central American countries, and 26 in European countries; refurbished training centers in Guatemala, Indonesia, Jordan, Mongolia, and Ukraine; and supported three annual multinational training exercises in Asia beginning in 2006, enabling peacekeeping units from different countries to train together. Improving the capabilities of regional organizations to plan, train for, and execute peacekeeping missions. 
These activities include the following: trained ECOWAS staff on mission planning and management; provided equipment and supported operations for the ECOWAS equipment depot; provided computer equipment to regional peacekeeping training centers in Ghana, Kenya, and Mali; and funded training of units from El Salvador, Guatemala, Honduras, and Nicaragua, which will serve as a multinational brigade under the Conference of Central American Armed Forces. State has spent approximately $32 million in building skills and infrastructure in different regions of the world. As displayed in figure 7, State has spent more in Asia than Africa on activities that build skills and infrastructure—about $15 million in Asia and $12 million in Africa—although Africa receives the majority of GPOI funds overall. Further, State has targeted a higher proportion of funds in South and Central America, Asia, and Europe on building skills and infrastructure than on training and equipping peacekeepers within those regions. In support of its goal to build skills and infrastructure, State has spent 51 percent of all funds for Asia in this area, and about 20 percent of all funds for both South and Central America and Europe. In comparison, of the $98 million spent in Africa, 12 percent was spent on assisting with peacekeeping skills and infrastructure. In response to our findings, State officials attributed the limited focus of resources for building skills and infrastructure in Africa to a drop in funding of more than 20 percent from the funding initially anticipated in 2005. These officials told us that the program objectives were developed with the expectation of receiving $660 million and that the decrease in funding to approximately $500 million over 5 years has influenced program decisions and priorities. In May 2008, State and DOD officials said that discussions are underway to develop proposals for future GPOI activities after 2010. 
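The regional share comparisons above follow from simple ratios of the cited totals. A brief sketch (the back-calculated Asia total is an inference from the 51 percent figure, though it is consistent with the "about $30 million" for Asia cited earlier in the report):

```python
# Regional GPOI spending figures cited in the report (millions of dollars).
spent_total_africa = 98.0  # total GPOI funds expended in Africa
skills_africa = 12.0       # of which, skills and infrastructure
skills_asia = 15.0         # skills-and-infrastructure spending in Asia

def share(part, whole):
    """Percentage of `whole` represented by `part`."""
    return 100.0 * part / whole

# Africa: 12 of 98 million on skills and infrastructure, about 12 percent.
print(round(share(skills_africa, spent_total_africa), 1))  # 12.2

# The report states Asia's skills spending is 51 percent of all funds
# for Asia, so total Asia spending can be back-calculated:
total_asia = skills_asia / 0.51
print(round(total_asia, 1))  # 29.4, consistent with "about $30 million"
```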
In addition to the funds expended by State, some of the combatant commands have used DOD funds to support GPOI activities in Asia, Africa, and Europe. For instance, U.S. Pacific Command officials identified that they spent about $500,000 in fiscal year 2007 to supplement the refurbishment of buildings at Mongolia's peacekeeping training center. In addition, some of the commands assign officers to serve as liaisons at peacekeeping training centers in other countries. For instance, U.S. Africa Command has a liaison officer at a peacekeeping training center in Ghana, and U.S. European Command has an officer at a peacekeeping training center in Bosnia. Activities to build skills and infrastructure in Africa have faced delays and will likely not be completed by 2010. Specifically, State faces delays in building African countries' ability to maintain their training programs, establishing a regional communication system for ECOWAS and the African Union, and transferring the equipment depot to ECOWAS. According to State officials, these delays are affected, in part, by African peacekeeping countries' limited resources and capabilities for supporting their own peacekeeping programs. State officials also have noted that these countries' ability to support their peacekeeping programs is directly affected by the rate at which they deploy peacekeepers to missions. In two of the African countries we visited, high rates of deployments of trained instructors limit their ability to build and maintain a training program. For example, in Senegal, officials stated that building a cadre of Senegalese instructors was difficult because once these instructors complete GPOI training, they are frequently deployed on missions due to their high skill levels. The strategic communications system that State established for ECOWAS member countries is not fully operational. 
State documents identified that, while some countries were using the equipment, others had yet to either receive or use it. In commenting on a draft of this report, State told us that 11 countries have equipment and 2 are waiting on equipment delivery. State also obligated $4.5 million to set up a strategic communications system for the African Union but has been unable to install the system due to a licensing issue, according to State. The ECOWAS equipment depot in Sierra Leone is likely to continue to function under joint control of the United States and ECOWAS. State maintains the depot, including the delivery and maintenance of nonlethal equipment used by ECOWAS members for peacekeeping and election support. State intends to transfer full responsibility for the maintenance of the depot to ECOWAS, according to State officials, but this is unlikely to happen in the near term. State officials said that ECOWAS is not fully capable of financing the depot in the near future and will require U.S. support in the near term for its operations and maintenance. State and DOD provide training on a number of military peacekeeping skills, and 56 percent of these trained military peacekeepers, from 13 countries, have deployed to peacekeeping missions, as of April 2008. However, State faces challenges in assessing the quality and effectiveness of its training program. First, State cannot ascertain the proficiency of the peacekeepers it has trained against a standard level of skills taught during their training to determine if it is providing effective training. Second, State officials are unclear about their responsibilities for maintaining and recording, in the contractor performance system, performance evaluations of the contractors who provide training in Africa. Third, State is unable to fully account for the activities of trained instructors to measure the program's impact in building countries' capability to continue this training. 
Specifically, as of April 2008, State had trained more than 2,700 military instructors and supported the training of over 1,300 stability police instructors at COESPU, but could not identify whether these instructors subsequently conduct training. State and DOD train military units in peacekeeping skills, primarily to aid participating countries in their deployment to peacekeeping operations. According to GPOI strategy and agency officials, the instruction is based on standard tasks identified in U.S. military training doctrine and UN training materials and is modified by the partner country's or region's needs, the skill levels of the soldiers in the unit, and the specific requirements of the peacekeeping mission. However, State does not have program-wide standards in place to measure the proficiency of trainees, the quality of instruction they receive, the performance of deployed trainees, or the activities of the trained military peacekeeping instructors. Further, State supports the Italian government, specifically the Italian Carabinieri, in providing training to stability police instructors for unit-level police operations on peacekeeping missions. However, State has no measures in place to identify the training provided by or the deployments of trained stability police instructors. State and DOD have provided training to military peacekeeping units in 43 of the 52 countries, according to State documents. According to State data, 56 percent of about 40,000 trained military peacekeepers from 13 countries have deployed to peacekeeping missions, the majority—97 percent—from African countries. Training is focused on providing peacekeeping skills to military units to assist preparation for deployment to a specific peacekeeping mission and is intended to supplement training already provided by the partner country. According to GPOI strategy and agency officials, GPOI implementers use relevant U.S.
military doctrine to develop training instruction for military tasks. As displayed in table 2, training for these military units includes categories such as tactical skills for peacekeeping, medical care issues, and interaction with civilian groups and organizations, which contain a variety of peacekeeping tasks. For example, DOD and State provide instruction on tactical peacekeeping tasks such as how to escort a convoy, conduct checkpoint operations, or guard fixed sites. In addition, training of military peacekeepers in Africa may include instruction on firearms safety and marksmanship when training in such skills is identified as a need of that unit or country's military peacekeepers. Military peacekeeper training also includes standardized training identified by the United Nations, such as basic information about the United Nations, UN structure and capabilities, issues regulating the behavior of the individual peacekeeper, standard operating procedures, logistics, medical support, and human rights. Military officers also are provided training in planning and managing battalion functions during peacekeeping operations. For example, officers are introduced to skills needed to plan and execute the protection of a fixed site, such as a food distribution site, or of a convoy. In Africa, State provides more detailed training in military staff skills than in Asia, in response to the level of capabilities and needs of the peacekeeping units. For example, training of peacekeeping military officers in Africa includes instruction on the basic roles and responsibilities of officers staffed to a battalion. While there are some consistencies across the regions in the curriculum available, military peacekeepers do not receive the same training in all regions. Regional implementers have developed a training curriculum that is generally based on tasks identified in U.S.
military doctrine and UN training materials, which are modified to address the specific needs or desires of the region or country. Identified training instruction is further modified or adapted for each training session to meet the identified needs of the partner country, skill levels of the individuals in the unit to be trained, and the requirements of the specific peacekeeping mission, according to training officials and State and DOD program implementers. COESPU has trained stability police instructors from 13 countries, providing training at two levels—senior- and junior-level officers or their civilian equivalents. Training for junior-level instructors is focused on the leaders of a stability police unit, while senior-level training is focused on the overall leadership of stability police operations. Courses in both levels include instruction on peace support operations, tactics, stability police operations, humanitarian law, international law, territorial awareness, and first aid. The Italian government developed the COESPU curriculum to provide general instruction for unit-level police operations on peacekeeping missions rather than tailoring the curriculum to specific missions. The 5-week senior-level course instructs course participants on the management of stability police operations as well as tactical instruction on shooting and driving. The 7-week junior-level course includes tactical courses on crowd control, urban area patrolling, high-risk arrests, VIP security, fire fighting, shooting, driving, and personal defense. The junior-level course also contains a simulation where course participants practice their skills in the training area. State does not have an established process for measuring the proficiency of trainees who receive similar types of training. GPOI trainers conduct training exercises and use after-action reviews and their professional judgment to determine students’ ability to perform tasks as a unit during a training course. 
However, State and DOD do not evaluate the military peacekeeper trainees against a program-wide standard level of proficiency in the skills taught during their training. For example, the evaluation process to assess a unit's proficiency in operating a checkpoint depends on the instructor's judgment, and the information is not collected in a way that can be compared against other trained units. Rather, a participant is considered a GPOI-trained peacekeeper if he or she attends 80 percent of the training GPOI provides. In commenting on a draft of this report, State noted that an individual participant is considered a GPOI-trained peacekeeper if his or her unit masters 80 percent of the training GPOI provides. However, according to the GPOI strategy and reporting provided by the GPOI evaluation team, implementers and trainers collect information that identifies individuals that participated in at least 80 percent of the training curriculum. Furthermore, the GPOI strategy states that the number of individuals who participate in unit training may be counted toward the goal of 75,000 if individuals are present for 80 percent or more of the unit training. In addition, implementers we met with told us that participants are counted as trained if they participate in at least 80 percent of the training curriculum. State provided one example in which 50 students from one country participating in two training courses were not counted as GPOI-trained because it was determined that the personnel were not sufficiently trained due to poor English language ability. Training and program officials in the countries we visited stated that, although they are not required to test students, they use their professional judgment as former or current U.S. military personnel to monitor students' performance and determine if more time should be spent in developing certain skills, when possible.
According to training documents, after receiving instruction in tactical peacekeeping tasks, trainees perform the task as a unit, and the instructors are to observe their performance and determine how the unit is performing against a standard checklist of items. For example, during an exercise for securing a distribution site, instructors will observe the training to judge if the unit follows proper procedures to control a crowd, set up checkpoints and observation points for the distribution area, and report incident information. Trainers in Ethiopia, Ghana, and Senegal stated that the intent of the training is to expose students to the tasks they need for peacekeeping, although they are not expected to achieve a specific level of proficiency in the skills taught. Military troops from Ghana and Senegal account for 44 percent of the deployed GPOI-trained troops. In addition, State officials told us that although instructors follow training standards, the evaluation process of training is subjective and a unit's performance is affected by the skills and capabilities the soldiers bring to the training. The 2006 GPOI strategy states that GPOI program management personnel were in the process of developing military task lists and related training standards to contribute to standardization, interoperability, and sustainability, and ensure the proper use of resources. The strategy also states that developing such standards would help efforts to evaluate the overall effectiveness of the GPOI training program, events, and activities. However, during the course of our review State officials were unable to provide program-wide standards against which they could collect assessments to identify and evaluate the overall proficiency in comparable peacekeeping skills provided by GPOI to trainees worldwide.
In commenting on a draft of this report, State stated that the program currently does not have standard military task lists and associated training standards to specify tasks, conditions, and standards for different types of military units participating in peacekeeping operations but that steps are being taken to develop training standards and military task lists that would be used as a basis to develop training plans and assess trainees. Such standards would provide a baseline against which to evaluate data collected on the quality of the military peacekeepers GPOI has trained. Another measure of trainees' performance is how a unit performs during a peacekeeping mission. However, State and DOD are unable to collect assessments of peacekeepers' performance during a mission. GPOI trainers in Senegal, Ghana, and Ethiopia said they occasionally receive UN after-action reports that provide feedback on the performance of military peacekeepers trained by GPOI. However, State and DOD do not routinely collect or analyze these reports or independently assess how GPOI-trained troops performed. Without consistent reporting on the performance of the deployed units, State is unable to compare the performance of units trained within a country or region or between regions to identify similarities in the proficiency of military peacekeepers trained by GPOI. State has some procedures in place to monitor whether contractors are meeting cost, schedule, and performance requirements in training peacekeepers and providing advisor support. Specifically, State has assigned personnel in its Bureau of African Affairs to monitor the performance of contractors providing advisor support in Africa, established a program management team to oversee the activities of contractors providing training in Africa, and developed a plan to regularly monitor contractor performance. In addition, State receives regular status reports from the contractors.
Quality assurance, especially regular surveillance and documentation of results, is essential to determine whether goods or services provided by the contractor satisfy the contract requirements. Surveillance includes oversight of a contractor’s work to provide assurance that the contractor is providing timely and quality goods or services and to help mitigate any contractor performance problems. An agency’s monitoring of a contractor’s performance may serve as a basis for past performance evaluations that are considered during future source selections. State has a plan for monitoring and evaluating the performance of its contractors providing training in Africa. The quality assurance plan specifies the desired outcomes of the training provided, performance standards that the contractors are to meet, and State’s process for evaluating contractors’ performance. Although State’s quality assurance plan identifies the process for evaluating contractors’ performance, State officials implementing the program are unclear which office at State is responsible for recording the evaluation in the contractor performance system, as required by State regulations. State’s contracting officials were uncertain whether evaluations of past contractor performance for training in Africa had been entered in the system by the program management team. An official from the ACOTA program management team told us they are not responsible for entering performance evaluations in the contractor performance system, in part because they are unable to access the system. However, evaluations of contractor past performance are prepared and maintained by this team, according to this official. State provided some evidence that indicated that evaluations of contractors’ past performance had been prepared by the ACOTA program management team and considered when new task orders were placed on the existing contract for training in Africa. 
However, we did not examine State's compliance with its quality assurance plan or fully assess the extent to which evaluations of contractors' performance had been completed and considered in awarding training task orders. State cannot fully account for the training activities of more than 2,700 military peacekeeping instructors trained by the GPOI program. Further, State has supported the training of more than 1,300 stability police instructors at COESPU but cannot account for either the training or the deployment activities of these instructors. The activities of trained instructors provide a measure for the progress made in building a partner country's capacity to sustain its peacekeeper deployments in the future. Although State and DOD have trained more than 2,700 military peacekeeper instructors to continue training in their respective countries, State cannot fully determine whether this training has taken place. For example, as of April 2008, State had been able to identify training conducted by GPOI-trained instructors in only two countries. The deployment of peacekeepers trained by these instructors is another measure of the program's ability to increase peacekeeping contributions. In March 2008, 47 GPOI partner countries had military peacekeepers and observers deployed to UN peacekeeping missions. State cannot fully identify how many troops from these 47 countries, if any, were trained by the 2,700 GPOI-trained military peacekeeping instructors. COESPU has estimated that instructors trained at its training program will train an additional 4,500 stability police, according to COESPU documents and officials. The training activities of COESPU graduates are one measure of the efforts by Italy and the United States to increase worldwide capacity for stability police.
Although State has supported the training of more than 1,300 stability police instructors at COESPU, State and COESPU have been unable to fully account for training conducted by these instructors in their home countries. Specifically, State has only been able to account for the indigenous training of one stability police unit conducted by COESPU graduates from one country, according to a State document. State and COESPU also are unable to identify if stability police units deploying to peacekeeping missions were trained by graduates from COESPU or if these graduates have deployed to missions themselves. First, State has been able to account only for the deployment of a stability police unit from the one country in which the unit was trained by graduates of COESPU, as of April 2008. Second, although COESPU has trained some instructors that are likely to lead stability police units in peacekeeping operations, State and COESPU cannot fully account for the deployments of these instructors. Specifically, State can account for the deployments of 13 of 236 students from India who were trained at COESPU, as of April 2008. According to the GPOI strategy and State officials, before countries and their peacekeepers can receive GPOI training and other assistance, they must generally meet certain criteria including having an elected government, an acceptable human rights record, and the willingness to participate in peace support operations. GPOI partner countries generally met the criteria for inclusion in the program. However, for 24 of the 52 countries, State’s human rights reports for 2007 identified human rights violations by security personnel. To comply with U.S. laws, State must verify that it does not have credible evidence that the foreign security forces identified to receive assistance have committed gross violations of human rights prior to the provision of training. 
We found that military peacekeepers and stability police were not always screened or were not properly screened for human rights abuses, as required by State guidance for the legislative requirements. State, in consultation with DOD, has selected 52 partner countries to participate in GPOI based on a list of criteria identified in the program's strategy. Partner countries should have an elected government and acceptable human rights record, willingness to participate in peace support operations, and agreements to ensure that U.S. training and equipment are used for the purposes intended, according to agency documents. State and DOD periodically review whether partner countries continue to meet these criteria and may suspend GPOI funding in cases where criteria are not met, according to agency officials. For example, funding of GPOI activities for Thailand was suspended after a military coup overthrew the democratically elected government in 2006. However, some DOD officials expressed concern about the selection of certain countries and the criteria used to select countries. For example, officials in the Africa and Pacific commands and the Joint Staff said they did not agree with the selection of two countries in Africa and Asia, and they believed the selection would limit available resources for ongoing activities in other countries. In another example, a DOD official said that additional criteria, such as military HIV infection rates or attrition rates, should be taken into account in selecting partner countries because these factors affect a country's ability to deploy. For the training of stability police at COESPU, Italy and the United States jointly decided which countries would participate. We found that most of the 52 partner countries met the participation criteria, but State's human rights reporting for 2007 identified human rights violations by security personnel in 24 countries.
State officials cited a number of reasons to justify the inclusion of these countries in GPOI: State did not consider the human rights violations for some countries to be a systemic problem in the military or stated that these violations were associated with private security companies, not with the countries' military personnel; some countries were selected to support other strategic goals; and participation would allow some countries to receive human rights training not otherwise available. In addition, State officials said that the selection criteria are recommended but not required by the program and that the United States engaged in diplomatic discussions with these countries to improve their human rights records. These officials indicated that the vetting of trainees for human rights abuses guards against the training of any human rights violators. Finally, State formally submits the list of GPOI partner countries to Congress each year to ensure congressional oversight of the countries selected. Before providing any training or equipment support under GPOI, State must verify that it does not have credible evidence that the foreign security forces identified to receive assistance have committed gross violations of human rights. In our review of vetting documentation of 2007 GPOI trainees from 14 countries identified in State reporting to have documented human rights violations by security personnel, we found cases where individuals and units that received training were not properly vetted. Each of the annual Foreign Operations Appropriations Acts from 1998 to 2006 included a provision, commonly referred to as the Leahy Amendment, that restricted the provision of assistance appropriated in these acts to any foreign security unit when the Secretary of State has credible evidence that the unit has committed gross violations of human rights.
In the fiscal year 2008 Consolidated Appropriations Act, the Foreign Assistance Act was permanently amended to restrict the provision of assistance to foreign security units when credible evidence exists of gross violations of human rights by that unit. While the legal provisions restrict funding to "any unit of the security forces of a foreign country," State guidance is to screen or vet individuals who are identified for individual training or who are members of newly formed or composite units. Should an entire existing unit receive the training together, State guidance requires vetting of the unit name and commander only. To implement these legislative restrictions, State's guidance calls for U.S. embassies and State bureaus to screen individuals or units proposed for training to determine whether these foreign security forces have committed gross human rights violations. We found that State did not vet some individuals and units for human rights violations before training. Specifically, none of the 81 military peacekeepers from Honduras who received training in 2007 were vetted before participating in GPOI-funded training courses. In addition, 16 of the 382 military peacekeepers and stability police trained in Bangladesh, India, Indonesia, Nigeria, and Sri Lanka in 2007 were not vetted, and a 665-person Nigerian battalion trained by GPOI was not vetted. In response to our findings, State officials have begun the vetting process for the individuals from Honduras who received GPOI training. We also found that some individuals who received training in 2007 were not screened in accordance with State's guidance for vetting newly formed or composite units. Specifically, the commanders of seven composite units in Niger, Nigeria, and Uganda and the commander of the ECOWAS standby force were screened for human rights violations, but the individual members of these units were not vetted, as required by State guidance.
As a result of these lapses in vetting, it is possible that State and DOD have provided training to security personnel who committed human rights violations. State and DOD officials in the countries we visited said they face challenges in conducting vetting prior to training due to the difficulties both in getting the names of individuals in units prior to training and in having sufficient time to properly conduct vetting in the country and in Washington, D.C. State officials in the ACOTA office told us they have taken corrective action to prevent further vetting oversights by creating a new position in their office that would be responsible, in part, for monitoring the vetting data for all training provided in Africa. The growth of peace support operations has increased the importance and need for more comprehensive measures to ensure worldwide capability and capacity for responding to peacekeeping demands. The United States has taken the lead in the G8 goal to build this peacekeeping capability worldwide through GPOI. Since 2005, State and DOD, focusing the majority of GPOI resources on efforts in Africa, have undertaken numerous activities to increase countries’ ability to serve in peacekeeping missions, including the training of nearly 40,000 military peacekeepers. However, it appears that GPOI will fall short of reaching certain established goals, such as training 75,000 military peacekeepers by 2010. State also has faced some challenges in supporting COESPU’s need for additional staff, accounting for the delivery and transfer of nonlethal training equipment to partner countries, evaluating the quality and effectiveness of its training program, and screening trainees for human rights abuses. Addressing these challenges will enhance GPOI’s effectiveness as the program nears the end of its 5-year authorization and will help ensure that U.S. resources are focused on building partner countries’ capabilities to provide quality peacekeepers worldwide. 
To meet the G8 commitment to expand global capabilities for peace support operations, GPOI activities that extend beyond 2010 will require more emphasis on developing the capabilities of African partners to maintain peacekeeping operations on their own. To enhance GPOI’s effectiveness, better identify program outcomes, and ensure proper screening for human rights violations, we recommend that the Secretary of State take the following six actions: 1. Work in consultation with DOD to assist Italy in staffing the key unfilled positions at COESPU to better evaluate progress made and monitor results. 2. Monitor implementation of new procedures to account for delivery and transfer of nonlethal training equipment to partner countries on an ongoing basis. 3. Provide additional guidance to U.S. missions to help the United States and Italy collect data on the training and deployment activities of COESPU graduates in their home countries. 4. Develop and implement, in consultation with DOD and in accordance with the GPOI strategy, the use of standard military task lists and related training standards to enable program managers to evaluate the quality of training and measure the proficiency of trainees program- wide. 5. Ensure that the evaluations of contractor performance of training in Africa are properly recorded into the contractor performance system as required by agency regulations. 6. Develop a system for monitoring the vetting activities for all GPOI training and ensure that all individuals in composite units are vetted for human rights violations, as required by State policy. To ensure that GPOI activities enhance the capabilities of countries to maintain peacekeeping operations on their own, we also recommend that the Secretary of State, in consultation with DOD, take the following two actions: 1. Assess estimated resources and time frames needed to complete peacekeeping skills and infrastructure activities in Africa by 2010. 2. 
Ensure that any plans for extending GPOI activities beyond 2010 identify sufficient resources for developing long-term peacekeeping skills and infrastructure in Africa. We provided draft copies of this report to the Departments of State and Defense. We received written comments from State and DOD, which we have reprinted in appendixes VII and VIII, respectively. State and DOD provided technical comments, which we have incorporated in the report as appropriate. State concurred or partially concurred with seven of the eight GAO recommendations and provided additional information to highlight the program's achievements. State did not concur with GAO's recommendation to develop a method for evaluating GPOI training. State noted that methods already exist to evaluate the quality of the training program and measure the proficiency of trainees. We disagree that State's current evaluation methods address our recommendation. State has not developed military task lists and associated training standards that can be applied at all GPOI training sites; sites currently use varying standards to assess the proficiency of trainees. DOD agrees with the need for greater standardization and more analysis of trainee performance. We modified the recommendation to clarify the need for GPOI-wide standard military task lists and related training standards that State, in consultation with DOD, should develop in accordance with the commitments made in the GPOI strategy. State also commented that it now projects that GPOI will train 75,000 peacekeepers by July 2010 based on new training rates. We were unable to validate State's new data since, as recently as May 2008, program officials indicated that slow expenditure rates would delay State's efforts to reach the 2010 training goal. DOD agreed with the findings and concurred or partially concurred with our recommendations.
DOD agreed with the need for greater standardization and more analysis of trainee performance and agreed that State should work with DOD and Italy to validate personnel shortfalls at COESPU and fill the identified positions. DOD also stated that an assessment of resources and time frames required to achieve GPOI objectives should apply to all regions engaged by the GPOI program. We did not revise this recommendation because it is intended to address our finding that State is unlikely to complete skills and infrastructure activities in Africa by 2010. We are sending copies of this report to the Secretaries of State and Defense. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IX. In response to a congressional mandate in the fiscal year 2008 Defense Authorization Act to review the Global Peace Operations Initiative (GPOI), we assessed (1) the progress made in meeting GPOI goals, (2) whether State is consistently assessing the quality and effectiveness of the training program, and (3) the extent to which countries meet program criteria and whether program participants are adequately screened for human rights abuses. We attended a planning conference in October 2007 in Washington, D.C., for GPOI implementers and an October 2007 conference with Group of Eight (G8) members and other partners to discuss worldwide efforts to enhance peacekeeping. Our scope of work included the Departments of State (State) and Defense (DOD) in Washington, D.C.; U.S. 
Combatant Commands for Africa, Europe, Pacific, and Southern Hemisphere; and site visits to Ethiopia, Ghana, Guatemala, Italy, Mongolia, Senegal, and Sierra Leone. We observed training and visited facilities refurbished with GPOI funds during site visits to Ghana, Guatemala, Italy, Mongolia, and Senegal. In selecting field work countries, we considered the following criteria: funding allocations, number of military peacekeepers trained, number of trained peacekeepers that have deployed to missions, training schedules, and unique characteristics, such as the location of Italy’s training school for stability police and the equipment depot in Sierra Leone. We selected these countries in Africa, Asia, and Central America because they had received more funding allocations and had trained and deployed more troops than other GPOI partner countries in those regions and also were scheduled to conduct training during our visits. We selected Italy to assess U.S. support to stability police training at the Center of Excellence for Stability Police Units (COESPU), Germany to interview officials from the U.S. European and African commands, Sierra Leone to assess the GPOI equipment depot, and Ethiopia to assess GPOI activities with the African Union. To assess the progress GPOI made in meeting its goals, we reviewed data gathered by State on the number of troops trained and the equipment provided, reports from agencies and COESPU of activities at COESPU, and monthly and annual progress reports. We compared the information in these sources with benchmarks established in the GPOI strategy for the goals and objectives of the program. In addition, we collected and reviewed information on obligations and expenditures of GPOI funds and surveyed the combatant commands responsible for implementing GPOI to estimate any additional funds they used to support GPOI activities. 
To assess the reliability of State’s data on troops trained and equipment provided, as well as obligations and expenditures, we reviewed relevant documentation and spoke with agency officials, including the GPOI program assessment team, about data quality control procedures. We determined that the data were sufficiently reliable for the purposes of this report. To determine whether State is consistently assessing the quality and effectiveness of the GPOI training program, we identified the training provided and determined what training assessments were conducted. We reviewed training programs of instruction, training contracts and task orders, and related training documents. We also interviewed State and DOD officials in Washington, D.C., and during site visits to the countries listed previously, as well as trainers in Ethiopia, Ghana, Guatemala, Mongolia, and Senegal. To identify the training provided at COESPU, we reviewed training documents and conducted interviews with Italian officials at COESPU. To identify the measures that State has in place to oversee contractor activities for training and advisor support in Africa, we reviewed contracts and related documents and interviewed State officials, including officials from the Office of Acquisitions Management and the Bureau of African Affairs. To identify the activities of trained instructors and stability police, we reviewed data gathered by State on the deployments of trained military peacekeepers, including instructors and stability police instructors, and data gathered by State and COESPU on the training activities of these instructors. We also interviewed Italian officials at COESPU, State officials, and training officials and contractors. To assess the reliability of these data, we reviewed relevant documentation and spoke with agency officials, including the GPOI program assessment team, about data quality control procedures. We identified a limitation in the data on deployments of trained peacekeepers. 
State is not able to obtain the individual names of those who deploy to compare with the rosters of those trained under GPOI. However, because State can identify which of the units trained under GPOI have deployed, and because any individual who joins a peacekeeping unit after its GPOI training may receive additional training from unit officers, we determined that the data on military peacekeepers deployed were sufficiently reliable for the purposes of reporting the deployments of GPOI-trained peacekeepers. For the data on the activities of instructors trained under GPOI, we found that State and COESPU did not have complete or reliable data on the training activities of these individuals in their home countries. We also found that COESPU and State did not have sufficient information to identify the deployment or training activities of stability police instructors trained at COESPU. To determine the extent to which countries meet program criteria and whether participants are adequately screened for human rights abuses, we examined the GPOI strategy and interviewed State and DOD officials in Washington, D.C., and during site visits to the countries listed previously. To determine how human rights violations were taken into account, we compared State’s 2007 human rights reports, which identified countries with documented human rights violations by security personnel, with the list of GPOI partner countries. We also reviewed State’s human rights reports to identify whether partner countries had an elected government. To determine whether GPOI countries showed a willingness to deploy, we examined which countries had deployed troops on United Nations (UN) peacekeeping missions. To ensure that end-use and re-transfer provisions for equipment and training were agreed to, we reviewed whether Section 505 agreements were signed with each of the countries. 
We interviewed State officials and collected additional information for countries that did not clearly meet some of these criteria. In addition, we reviewed State documents identifying human rights vetting procedures. We selected 14 countries with documented human rights violations by security forces that received training in 2007 and assessed whether individuals and units trained in these countries were vetted for human rights violations. To do this, we compared vetting records from State for the training provided to individuals and units from these countries with the training rosters provided by State. We conducted this performance audit from August 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Fifty-two countries have received GPOI training, equipment, or other support to enhance their peacekeeping capabilities and contributions. Table 3 provides a list of partner countries that received support for their military peacekeeping, stability police, or both, from 2004 to 2008, as of April 2008. The members of the G8 and other nations have supported the commitments of the 2004 G8 Summit and GPOI. The information below describes the nature of contributions made by the international community but is not a comprehensive list of all contributions made by the G8 and other nations. 
G8 nations have contributed to peace support operations in a number of ways, including the training and equipping of military peacekeepers, individual police, and stability police; supporting the development of peacekeeping doctrine; providing funding to support national and regional peacekeeping training centers; providing funding and logistical support to regional organizations; and establishing a stability police training school. For example, three G8 nations have provided instructors to the COESPU, according to State and COESPU officials. In another example, countries have provided equipment to support the troops deployed to peacekeeping missions. Contributions of G8 nations are largely for activities in Africa or in support of peacekeeping missions in this region, according to State documents. According to State, in 2007, the G8 and other nations identified 760 peacekeeping-related programs, events, and activities that member states were conducting in Africa alone. The G8 and other nations also have directly contributed to the U.S. GPOI program. According to State, 19 countries have contributed to the U.S. program, primarily by providing training instructors to support GPOI-funded training. For example, 4 countries provided instructors to the Central American peacekeeper training school in Guatemala and 14 countries provided instructors to the multilateral peacekeeper training exercises held in Mongolia in 2006 and 2007. State paid travel costs for all the training instructors for the Central American training. For the Mongolia exercises, seven countries paid their own way, and State and DOD paid for the remaining countries. Two countries also have provided funding and personnel support directly to State for GPOI. Specifically, the Netherlands has committed to provide State with $7 million per year for 3 years, to be used for peacekeeping training and equipment activities in Africa. 
According to State officials, about $5.3 million was received at the end of 2007, and they expect to receive the remaining $1.7 million for 2007 in the near term. State officials in the Bureau of African Affairs told us that two additional countries have indicated plans to provide a total of about $37 million directly to State to support peacekeeping missions in Darfur and Somalia. State and DOD have trained nearly 40,000 military peacekeepers from a total of 43 countries and the Economic Community of West African States (ECOWAS). As of April 2008, about 56 percent of GPOI-trained military peacekeepers have deployed to peacekeeping missions, and the majority have deployed from African partner countries. As table 4 shows, of the 39,518 military peacekeepers trained by GPOI, almost 22,000 have deployed to peacekeeping missions. According to State, these peacekeepers have deployed to 12 United Nations or African Union missions, as well as other missions not supported by the UN or the African Union. African partner countries have deployed the majority of GPOI-trained military peacekeepers—97 percent or 21,435—and the remaining 3 percent have deployed from partner countries in Asia. An additional 6,277 military peacekeepers from African partner countries were trained in anticipation of deployments to UN missions in the near future, according to State reporting. Table 5 provides information on the type of equipment that has been or may be provided to partner countries to support training and deployments for peacekeeping. Table 6 provides information on the type of equipment that has been provided to COESPU to support stability police training. State officials indicated that military peacekeepers keep some of the individual training equipment for use during deployments. Table 7 provides information on the type and status of activities that State and DOD have conducted to build skills and infrastructure to meet partner countries’ long-term needs to sustain peacekeeping. 
The following are GAO’s responses to the Department of State’s letter dated June 18, 2008. 1. State asserts that GPOI is on track to meet its objectives with over 35,000 peacekeepers deployed to 18 peacekeeping operations. We disagree that 35,000 peacekeepers have deployed to 18 missions with the training or support of GPOI. State’s assertion conflicts with GPOI evaluation team data that identified 22,000 peacekeepers trained by GPOI that deployed to 12 UN or AU missions, as well as other missions not supported by the UN or AU, as of April 2008. State’s statistics include peacekeepers that GPOI trained but that have not deployed, peacekeepers that GPOI supported but did not train, and troops deployed to Iraq and Afghanistan (non-UN missions). Appendix IV provides additional information on the peacekeeper deployments of GPOI partners. 2. State asserts that GPOI objectives will be achieved under the current conditions and within projected resource levels. We disagree with this assessment because, according to State’s own training projections, State is not likely to train 75,000 military peacekeepers by 2010; moreover, State faces delays in providing nonlethal equipment to deployed peacekeepers and is unlikely to complete planned skills and infrastructure activities in Africa by 2010. In addition, State has not provided additional support for requested staff positions at COESPU that would facilitate the evaluation of progress made at COESPU. 3. State now projects that GPOI will train 75,000 peacekeepers by the third quarter of 2010 based on new training rates and asserts that we do not provide a realistic projection. We were unable to validate this information. As of April 2008, the number of military peacekeepers trained is lower than the target number needed to meet the goal of 75,000 by the end of 2010. 
As recently as May 2008, officials from the GPOI office in the Bureau of Political-Military Affairs and its GPOI evaluation team indicated that slow expenditure rates related to training rates would delay their efforts to reach the goal by 2010. Accordingly, we are unable to validate State’s new projections provided in its comments to this report. 4. State asserts that it has contributed $10.5 million to COESPU and plans to provide an additional $4.5 million. We disagree that this is a contribution already provided to COESPU. State has obligated $15 million for COESPU, which includes the $10.5 million and $4.5 million, but has only provided $9 million of that amount to COESPU, according to State funding data identifying expenditures as of April 2008. 5. State has stated that the United States established a virtual donors’ coordination mechanism to enable deploying nations to facilitate donor assistance in transportation and logistics support. We agree that a communication system has been established; however, we note that the mechanism for facilitating this support is an e-mail system. We also note that the system was established in the fall of 2007 and that, as of April 2008, only one request had been communicated by State to donors through this system, according to the State officials responsible for this system. 6. State presents information on a number of activities that it asserts were conducted under GPOI to improve the capabilities of regional organizations to plan, train for, and execute peacekeeping missions. We disagree that GPOI has conducted all of these activities and believe that the activities listed in State’s comment include a combination of planned and completed activities. In appendix VI we have presented the GPOI activities that have been completed to build skills and infrastructure for peacekeeping in support of the GPOI objective to assist partners in achieving self-sufficiency and maintaining GPOI proficiencies. 
The information we have presented was obtained from expenditure information and data provided by the GPOI assessment team and GPOI program office. To confirm activities that were completed as of April 2008, we cross-checked information reported by the GPOI program with the GPOI implementers responsible for these activities, including the Africa Bureau and its ACOTA program, and U.S. African, Pacific, and Southern Commands. 7. State asserts that methods already exist for evaluating the quality of training and measuring the proficiency of trainees in critical skills. We disagree that these methods address our recommendation. State has not developed military task lists and associated training that can be applied at all GPOI training sites, although the GPOI strategy in 2006 identified the need for the development of military task lists and related military training standards to contribute to standardization, interoperability, and sustainability, and to ensure the proper use of resources. The strategy also indicated that developing such standards would help efforts to evaluate the overall effectiveness of GPOI training programs, events, and activities. We maintain that, in areas where the training is consistent, there is value in evaluating trainees’ performance against a standard level of proficiency in the skills taught, in order to identify the quality of training provided across the program and the proficiency of trained troops program-wide. We modified the recommendation to clarify the need for GPOI-wide standards to provide program managers with the ability to measure proficiency of GPOI-trained troops program-wide and in accordance with the commitments made in the GPOI strategy. 8. 
State asserts that its process for vetting composite units, which is intended to prevent potential recipients from receiving training where there is credible evidence that they have committed gross violations of human rights, is effective and that our findings on the vetting of composite units trained under GPOI are unfairly applied against an updated agency policy on vetting composite units. We disagree. The seven composite units we identify in this report were vetted and received training after the policy change in April 2007. We identified these units in our review of vetting records provided by State’s ACOTA office and rosters of trained individuals provided by State’s GPOI evaluation team. According to the data provided by State, three composite units from Niger received training in August 2007 and November-December 2007, one composite unit from Nigeria received training in September-October 2007, and three composite units from Uganda received training in July 2007. Records for these units indicate that vetting was completed between June 2007 and November 2007. Key contributors to this report include Audrey Solis, Assistant Director; Monica Brym; Justin Monroe; and Diahanna Post. Technical assistance was provided by Ashley Alley, Johana Ayers, Joseph Brown, Lynn Cothern, Barry Deweese, Nisha Hazra, Chris Kunitz, Isidro Gomez, Matthew Reilly, Elizabeth Repko, Ronald Schwenn, Jay Smale, Adrienne Spahr, Barbara Steel-Lowney, Laverne Tharpes, and Heather Whitehead.
In 2004, in response to the Group of Eight (G8) Sea Island Summit, the United States established the Global Peace Operations Initiative (GPOI), a 5-year program to build peacekeeping capabilities worldwide, with a focus on Africa. Since 2005, the Department of State (State) has allocated $374 million and selected 52 countries to participate in the program. Congress mandated that GAO assess and report on the initiative. This report assesses (1) progress made in meeting GPOI goals, (2) whether State is consistently assessing the quality and effectiveness of the training, and (3) the extent to which countries meet program criteria and whether trainees are adequately screened for human rights abuses. GAO assessed State and Department of Defense (DOD) data and program documents, interviewed U.S. and host country officials, and conducted field work in eight countries. State and DOD have made some progress in achieving GPOI objectives in three principal areas: training and equipping peacekeepers, providing equipment and transportation for peacekeeping missions, and building peacekeeping skills and infrastructure, but challenges remain in meeting these goals. First, nearly 40,000 military peacekeepers have been trained and some training equipment has been provided. However, State is unlikely to meet the goal of training 75,000 military peacekeepers by 2010 and has encountered problems in accounting for the delivery of training equipment to countries. Second, State supports an equipment depot in Africa and has supplied equipment for missions in Haiti, Lebanon, Somalia, and Sudan, but has been delayed in providing some equipment in support of these missions. Third, State and DOD have trained 2,700 military peacekeeping instructors, conducted several multinational peacekeeping exercises, and refurbished some training centers. 
However, in Africa State has targeted a smaller share of resources to building peacekeeping skills and infrastructure, as opposed to training and equipping peacekeepers, than it has in other regions, in part because of the needs and capabilities of the region and a focus on training African peacekeepers for current missions. Of the $98 million State has spent in Africa, 12 percent was spent on building skills and infrastructure needed for long-term peacekeeping capabilities, compared to 20 percent to 51 percent in other regions. While 56 percent of trained military peacekeepers--primarily from Africa--have deployed to peacekeeping missions, State faces challenges in assessing the proficiency of trained peacekeepers against standard skills taught in training and accounting for the activities of trained instructors. Although GPOI training standards follow U.S. military doctrine and United Nations requirements, State does not have a program-wide standard to assess the proficiency of military peacekeepers in skills taught. Further, State is unable to fully account for the training activities of the trained instructors. Collectively, these program limitations result in State's inability to assess the overall outcomes of its program in providing high-quality, effective training. State, in consultation with DOD, has selected 52 partner countries that generally meet program criteria, but in some cases State did not screen trainees for human rights abuses. For 24 countries, State's human rights reporting identified documented human rights violations by security forces in 2007, and GAO found that peacekeepers were not always screened or were not properly screened for human rights abuses. For example, GAO found that 81 individuals from one country received military training but were not screened for human rights violations.
In the international sector, the routes that airlines can fly, the frequency of their flights, and the fares they can charge are governed by 72 bilateral agreements between the United States and other countries. Many of these agreements, including the accord with the United Kingdom, are very restrictive. Since the late 1970s, U.S. policy has been to negotiate agreements that substantially reduce or eliminate bilateral restrictions. DOT’s Office of the Assistant Secretary for Aviation and International Affairs, with assistance from the State Department, is responsible for negotiating these agreements and awarding U.S. airlines the right to offer the services provided for in those agreements. In January 1993, DOT granted antitrust immunity to the Northwest/KLM alliance in conjunction with the U.S.-Netherlands open skies accord. In April 1995, DOT issued the U.S. International Aviation Policy Statement in which it reiterated its desire for open skies agreements and endorsed the growing trend toward alliances between U.S. and foreign airlines. Since issuing that statement, DOT has negotiated a number of more liberal agreements, including open skies accords with Germany and numerous smaller European countries. In 1996, the agency granted antitrust immunity to the alliances between United and Lufthansa, which is Germany’s largest airline, and between Delta and several smaller European carriers. In announcing their proposed alliance, American Airlines and British Airways emphasized that they are at a competitive disadvantage with these alliances because the airlines in those alliances can, among other things, better coordinate service and jointly set fares. Despite success in negotiating open skies agreements throughout much of Europe, DOT has had very little success with the United Kingdom, our largest aviation trading partner overseas. The current U.S.-U.K. accord, commonly known as “Bermuda II,” was signed in 1977 after the British renounced the prior agreement. 
Bermuda II restricts the number of U.S. airlines that can serve Heathrow to two carriers—currently American Airlines and United Airlines. DOT has expressed increasing dissatisfaction with Bermuda II and has attempted to negotiate increased access for U.S. airlines to Heathrow. Negotiations with the British take on particular importance because of the size of the U.S.-U.K. markets. In 1996, 12 million passengers traveled on scheduled service between the United States and the United Kingdom, more than twice the number in the U.S.-Germany markets and three times the number in the U.S.-France markets. Competition is restricted in the U.S.-U.K. markets because Bermuda II, among other things, sets limits on the amount of service airlines can provide and prevents all U.S. airlines, except American and United, from flying to and from Heathrow. These restrictions on competition result in fewer service options for U.S. and British consumers. They also likely result in higher airfares. However, the extent to which airfares are higher is uncertain. DOT does not have data on the fares paid by passengers flown by BA or Virgin Atlantic if those passengers’ itineraries did not involve a connection with a U.S. carrier, because it has generally not required foreign airlines to report data from a sample of their tickets, as it requires U.S. airlines to do. Bermuda II’s limits on competition also disproportionately affect U.S. airlines. In contrast to the continuing restrictions placed on U.S. airlines, the United Kingdom was successful in negotiating increased access for British carriers to the U.S. markets in the early 1990s. Partly as a result, between 1992 and 1996, the British carriers’ share of the U.S.-U.K. markets rose from 49 percent to 59 percent. As figure 1 shows, this gain by British Airways and Virgin Atlantic has come primarily at the expense of the U.S. airlines that are not allowed to serve Heathrow. 
The proposed AA/BA alliance is subject to review by the European Commission, several agencies within the U.K. government, and DOT. The European Commission, the U.K. Department of Trade and Industry, and DOT have decision-making authority over the proposed alliance. The U.K. Office of Fair Trading and the U.S. Department of Justice’s Antitrust Division (Justice) have advisory roles and provide analysis and comments to their respective decisionmakers. According to officials, the process for reviewing the AA/BA alliance is complicated because the process is new and untested and because some European laws have not previously been applied to airline alliances. The European regulatory agencies have nearly completed their reviews, and the formal U.S. review has yet to get under way. Both the European and the U.S. reviewers have access to extensive information—including confidential proprietary data—to evaluate the competition issues arising from the AA/BA and other alliances. This information includes data on airline capacity, market shares on specific routes, and passenger travel statistics. In July 1996, because of concerns about the anticompetitive effects of the alliances, the European Commission’s Directorate General for Competition initiated a review of the proposed AA/BA alliance and three other ongoing alliances: United/Lufthansa/SAS; Delta/Swissair/Sabena/Austrian Airlines; and Northwest/KLM. This review is examining a broad range of competition issues on AA/BA, including access to slots and facilities at Heathrow Airport; the frequency of service offered by AA and BA, which would dominate the market at Heathrow; and AA/BA’s sales and marketing practices, such as frequent flier programs, travel agent commission overrides, corporate incentive agreements, and computer reservation system practices. The European Commission’s Directorate General for Competition expects to issue its draft remedies for addressing the anticompetitive effects of AA/BA within the coming weeks. 
Officials added that their reports on other alliances should be completed soon afterward. Various parties then have the opportunity to provide comments and possibly participate in oral hearings on the draft remedies. After it obtains comments from the interested parties, the Directorate General for Competition prepares a document outlining its recommendations on whether to approve the alliance with conditions or to withhold approval, and submits the document to the European Commission’s Member States Advisory Committee for review. After the Advisory Committee’s review, the Directorate General for Competition incorporates appropriate comments and prepares its draft final ruling, which either lays out the conditions that must be met in order for the alliance to be approved or disapproves the alliance. It becomes the ruling of the Commission when it is adopted by the European Commission’s College of Commissioners. Thus, the European Commission’s final decisions are not expected for several more months. The U.K. Department of Trade and Industry is conducting its own review of the proposed AA/BA alliance. It has asked the U.K. Office of Fair Trading to investigate and provide advice on the proposed alliance. The Office of Fair Trading’s investigation, which began in June 1996, examined a broad range of issues raised by the proposed alliance, including competitive impacts of the alliance on routes, hubs, and networks within the U.S.-European markets; the frequency of service in the U.S.-U.K. markets; the pooling of frequent flier programs; and access to slots at Heathrow. The Office of Fair Trading issued a draft report in December 1996 that called for AA/BA to, among other things, make available to other airlines up to 168 slots per week at Heathrow for use only on U.S.-U.K. transatlantic services and allow third-party access to their joint frequent flier program in those cases in which that party does not have access to an equivalent program. 
The report took into account the views of third parties on conditions that should be placed on the alliance to remedy competition concerns. Before providing its final advice on the proposed AA/BA alliance, the U.K. Office of Fair Trading is awaiting the European Commission’s publication of its draft remedies. The Secretary of State for Trade and Industry will decide on the case after receiving final advice from the Office of Fair Trading. The U.K. agencies reviewing the proposed AA/BA alliance are in contact with the European Commission and have a duty to cooperate with it. If the United Kingdom’s decision on the proposed AA/BA alliance differs from the European Commission’s, the differences will have to be reconciled. According to European Commission officials, this could require a judgment by the European Court of Justice in Luxembourg, which ultimately judges the sound application of the European Union’s treaties by the institutions of the Union or the member states. In the United States, DOT has the authority not only for approving airline alliances, but also for granting those alliances immunity from the antitrust laws. In determining whether to grant approval and antitrust immunity for an airline alliance, DOT must find that the alliance is not adverse to the public interest. DOT cannot approve an agreement that substantially reduces or eliminates competition unless the agreement is necessary to meet a serious transportation need or to achieve important public benefits that cannot be met or that cannot be achieved by reasonably available alternatives that are materially less anticompetitive. Public benefits include considerations of foreign policy concerns. In general, DOT has found code-sharing arrangements to be procompetitive and therefore consistent with the public interest because they create new services, improve existing services, lower costs, and increase efficiency for the benefit of the traveling and shipping public. 
DOT officials explained that, as with the other international code-sharing alliances that the United States has approved, they will not approve AA’s and BA’s proposed code-sharing alliance with antitrust immunity unless the United States has reached an open skies agreement with the United Kingdom. According to U.S. law, DOT is to give the Attorney General and Secretary of State “an opportunity to submit written comments about” the application. In practice, DOT and Justice officials told us that they stay in contact throughout the application process regarding their respective analyses of airline alliances. Justice’s role is advisory and is performed pursuant to the Sherman Antitrust Act and the Clayton Act, which set forth antitrust prohibitions against restraints of trade. To determine if a proposed alliance is likely to create or enhance market power and allow firms to maintain prices above competitive levels for a significant period of time, Justice applies its Horizontal Merger Guidelines, which describe the analytic framework and the specific standards to be used in analyzing mergers and alliances. A key concern is whether entry into the market would deter or counteract a proposed merger’s potential for harm. DOT officials told us that in reviewing other code-sharing alliances, the Department did not apply any written set of guidelines in its analysis. Rather, DOT has discretion in deciding the factors it will analyze and in past applications for international code-sharing alliances has considered issues raised in petitions by interested parties. Those issues generally involved market power in markets between particular hub airports, except in one instance. 
In response to United’s application for antitrust immunity in its code sharing with Lufthansa, TWA contended that Lufthansa’s control over travel agents, both through dominance of the computer reservation system and through commissions and override payments, was a serious impediment to new airlines’ entry into the U.S.-Germany marketplace. In making its final decision, DOT addressed the concern about the computer reservation system, but wrote that other forums were more appropriate for addressing the other concerns. DOT has considered, but not always completely agreed with, Justice’s comments on the extent to which particular code-sharing alliances pose threats to competition in individual markets. In the case of United/Lufthansa, for example, Justice was concerned that competition could be reduced in two nonstop markets—Chicago-Frankfurt and Washington D.C. (Dulles)-Frankfurt. DOT agreed, and “carved out” (i.e., withheld antitrust immunity from) specific airline operations in those two markets. In considering Delta’s proposed alliance, Justice identified seven nonstop markets that raised concerns of reduced competition. DOT agreed with Justice on three markets (Atlanta-Brussels, Atlanta-Zurich, and Cincinnati-Zurich) and withheld antitrust immunity for specific operations there; DOT generally disagreed with Justice and imposed different conditions on the other four city-pairs, each of which involved travel from New York. In the case of the proposed AA/BA alliance, U.S. reviews are essentially on hold. DOT cannot move forward with its review of the alliance until AA and BA file the necessary documents to make their application complete. DOT officials do not believe that AA and BA will complete their application until after the European Commission issues its draft remedies on the alliance, and BA officials confirmed that to us. 
Once DOT determines that the application is complete, interested parties—including Justice—will have 30 business days to comment on the alliance. Interested parties and AA/BA will then have another opportunity for rebuttal comments. According to its regulations, DOT may order a full evidentiary hearing at the end of the comment period. Requests for DOT to hold an oral evidentiary hearing must specify the material issues of fact that cannot be resolved without such a hearing. However, DOT has the discretion by statute whether to hold a hearing, even if requested to do so by the Attorney General or Secretary of State. Although the AA/BA application is not complete, DOT has already proposed holding an oral hearing before a departmental “decisionmaker” so that interested parties can express in person their particular opinions and views on the issues concerning the AA/BA alliance. AA and BA have characterized any type of hearing as merely a delaying tactic. Six airlines opposing the proposed AA/BA alliance, on the other hand, have argued that the kind of hearing DOT has proposed is not sufficient; they contend that questions of fact could only be adequately explored and resolved with an oral evidentiary hearing before an administrative law judge. For example, AA and BA have contended that slots are easily obtainable at Heathrow and that Gatwick is an available and competitive alternative. Other airlines have testified that it is impossible to obtain slots at Heathrow that are timely and competitive, that Gatwick is full, and, in any event, that Gatwick is not a reasonable alternative to Heathrow, especially for business travelers. DOT has told us that it may reconsider its proposed schedule for reviewing the AA/BA alliance, along with the type of hearing it would hold. 
We are not in a position to assess whether material issues of fact remain to be resolved in the proposed AA/BA alliance, but we believe it is critical that DOT avail itself of all empirical data in making its determination. Although DOT considers code-sharing agreements to be procompetitive, it has not collected sufficient data to fully analyze the long-term effects of such alliances. In our 1995 report on alliances, we found that DOT’s ability to monitor the impact of alliances was limited because foreign airlines are not required to report data from a sample of their tickets involving travel to or from the United States. In addition, U.S. carriers were not required to report traffic flying on a code-share flight. Since that report, DOT has required foreign airlines in alliances that have been granted antitrust immunity to report data on traffic to and from the United States. Even so, alliances have not been sufficiently studied to determine their long-term consequences or to allay fears that such alliances may hinder competition in the long term. The proposed AA/BA alliance has network benefits and could increase competition in markets between the United States and the European continent, the Middle East, and Africa because the number of alliances competing in these markets would increase from three to four. However, it raises serious competition issues in U.S.-U.K. markets. Competition issues arise because, under the alliance, rather than competing with each other, the two largest airlines in U.S.-U.K. markets would in essence be operating as if they were one airline. For the month of March 1998, an analysis of Official Airline Guide data indicates that AA and BA account for nearly 58 percent of the seats available on scheduled passenger flights between the United States and London. Moreover, as of March 1998, the two airlines account for 37 of the 55 total daily roundtrips (67 percent) between the United States and Heathrow offered by scheduled U.S. 
and British airlines. AA and BA currently compete with one another from six U.S. airports to Heathrow and from Dallas to London’s Gatwick airport. New York’s importance—Kennedy and Newark—is underscored by the fact that the market between these airports and Heathrow accounts for nearly one-fifth of all U.S.-London service and is more than three times the size of the Los Angeles-Heathrow market. At five of the seven airports where AA and BA compete—Kennedy, Chicago, Boston, Miami, and Dallas—these two airlines account for over 70 percent of the service, and at Los Angeles, they account for almost 50 percent. In addition, in Boston, AA and BA currently are the only carriers that serve Heathrow, and in the Dallas market, they are the only nonstop competitors. Figure 2 shows the location of seven cities where AA and BA currently compete with each other. Our review of current competitive conditions in the New York-Heathrow (Kennedy and Newark) market indicates that substantial new entry would need to occur to provide competition because of the (1) size of the market, (2) large share of that market currently held by AA and BA, (3) frequency of service in that market—15 flights a day—provided by the two airlines (compared with 3 daily flights by United and 3 daily flights by Virgin Atlantic), and (4) substantial portion of the market accounted for by time-sensitive business travelers. New entry could come from Delta and TWA, which have hubs at Kennedy, and from Continental from its hub at nearby Newark. In the Boston and Chicago markets, new nonstop service may offset the effect on competition caused by joining the two largest competitors in those markets. In the event of the alliance, time-sensitive business travelers in the Dallas-London and Miami-London markets will have fewer nonstop options and thus will likely pay higher fares for nonstop service. In the Dallas-London market, AA and BA are currently the only competitors providing nonstop service. 
In the Miami-London market, the number of nonstop competitors would fall from three to two. Several carriers told us that it is unlikely that a new U.S. competitor would attempt nonstop London service from either Miami or Dallas, since no carrier besides American maintains a large enough network from either of those airports to provide critical “feed” traffic. As a result, DOT will need to carefully examine the unique circumstances associated with these markets. At another eight U.S. cities, either BA or AA has a monopoly on nonstop service to either Heathrow (two cities) or Gatwick (six cities). In our October 1996 report on domestic competition, we found that competition was most limited and airfares highest in markets dominated by one airline. Figure 3 shows the location of eight cities where either AA or BA has a monopoly. If slots at Heathrow were made available, several U.S. carriers might serve London from their primary or secondary hubs. These slots would provide new competition to AA and BA on several routes that they currently monopolize. In particular, U.S. carriers could provide new nonstop service in the Philadelphia, Charlotte, and Pittsburgh markets. They could also provide new nonstop service from cities that are currently unserved with nonstop flights, such as Cleveland. In addition to increased nonstop competition, carriers could provide consumers with new one-stop options to compete with the alliance’s nonstop services in markets that include their primary or secondary hubs. For example, if Northwest Airlines, which is one of the largest carriers in Seattle, could serve Heathrow from its hub in Minneapolis, consumers in Seattle would have more and better connecting opportunities to Heathrow, and hence competition would be greater than it is today with BA’s being the only nonstop carrier. However, for time-sensitive travelers, these one-stop options may not be very competitive. 
Consumers in cities with no nonstop service to London, such as Des Moines or Fargo, would experience an increase in the number of one-stop options offered by competing airlines to Heathrow. When we testified last June on the proposed alliance, representatives from six major U.S. airlines told us that they would need a total of 38 daily roundtrip slots (or 532 weekly slots) at Heathrow, along with gates and facilities, to compete with the AA/BA alliance. For this testimony, we discussed the issue of access to Heathrow with officials from each major U.S. carrier, as well as with Virgin Atlantic. This time, some were not as clear on the number of slots they would need to be competitive. The officials emphasized that gaining a sufficient number of commercially viable slots, gates, and facilities at Heathrow was critically important for them to be able to compete effectively against the alliance, and several expressed doubt that the proposed alliance could be sufficiently restructured to prevent it from being inherently anticompetitive. The carriers' representatives expressed a range of views on the actions needed to compete effectively against the proposed alliance. For example, officials from Continental discussed the importance of flight frequency, which they argued is vital for business travelers, who represent the most valued passengers because of the revenue generated by business travel. For Continental to be able to compete in the New York-London market, where, they said, AA/BA would operate what amounts to a virtual shuttle, they argued that an additional three flights between Newark and London on top of their current schedule would not be sufficient. They believed they would need an additional six flights per day. Officials from United Airlines, which already participates in a global alliance, suggested that their alliance would compete effectively with AA/BA for many points beyond Heathrow. 
However, because of the importance of Heathrow, they would like to create a greater presence for their entire alliance. Thus, United officials did not indicate a desired number of slots and gates needed at Heathrow but spoke about the importance of having its STAR alliance partners (Air Canada, Thai, Varig, SAS, and Lufthansa) operate out of a single terminal at Heathrow. On the other hand, officials from Delta, which also participates in a global alliance, found the proposed AA/BA alliance to be highly anticompetitive and argued that the best way to protect the traveling and shipping public would be to disapprove the proposed alliance. Failing that, Delta officials have testified that the respective governments should guarantee that competing carriers will have unrestrained opportunities to provide service between the United States and London and receive a significant number of commercially viable slots and airport infrastructure to support those services. They suggested a minimum of 800 weekly peak-period slots would be required to provide sufficient competition at Heathrow. Virgin Atlantic officials concluded that determining the number of slots needed for a carrier to compete successfully in the U.S.-U.K. markets is difficult, but that BA would need to divest itself of a “very large” number of slots to make successful competition by another airline (besides American) a realistic possibility. As we testified last year, as a practical matter, because of a limited number of slots available at Heathrow, AA and BA would likely need to have slots transferred from them and made available to competing airlines. If the proposed alliance is approved and the regulatory agencies decide how many slots and gates should be made available, it is uncertain how long it would take the British Airports Authority, which owns and operates seven U.K. airports, including London’s Heathrow and Gatwick airports, to actually make them available to new airlines. 
For example, according to the British Airports Authority, it probably will not have the facilities to allow the STAR alliance to locate all of its members within the same terminal until Heathrow opens the new Terminal 5, which is not scheduled to open before the fall of 2004. If approved, the AA/BA alliance would bring a history of competitive service to London. Many other airlines that do not have a history of service to London, on the other hand, would have no such advantage. DOT will have to address this issue because it will be critical for new carriers to obtain access to commercially viable slots, as well as needed gates and facilities, at the same time as the proposed alliance begins joint operations. Some have suggested that AA and BA “phase in” their alliance over time, in part to give other carriers the time needed to establish themselves. If this happened, new airlines’ operations should be phased in to coincide with the alliance. According to airline officials, aviation experts, and consumer groups we interviewed, restrictions on access to slots and gates at Heathrow Airport are the most significant barriers to competition in U.S.-U.K. markets, but sales and marketing practices—which include frequent flier programs, travel agent commission overrides, multiple listings on computer reservation systems, and corporate incentive programs—may also reduce competition. They do so by reinforcing market dominance at hubs and impeding successful entry by new carriers and existing carriers into new markets, which can lead to higher fares. However, measuring the impact of these practices on fares is difficult, and limiting them would involve a trade-off between their anticompetitive effect and the consumer benefits that some of them bring. In October 1996, we reported that sales and marketing strategies, when used by incumbent airlines in U.S. domestic markets, make it difficult for nonincumbents to enter markets dominated by an established airline. 
The strength of these programs depends largely on an airline's route networks, alliance memberships, and hubs. If an airline is already dominant in a given airport, these programs will serve to reinforce this dominance. In particular:

- Travel agent commission overrides encourage travel agencies to book travelers on one airline over another on the basis of factors other than price.
- Frequent flier programs encourage travelers to choose one airline over another on the basis of factors other than price.
- Corporate fare agreements make it more difficult for point-to-point carriers to compete for corporate business.
- Bias in the computer reservation systems, in which multiple listings of a single flight offered by an alliance partner crowd the first few screens in U.S. systems, makes the booking of an alliance flight more likely.

In our October report, we noted that travel agent commission overrides and frequent flier programs are targeted at business fliers and encourage them to use the dominant carrier in each market. Because business travelers represent the most profitable segment of the industry, airlines in many cases have chosen not to enter, or have quickly exited, domestic markets where they did not believe they could overcome the combined effect of these strategies and attract a sufficient amount of business traffic. AA, which is credited with having first created frequent flier programs in 1981, is reputed to have the largest frequent flier program in the world, with more than 30 million members. Continental has more than 15 million members. European airlines, on the other hand, tend to have much smaller frequent flier memberships. BA's program, for example, has approximately 1 million members. The difference in memberships compared with U.S. carriers is due to the relative newness of such programs among European carriers and to U.S. programs' tendency to allow members to accumulate miles for activities other than flying (e.g., through car rentals or stays at hotels), while European carriers' programs are more restrictive in scope. Some airline officials we interviewed expressed concern that the scope of AA's and BA's combined route network and flight frequency, in combination with sales and marketing practices, would effectively preclude competition by other carriers in the U.S.-U.K. markets, especially at BA-dominated Heathrow. These carriers argued that the alliance would be able to exercise such market power, especially in relation to travel agents and corporate fare products, that other carriers would not be able to attract key business traffic. Officials from Continental Airlines told us that the problem with the sales and marketing practices of the combined AA/BA alliance would be that they would enhance AA/BA's dominance of market share. They said that rather than restrict AA/BA in combining their frequent flier programs, travel agent commission overrides, corporate incentive agreements, and computer reservation system practices, DOT should not grant antitrust immunity to AA/BA. TWA officials also said that these sales and marketing practices are anticompetitive and their use by the proposed alliance should be restricted. Officials from Virgin Atlantic, noting the strength and market dominance of AA and BA, questioned whether any mitigating conditions would be sufficient to limit the competitive advantage the two airlines would have if joined in a code-sharing partnership. However, United, Delta, and Northwest—each of which participates in its own global code-sharing alliance—generally disagreed that any of these sales and marketing practices represented significant barriers to their ability to compete. United told us that its alliance would compete with any other, both in terms of networks and in terms of sales and marketing practices. 
US Airways also indicated that it was not concerned with sales and marketing practices, as long as it had access to sufficient Heathrow slots and gates. Outside experts on airline competition had varying opinions on the degree to which sales and marketing practices stifle competition. While none had done research specifically on how these practices affect international air transport markets, some said frequent flier programs do not raise entry barriers for large worldwide carriers because they all have relatively strong frequent flier programs and extensive route networks. However, point-to-point carriers may be at an additional disadvantage when competing against carriers with both large route networks and strong frequent flier programs. For example, while AA and BA are perceived to have considerable advantages in their frequent flier programs compared with other nonallied or point-to-point airlines, the differences are relatively minor when compared with other U.S.-European alliances. Even so, these experts said it is almost impossible to measure the degree to which sales and marketing practices impede competition. We were unable to obtain any data on these sales and marketing practices. The airlines are not required by law to report this information to DOT, and GAO has no right of access to commercially owned data. However, we know of at least two lawsuits alleging that BA has engaged in certain sales and marketing practices that are anticompetitive in nature. Because these actions have not yet entered the trial phase, we have been unable to obtain detailed information on the alleged economic damage stemming from BA's practices, or on BA's evidence to the contrary. In past alliances, DOT has not restricted partner airlines in their use of frequent flier programs, travel agent commission overrides, or corporate fare packages. 
It has, in some of the alliances, withheld antitrust immunity from the airlines' coordination of the management of their financial interests in computer reservation system companies. While restrictions on other sales and marketing practices would be unprecedented, the European Commission, as noted earlier, is considering whether to address sales and marketing practices for all alliances. DOT and some U.S. carriers are concerned that the European Commission would regulate the industry's practices so broadly. The outside experts we interviewed concurred that restrictions on sales and marketing practices in alliances should not be imposed. They believed that any restrictions on the pooling of frequent flier programs, for example, would reduce the benefits that accrue to travelers while doing nothing to address the underlying issue of market dominance. Moreover, they said it would be difficult to limit alliance members' use of these marketing practices without eliminating them altogether; banning them involves a trade-off between their anticompetitive effect and the consumer benefits that some of them bring. In summary, Mr. Chairman, because of the challenges in addressing the barriers to entry at Heathrow, an intergovernmental agreement extending well beyond the scope of prior open skies agreements will be needed. If the U.S. government is successful in obtaining an open skies agreement with the United Kingdom, and that agreement provides for sufficient access to Heathrow, significant new entry in the U.S.-U.K. markets would likely provide substantial benefits for consumers in both countries in terms of lower fares and better service. However, because these markets have been heavily regulated for 2 decades, the incumbent airlines enjoy a competitive advantage over new carriers in the U.S.-London markets. Because of AA's and BA's dominance at certain airports and extensive networks, that advantage may be further strengthened by sales and marketing practices. 
Thus, it will be important that new competitors are able to initiate their service no later than the time at which the AA/BA alliance becomes operational. How much access would be needed for other airlines to compete effectively, and what other conditions should be imposed on the alliance, can be determined only after careful analysis of the facts to ensure that, over the long run, consumers benefit. While we recognize that decisions on all conditions must ultimately reflect numerous policy judgments, public policy should be based on significant quantitative analysis of the factors at issue rather than on anecdotal evidence. At least four governmental bodies—DOT, Justice, the European Commission, and the U.K. Department of Trade and Industry—have the ability to get the data needed for such analyses. Only then can the public be assured that such important international policy is grounded on a sound basis and that consumers benefit, both in the short and long term. Mr. Chairman, this concludes my prepared statement. Our work was conducted in accordance with generally accepted government auditing standards. We would be pleased to respond to any questions that you or any Member of the Subcommittee may have.

International Aviation: Competition Issues in the U.S.-U.K. Market (GAO/T-RCED-97-103, June 4, 1997).
International Aviation: DOT's Efforts to Promote U.S. Air Cargo Interests (GAO/RCED-97-13, Oct. 18, 1996).
Airline Deregulation: Barriers to Entry Continue to Limit Competition in Several Key Domestic Markets (GAO/RCED-97-4, Oct. 18, 1996).
International Aviation: DOT's Efforts to Increase U.S. Airlines' Access to International Markets (GAO/T-RCED-96-32, Mar. 14, 1996).
International Aviation: Better Data on Code-Sharing Needed by DOT for Monitoring and Decisionmaking (GAO/T-RCED-95-170, May 24, 1995).
International Aviation: Airline Alliances Produce Benefits, but Effect on Competition Is Uncertain (GAO/RCED-95-99, Apr. 6, 1995).
International Aviation: DOT Needs More Information to Address U.S. Airlines' Problems in Doing Business Abroad (GAO/RCED-95-24, Nov. 29, 1994).
International Aviation: New Competitive Conditions Require Changes in DOT Strategy (GAO/T-RCED-94-194, May 5, 1994).
International Aviation: Measures by European Community Could Limit U.S. Airlines' Ability to Compete Abroad (GAO/RCED-93-64, Apr. 26, 1993).
Airline Competition: Impact of Changing Foreign Investment and Control Limits on U.S. Airlines (GAO/RCED-93-7, Dec. 9, 1992).
Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991).
|
Pursuant to a congressional request, GAO discussed the United States' aviation relations with the United Kingdom, focusing on the: (1) status of the various reviews of the proposed American Airlines/British Airways (AA/BA) alliance being undertaken by the European regulatory agencies and the Departments of Transportation and Justice; (2) competitive impact of the proposed alliance; and (3) extent to which sales and marketing practices of American Airlines and British Airways should be considered in reviewing the alliance. GAO noted that: (1) European regulatory agencies have nearly completed their reviews of the proposed AA/BA alliance; (2) they are considering a range of issues that would have to be addressed as a condition of approving the alliance, including the number of slots and gates that other airlines would need at London's Heathrow Airport to compete, as well as American Airlines' and British Airways' marketing practices; (3) the United Kingdom, which is also reviewing the proposed alliance, is waiting for the European Commission to announce its draft remedies; (4) in contrast, the Department of Transportation (DOT) has not yet begun its formal review of the proposed alliance because neither airline has filed all the documentation requested; (5) DOT has reiterated that it will not approve the alliance until the United States successfully negotiates an open skies agreement with the United Kingdom; (6) the proposed AA/BA alliance raises significant competition issues; (7) currently, the two airlines account for nearly 58 percent of the available seats on scheduled U.S. and British airlines between the U.S. and London; (8) in addition, they provide over 70 percent--and in some cases all--of the available seats on scheduled U.S. and British airlines between Heathrow Airport and several key U.S. 
airports, including Chicago, Boston, and Miami; (9) as a result of this level of market concentration, DOT's approval of the alliance would further reduce competition unless, as a condition of approval, other U.S. airlines were able to obtain adequate access to Heathrow; (10) although slots, gates, and facilities are most important, most experts and some airline officials with whom GAO spoke also recognize that American Airlines' and British Airways' sales and marketing practices may make competitive entry more difficult for other airlines; (11) practices such as frequent flier programs and travel agent commission overrides encourage travelers to choose one airline over another on the basis of factors other than obtaining the best fare; (12) such practices may be most important if an airline is already dominant in a given market or markets; (13) ultimately, this may lead to higher fares than would exist in the absence of these marketing practices; (14) even so, the experts agreed that measuring the effect of these practices is nearly impossible; and (15) mitigating their effect without banning them is difficult, and banning them involves a trade-off between their anticompetitive effect and the consumer benefits that some of them bring.
|
Multiemployer defined benefit (DB) pension plans are created by collective bargaining agreements between labor unions and two or more employers, and generally operate under the joint trusteeship of unions and employers. Such plans typically exist in industries with many small employers who may be unable to support an individual DB plan, or where seasonal or irregular employment results in high labor mobility between employers. Industries where multiemployer plans are prevalent include trucking, construction, retail, and mining and manufacturing. Like single-employer DB plans, multiemployer DB plans pay retirees a defined benefit after retirement. Under the Employee Retirement Income Security Act of 1974 (ERISA), as amended, the benefits of multiemployer plans are insured by PBGC. As shown in table 1, PBGC's multiemployer fund is financed by insurance premiums paid by plans, with each multiemployer plan paying an annual premium of $12 per participant to PBGC as of 2013. In return, PBGC provides financial assistance in the form of loans to plans that become insolvent, that is, plans that do not have sufficient assets to pay pension benefits at the PBGC guaranteed level for a full plan year. Although such financial assistance is referred to as a “loan,” and is by law required to be repaid, in practice such loans have almost never been repaid, as plans generally do not emerge from insolvency. Before PBGC will provide the loans, participants' retirement benefits must be reduced to a level specified in law. Even after insolvency, the plan remains an independent entity managed by its board of trustees. This contrasts with the agency's single-employer program under which PBGC does not provide assistance to ongoing plans, but instead takes over terminated underfunded plans as a trustee, and pays benefits directly to participants. 
Congress included provisions directed at imposing greater financial discipline on multiemployer plans in the Pension Protection Act of 2006 (PPA). Specifically, as outlined in table 2, this law includes new provisions designed to compel multiemployer plans in poor financial shape to take action to improve their financial condition over the long term. The law established two categories of troubled plans—endangered status (commonly referred to as “yellow zone,” and which includes an additional subcategory of “seriously endangered”) and a more seriously troubled critical status (commonly referred to as “red zone”). PPA further requires plans in these categories to develop strategies that include contribution increases, benefit reductions, or both, designed to improve their financial condition in coming years. Multiemployer plans in endangered status are to document these strategies in a funding improvement plan, and multiemployer plans in critical status are to do so in a rehabilitation plan. The plan trustees can offer the bargaining parties multiple schedules from which to choose, but one of these must be designated as the “default schedule,” which is to be imposed if the bargaining parties do not select one of the schedules within a specified timeframe. Once plan trustees have adopted a funding improvement or rehabilitation plan, bargaining parties are to select one of the available benefit and/or contribution schedules through the collective bargaining process. The multiemployer plan is then required to report on progress made in implementing its funding improvement or rehabilitation plan. 
Because of the greater severity of critical status plans’ funding condition, such plans have an important exception to ERISA’s anti-cutback rule in that they may reduce or eliminate certain so-called “adjustable benefits” such as early retirement benefits or subsidies, certain post-retirement death benefits, and disability benefits for plan participants who have not yet retired. For example, if a critical status plan were to adopt a rehabilitation plan that proposed to eliminate an early retirement benefit, appropriate notice was provided, and the reduction agreed to in collective bargaining, then participants not yet retired would no longer be able to receive that early retirement benefit. PPA funding requirements took effect in 2008 just as the nation was entering a severe economic crisis. The dramatic decline in the value of stocks and other financial assets in 2008 and the accompanying recession broadly weakened multiemployer plans’ financial health. In response, Congress enacted the Worker, Retiree, and Employer Recovery Act of 2008 (WRERA) which contained provisions designed to help pension plans and participants by providing funding relief to help them navigate the difficult economic environment. For example, WRERA relief measures allowed multiemployer plans to temporarily freeze their funding status at the prior year’s level, and extend the timeframe for plans’ funding improvement or rehabilitation plans from 10 to 13 years. In addition, Congress enacted the Preservation of Access to Care for Medicare Beneficiaries and Pension Relief Act of 2010 (PRA), which provides additional funding relief measures for multiemployer plans as long as a plan meets certain solvency requirements. 
Generally, PRA allows a plan to amortize the investment losses from the 2008 market collapse over 29 years rather than 15 years, and to recognize such losses in the actuarial value of assets over 10 years instead of 5 years, so that the negative effects of the market decline on asset values are spread out over a longer period. Overall, since 2009, multiemployer plans have experienced improvements in funding status, but a sizeable portion of plans are still critical or endangered. According to plan-reported data—current through 2011—from the IRS (see fig. 1), while the funding status of plans has not returned to 2008 levels, the percentage of plans in critical status declined from 34 percent in 2009 to 24 percent in 2011. Similarly, the percentage of plans in endangered status also declined, and to a greater extent, from 34 percent in 2009 to 16 percent in 2011. Despite these improvements, however, the 2011 IRS data show that 40 percent of plans have still not emerged from critical or endangered status. The large majority of the most severely underfunded multiemployer plans—those in critical status—have, according to a 2011 survey, both increased required employer contributions and reduced participant benefits in an effort to improve plans’ financial positions. Plan officials explained that these changes have had or are expected to have a range of effects, and in some cases may severely affect employers and participants. While most critical status plans expect to recover from their current funding difficulties, about 25 percent do not expect to recover and instead seek to delay eventual insolvency. A 2011 survey of 107 critical status multiemployer plans conducted by the Segal Company shows that the large majority developed rehabilitation plans that included a combination of both contribution increases and benefit reductions to be implemented in the coming years. 
Further, plans proposed to take these measures regardless of whether the bargaining parties adopt the preferred schedule or the default schedule. As figure 2 illustrates, of the preferred schedules of 107 critical plans surveyed, 81 included both contribution increases and benefit cuts, while 14 proposed contribution increases only, and 7 included benefit reductions only. Most default schedules also include both increased contributions and reduced benefits, but compared to the preferred schedules, a much larger percentage relied on benefit reductions only. The reason for this difference is not clear, but Segal Company officials noted that because prompt adoption of an acceptable schedule is desirable, some plans may take special steps to make the default schedule especially unappealing. Most plans—95 out of 107—developed preferred schedules that called for contribution increases and, while the range of these increases varied widely among plans, some were quite high. As figure 3 shows, most plans proposed increases of 10 percent or more in the first year of the collective bargaining agreement, and a little over a quarter of plans proposed increases of 20 percent or more. The median first-year contribution increase was 12.5 percent. The range of first-year increases was quite broad, however, from less than 1 percent to 225 percent. These data tell only a partial story, however, because rehabilitation plans may mandate a series of contribution increases in subsequent years. Of the eight critical status plans we contacted, the rehabilitation plans of seven increased contribution rates, and six of these specified a series of contribution increases over subsequent years. For example, one plan proposed contribution increases of 10 percent compounded annually over 10 years, so that at the end of this period, a contribution rate of $2.00 per hour, for example, would have been increased to $5.25 per hour, or by 162 percent. 
Thirty-two plans developed rehabilitation plans that reduced the rate of future benefit accruals. As figure 4 illustrates, 15 of these plans reduced future benefit accruals by 40 percent or more, and another 12 plans reduced future benefit accruals by 20 to 40 percent. The median reduction for all 32 plans was 38 percent. As with contribution increases, the survey data on reductions to benefit accrual rates paint only a partial picture. Reductions in the benefit accrual rate are more common among troubled multiemployer plans than these data show because such reductions were often made prior to the rehabilitation plan. For example, findings from the Segal Company’s survey show that, of the plans that expected to exit critical status within the specified timeframes, about one-third had cut future accrual rates before preparation of the rehabilitation plan, either directly or by a plan amendment that excluded recent contribution increases from the benefit formula. Also, a large majority of plans—88 out of 107—reduced one or more types of the adjustable benefits as outlined by the PPA. Typically, these reductions applied to both active participants and vested but inactive participants, but some plans applied them to only one or the other. Seven of the eight critical status plans we contacted increased contribution rates, and officials of several of these plans indicated that contribution increases could be absorbed without undue stress to the plan. For example, one plan representing maintenance workers proposed to increase the weekly employer contribution rate for each worker from $82.75 per employee in 2011 to $130.75 per employee in 2023, a 58 percent increase over 12 years. While this places significant demands on employers, they are nonetheless in agreement, and the reaction of both employers and participants to the rehabilitation plan has been constructive. 
Similarly, officials of another plan covering sheet metal workers said that the annual contribution increases ranging from 30 percent in 2009-2010 to 5.8 percent in 2015-2016 can be absorbed by plan employers without great difficulty. In contrast, officials of some plans and contributing employers we contacted said that contribution increases would have very severe negative effects on some employers and possibly the plan itself. For example, officials of one plan told us that a proposed series of annual increases of 10 percent (compounded) represents a significant increase in labor costs. Plan officials said contributing employers are competing against firms outside of the plan that do not have comparable pension or health insurance costs, and contribution increases put them at a competitive disadvantage. Similarly, an official of a long-distance trucking firm said that the high contribution rates of underfunded multiemployer plans have greatly affected this firm’s cost structure and damaged its competitive position in the industry. In other cases, plans may have been unable to increase contributions as much as necessary. For example, our review of one plan’s rehabilitation plan revealed that the 15 percent contribution increase resulted from a difficult balance between, among other factors, adequately funding the plan and avoiding excessive strain on employers. According to the plan administrator, plan trustees determined that many contributing employers were in financial distress and that a significant increase in contributions would likely lead to business failures or numerous withdrawals. After the rehabilitation plan was adopted, five employers withdrew from the plan. Contribution increases could have a significant impact on participating workers as well as employers because in some cases at least a portion of the increases will be funded through reductions in pay or other benefits. 
For example, officials of one large national plan with hundreds of contributing employers in a variety of industries told us that employers will pass a substantial part of the higher contributions to employees in the form of lower wages. They noted that workers’ wages have been stagnant for 10 years, so the need to return to full funding so quickly in accordance with PPA requirements is hurting workers in the short term. More broadly, a recent report developed by a construction industry consortium notes that higher contributions make less money available for wage increases and other benefits. The report further notes that in some cases the additional contribution comes directly from the existing wage package, so a worker’s take-home pay may remain stagnant or even be reduced. In other cases, the contribution increases will not have an immediate impact on participants’ pay, but will affect other portions of their benefit package. For example, one plan opted to increase pension contributions by diverting 2 percent of employers’ contributions from another benefit account. An official of another plan explained that the plan funded increased pension contributions by, for example, reducing contributions to a health benefit plan. Instead of directly reducing current wages, these actions will likely lead to higher health care costs or reduced benefits for employees. Among plans we contacted that had reduced future benefit accruals in recent years, the cumulative impact varied. For example, officials of one plan covering sheet metal workers explained that since 2003 the plan had reduced future benefit accruals by 75 percent for each dollar contributed to the plan. Another plan covering mine industry workers completely eliminated future benefit accruals for new, inexperienced miners hired on or after January 1, 2012, even though a contribution of $5.50 per hour of work will be made on their behalf. 
Another plan made no changes to benefit accrual rates but made a series of changes to eligibility and thresholds for retirement credits, with the result that some employees will have to work longer to accrue the same benefit they would have before adoption of the rehabilitation plan. The reduction or elimination of adjustable benefits, such as those outlined in table 3, was also significant and controversial in some cases. Officials of several of the plans that we contacted told us that the reduction or elimination of early retirement benefits for participants working in physically demanding occupations would be particularly difficult for some workers. As one official explained, working longer can be a grim scenario for older workers who have a hard time bearing the physical demands of labor, such as in a paper mill, for example. At the same time, some plans also eliminated or imposed limitations on disability retirement, so that, as officials of one plan noted, even workers who have developed physical limitations will have to either continue to work or retire on substantially reduced benefits. Representatives of one plan said that there was considerable resistance from workers to the cuts in early retirement benefits. The officials explained, however, that these benefits had been established in the early 1990s when the plan was very well funded and that these promises had to be withdrawn in light of the plan’s current poor financial picture. Benefit reductions can affect employers as well as plan participants. For example, representatives of one construction industry plan told us that the reduced benefits outlined in the rehabilitation plan had reduced their ability to recruit and train new apprentices. These representatives explained that the prospect of earning only $50 of monthly retirement benefit per year of work—which after a 30-year career would result in a payment of only $1,500 per month in retirement—is not very appealing to prospective employees. 
While this does present a barrier to recruitment, a plan representative told us it is mitigated by an attractive hourly wage of $31.40, and the fact that many of the younger workers today are thankful for a paycheck in the current economic environment. Some rehabilitation plans also included provisions designed to protect the plan from employer withdrawals. For example, as table 4 outlines, two of the eight critical status plans we contacted impose much more severe benefit reductions on employees of firms that subsequently choose to withdraw from the plan. According to one of the rehabilitation plans, maintaining the contribution base of the pension plan is essential to the success of the rehabilitation plan and hence to plan participants and their families. Officials of this pension plan said that the pension plan cannot survive if it continues to lose contributing employers, and penalizing their employees is one way of discouraging withdrawals. The Segal survey of critical status plans indicates that while most plans aim to eventually emerge from critical status, a significant number do not expect to and instead project eventual insolvency. As figure 5 illustrates, of the 107 plans surveyed, about 67 expect to emerge from critical status within the statutory time frames of 10 or 13 years, and 12 others in an extended rehabilitation period. However, 28 of the surveyed plans had determined, as the authors of the survey noted, that no realistic combination of contribution increases and benefit reductions would enable them to emerge from critical status, and that their best approach is to forestall insolvency for as long as possible. Among these plans, the average number of years to expected insolvency was 12, with some expecting insolvency in less than 5 years and others not for more than 30 years. The majority of these plans expected insolvency in 15 or fewer years. Among the plans we contacted, four expected to eventually become insolvent. 
In general, officials of these plans told us that a combination of massive investment losses and deterioration in contribution bases were primary causes of their financial difficulties. For example, officials of one plan cited the closure of paper mills from which the plan previously derived a substantial share of contributions as a cause of the plan’s financial distress. Officials of these plans explained that their analyses concluded that no feasible combination of contribution increases or benefit reductions could lead them back to a healthy level of funding. Several officials indicated that an effort to do so would likely accelerate the demise of the plan. For example, our review of plan documents revealed that the actuary of one fund determined that mathematically the fund would be able to emerge from critical status if contribution rates were increased by 24 percent annually for each of the next 10 years, ultimately increasing to a rate that would be about 859 percent of the then-current contribution rate. The trustees of this plan determined that such a proposal would be rejected by representatives of employers and workers, and would likely lead to negotiated withdrawals by plan employers. This, in turn, could result in insolvency of the plan, possibly as early as 2019. Instead, this plan opted for measures that officials believed are most likely to result in continued participation in the fund, yet which nonetheless are projected to forestall insolvency until about 2029. Similarly, according to officials of another plan, plan trustees concluded that the significant contribution increases necessary to avoid insolvency were more than employers in that geographic area could bear. In addition, the plan considered the impact of funding the necessary contribution increases through reductions to base pay. The plan determined that this also would not be feasible because of the rising cost of living facing these employees and their families. 
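The compounding arithmetic behind projections like the one above can be checked with a short calculation. The sketch below is for illustration only; it reproduces the report's example of 24 percent annual contribution increases sustained for 10 years, with the rate and horizon taken from the plan actuary's analysis described above.

```python
# Illustration of how annual contribution increases compound over time.
# Inputs mirror the example above: a 24 percent increase in each of 10 years.
annual_increase = 0.24
years = 10

# Each year's rate is the prior year's rate times (1 + annual_increase),
# so after 10 years the rate is (1.24 ** 10) times the starting rate.
multiplier = (1 + annual_increase) ** years

# Expressed as a percentage of the starting ("then-current") rate.
print(round(multiplier * 100))  # about 859 percent of the current rate
```

The same one-line calculation applies to the earlier examples of multi-year compounded increases, which is why seemingly modest annual percentages can produce very large cumulative demands on employers.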
Consequently, the plan trustees adopted a rehabilitation plan that forestalls insolvency until about 2025. Officials of plans that we contacted expressed a number of concerns about the future, including concerns about financial market returns, the overall economy, and the stability of contributing employers. For example, officials of one plan that expected to emerge from critical status within the next 10 years said that this could be impeded if investment returns were below expectations, and especially if another collapse in the financial markets occurs. Officials of the seven other critical status plans we contacted echoed this concern, and several mentioned that overall economic conditions affect hours worked and hence overall contributions. For example, officials of a plan covering construction industry workers expressed concerns that because of the economic downturn, the reduction in demand for infrastructure and construction maintenance work has greatly reduced the number of active workers in the plan. Finally, officials of several plans expressed concerns about attracting and retaining contributing employers. An official of a safe status or “green-zone” plan, for example, said that it is essential that the plan continue to attract new employers and that the ability to do so is a key basis for the plan’s overall financial health. An official of a critical status plan that is attempting to forestall insolvency told us that the plan is very concerned about the financial well-being of its remaining contributing employers and that plan insolvency could be hastened if one of these employers were to fail or otherwise cease making contributions. As PBGC officials and a construction industry organization noted, because the contribution base of multiemployer plans can overlap, financial stress in one plan has the potential to spill over to other plans. 
If, for example, the burden of increased contributions in one plan causes economic distress for a large employer, it may impair the employer’s ability to remain competitive as well as to make sufficient contributions to other plans. As shown in figure 6, this contagion effect could negatively affect the funded status of other plans. If the events of coming years are more favorable than the assumptions on which rehabilitation plans are based, some plans may emerge from critical status earlier than planned, and some may be able to avoid insolvency. However, the opposite is true as well—if future events are less favorable than assumed, contributing employers and plan participants may have to make additional sacrifices or additional plans could face insolvency. Our discussions with eight critical status and two endangered status plans show that while some plans believed they had flexibility to make further adjustments, others did not. For example, officials of one plan trying to avoid insolvency said that even the contribution increases included in the funding improvement plan will be very difficult to bear for employers and workers, and further concessions are not realistic. An official of a large national plan said that the ability of employers and participants to absorb more sacrifices varied considerably among the plan’s 900 participating groups, but that in general, additional concessions would be very difficult to accept. The official said that further concessions would almost certainly erode the plan’s contribution base, which would mean a slow progression towards insolvency. PBGC’s financial assistance to multiemployer plans has increased significantly in recent years, and projected plan insolvencies may exhaust PBGC’s multiemployer insurance fund. 
In fact, PBGC expects that, under current law, based on plans currently booked as liabilities (current and future probable plan insolvencies), the multiemployer insurance program is likely to become insolvent within the next 10 to 15 years, although the exact timing is uncertain and depends on key factors, such as investment returns and the timing of individual plan insolvencies. Additionally, PBGC estimates that if the projected insolvencies of either of two large multiemployer plans were to occur, the insurance fund would be completely exhausted within 2 to 3 years. While retirees of insolvent plans generally receive reduced monthly pension payments under the PBGC pension guarantee, this amount would be further reduced to an extremely small fraction of what PBGC guarantees, or nothing, if the multiemployer insurance fund were to be exhausted. As more multiemployer plans have become insolvent, the total amount of financial assistance PBGC has provided has increased markedly in recent years. Overall, for fiscal year 2012, PBGC provided $95 million in total financial assistance to help 49 insolvent plans cover pension benefits for about 51,000 plan participants. Generally, since 2001, the number of multiemployer plans needing financial assistance has steadily increased, as has the total amount of assistance PBGC has provided each year, slowing the increase in PBGC’s multiemployer insurance program funds. Moreover, as figure 7 indicates, the number of plans needing PBGC’s help has increased significantly in recent years, from 33 plans in fiscal year 2006 to 49 plans in fiscal year 2012. Likewise, the amount of annual PBGC assistance to plans has increased from about $70.1 million in fiscal year 2006 to about $95 million in fiscal year 2012 (a decrease in assistance, due to fewer plan closeouts, compared with about $115 million in fiscal year 2011). 
From fiscal years 2005 to 2006 alone, annual PBGC assistance increased from about $13.8 million to more than $70 million. Loans to insolvent plans comprise the majority of financial assistance that PBGC has provided to multiemployer plans. As figure 8 illustrates, based on available data from fiscal year 2011, loans to insolvent plans totaled $85.5 million and accounted for nearly 75 percent of total PBGC financial assistance. However, the loans are not likely to be repaid because the plans are insolvent. To date, only one plan has ever repaid a PBGC loan. In addition to providing loans to insolvent plans, PBGC provided $13.7 million in fiscal year 2011 to help support two plan partitions, which enabled those plans to carve out the benefit liabilities attributable to “orphaned” employees whose employers filed for bankruptcy, while keeping the remainder of the plans in operation. Once a plan is partitioned, PBGC assumes the liability for paying benefits to the orphaned participants. Additionally, PBGC provided $15.1 million in fiscal year 2011 to help plan sponsors close out five plans, which occurs when plans either merge with other multiemployer plans or purchase annuities from private-sector insurers for their beneficiaries. Plans considering a merger must provide notice to PBGC and may request a compliance determination; PBGC officials said they carefully consider each merger to ensure that the merger would not result in a weaker combined plan than the separately constituted plans. PBGC monitors the financial condition of multiemployer plans to identify plans that are at risk of becoming insolvent and that may require its financial assistance from the multiemployer insurance program. Based on this monitoring, PBGC maintains a contingency list of plans that are likely to become insolvent and make a claim to PBGC’s multiemployer insurance program. PBGC classifies plans on its contingency list according to the plans’ risk of insolvency. 
PBGC also assesses the effect that insolvencies among the plans on the contingency list would have on the multiemployer insurance fund. Table 5 outlines the various classifications and definitions based on risk. Both the number of multiemployer plans placed on PBGC’s contingency list and the amount of PBGC’s potential financial assistance obligations to those plans have increased steadily over time, with the greatest increases recorded in recent years. According to PBGC data, the number of plans where insolvency is classified as “probable”—plans that are already insolvent or are projected to become insolvent within 10 years—increased from 90 plans in fiscal year 2008 to 148 plans in fiscal year 2012. Similarly, the number of plans where insolvency is classified as “reasonably possible”—plans that are projected to become insolvent 10 to 20 years in the future—increased from 1 in fiscal year 2008 to 13 in fiscal year 2012. Although the number of multiemployer plans on PBGC’s contingency list has risen sharply, the present value of PBGC’s potential liability to those plans has increased by an even greater factor. For example, as illustrated in figure 9, the present value of PBGC’s liability associated with “probable” plans increased from $1.8 billion in fiscal year 2008 to $7.0 billion in fiscal year 2012. By contrast, for fiscal year 2012, PBGC’s multiemployer insurance fund had only $1.8 billion in total assets, resulting in a net liability of $5.2 billion, as reported in PBGC’s 2012 annual report. Although PBGC’s cash flow is currently positive—because premiums and investment returns on multiemployer insurance fund assets exceed benefit payments and other assistance—PBGC expects plan insolvencies to more than double by 2017, placing greater demands on the multiemployer insurance fund and further weakening PBGC’s overall financial position. 
PBGC expects that the pension liabilities associated with current and future plan insolvencies will exhaust the multiemployer insurance fund. Under one projection using conservative (i.e., somewhat pessimistic) assumptions for budgeting purposes, PBGC officials reported that the agency’s projected financial assistance payments for plan insolvencies that have already occurred or are considered probable in the next 10 years would exhaust the multiemployer insurance fund in or about 2023. PBGC officials said that the precise timing of program insolvency is difficult to predict due to uncertainty about key assumptions, such as investment returns and the timing of individual plan insolvencies. Based on a range of estimates provided by multiple projections, PBGC officials said the multiemployer insurance program is likely to become insolvent within the next 10 to 15 years. Furthermore, exhaustion of the insurance fund may occur sooner because the financial health of two large multiemployer plans has deteriorated. According to PBGC officials, the two large plans for which insolvency is “reasonably possible” have projected insolvencies 10 to 20 years in the future. PBGC estimates that, for fiscal year 2012, the liability from these two plans accounted for $26 billion of the $27 billion in liability of plans in the “reasonably possible” category. Taken in combination, the number of retirees and beneficiaries of these two plans would represent about a six-fold increase in the number of people receiving guarantee payments in 2012. PBGC officials said that the insolvency of either of these two large plans would exhaust the insurance fund in 2 to 3 years. Generally, retirees who are participants in insolvent plans receive reduced pension benefits under PBGC’s statutory pension guarantee formula. In most cases, PBGC’s pension guarantee (see fig. 10) does not offer full coverage of the monthly pension benefits that a retiree of an insolvent plan has actually earned. 
When a multiemployer plan becomes insolvent and relies on PBGC loans to fund benefit payments to plan retirees, retirees will most likely see a reduction in their monthly pension benefits. PBGC uses a formula that calculates the maximum PBGC benefit guarantee based on a plan participant’s pension benefit accrual rate and years of credit service earned. For example, if a retiree has earned 30 years of credit service, the maximum coverage under the PBGC guarantee is about $1,073 per month, which yields an annual pension benefit of $12,870. Generally, retirees receiving the highest pensions experience the steepest cuts when their plans become insolvent and their benefits are limited by the pension guarantees. According to PBGC, in 2009, the average monthly pension benefit received by retirees in all multiemployer plans was $821. However, as shown by PBGC in a hypothetical illustration of benefit distributions (see fig. 11), the line that spans the bar chart indicates that the range of pension benefits varies widely across retirees, and, with $692 as the median pension, about half of the plan’s retirees will experience 15 percent or greater reductions in their pensions under the PBGC guarantee. Additionally, under this illustration, one out of five retirees will experience 50 percent or greater reductions in their pensions under the PBGC guarantee. Ultimately, regardless of how long a retiree has worked and the amount of monthly benefits earned, any reduction in pension benefits—no matter the amount—may have significant effects on retirees’ living standards. According to PBGC, in the event that the multiemployer insurance fund is exhausted, affected participants then relying on the PBGC pension guarantee would receive an extremely small fraction of their already-reduced guarantees or, potentially, nothing. 
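The guarantee formula referenced above can be sketched in a few lines. This is a simplified rendering, assuming the statutory multiemployer formula of 100 percent of the first $11 of a participant's monthly benefit accrual rate plus 75 percent of the next $33, multiplied by years of credit service; it omits special cases and transition rules.

```python
def pbgc_multiemployer_guarantee(accrual_rate, years_of_service):
    """Monthly PBGC-guaranteed benefit for a multiemployer plan participant.

    Simplified sketch of the statutory formula: 100 percent of the first
    $11 of the monthly benefit accrual rate, plus 75 percent of the next
    $33, times years of credit service. Accrual above $44 adds nothing
    to the guarantee.
    """
    covered_rate = min(accrual_rate, 11.0) + 0.75 * max(0.0, min(accrual_rate - 11.0, 33.0))
    return covered_rate * years_of_service

# A participant with 30 years of credit service and an accrual rate of
# $44 or more receives the maximum guarantee cited in the text.
monthly = pbgc_multiemployer_guarantee(44.0, 30)
print(monthly)       # 1072.5 per month (about $1,073)
print(monthly * 12)  # 12870.0 per year
```

Because the covered accrual rate is capped at $44 per month of service, retirees whose plans promised richer accruals see the steepest percentage cuts when the guarantee applies, which is the pattern the illustration above describes.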
According to PBGC officials, once the insurance fund’s cash balance is depleted, the agency would have to rely solely on the annual insurance premium receipts, which totaled $92 million for fiscal year 2012. The precise effect that the insolvency of the multiemployer insurance fund would have on retirees receiving the PBGC guaranteed benefit depends on a number of factors—primarily the number of guaranteed benefit recipients and PBGC’s annual premium income at that time. The impact would, however, likely be severe. For example, if the insurance fund were to be drained by the insolvency of one very large and troubled plan, under one scenario, we estimate that the benefits paid by PBGC would be reduced to less than 10 percent of the PBGC guarantee level. In this scenario, a retiree who once received a monthly pension of $2,000, and whose pension was reduced to $1,251 under the PBGC guarantee, would see the monthly pension income further reduced to less than $125, or less than $1,500 per year. Additional plan insolvencies would further depress already drastically reduced income levels. Our contacts with plan officials and other stakeholders also suggested that the exhaustion of the PBGC multiemployer insurance fund would have effects well beyond direct financial impacts. For example, officials of one plan said that the exhaustion of the insurance fund could bring about the loss of public confidence in the multiemployer plan system’s ability to provide retirement security for plan participants and their beneficiaries. Experts and stakeholders we interviewed cited two key policy options to avoid the insolvencies of severely underfunded plans and of the PBGC multiemployer insurance fund, and a number of other options for longer term reform of the multiemployer system (see fig. 12). 
To address the impending insolvency crisis, they proposed allowing severely troubled plans to reduce accrued benefits, including benefits of retirees, and providing PBGC with additional resources to prevent insolvencies that might otherwise threaten the fund. Longer term options would provide plans with flexibilities and resources to help attain financial stability in the future. These include encouraging the adoption of flexible benefit designs and reforming withdrawal liability policies. Various experts and plan representatives stressed the necessity of modifying ERISA’s anti-cutback rule to allow severely distressed plans to reduce the accrued benefits of active participants as well as retirees. They noted that this flexibility is essential because 1) the most severely distressed plans will be unable to avoid insolvency using traditional methods—increasing employer contributions and/or reducing future benefit accruals or adjustable benefits—and 2) benefit reductions will occur in any case and will be more severe in the event of plan insolvency, especially in the event of the insolvency of PBGC’s multiemployer insurance fund. As described in the first section of this report, the most severely distressed plans we contacted have already adjusted contributions and benefits and several stated that further adjustments would accelerate plan insolvency. In particular, the demographics of many multiemployer plans limit their ability to reduce liabilities through contribution increases or reductions in future benefit accruals because they are typically based on hours worked. For example, the majority of participants in one of the largest multiemployer plans have already retired or are inactive and no longer contributing to the plan—as of 2012, the plan had about 4.86 retired or otherwise inactive participants for every active worker. 
In light of the sacrifices already made by active participants—some of whom are absorbing the cost of significant contributions to support benefit payments at a level they will likely never see for themselves—some stakeholders noted that adjustments of retiree benefits would be equitable. Moreover, experts, as well as employer and plan representatives, noted that allowing plans to reduce accrued benefits now could avoid more severe reductions in the future. For example, representatives from an association of actuaries and from a large plan noted that for some plans, the alternative to reductions in accrued benefits is eventual plan insolvency, which would reduce benefits to the much lower level guaranteed by PBGC and, possibly, to little or no benefit at all if PBGC’s multiemployer insurance fund became insolvent. Finally, some experts and a plan representative stressed the urgency of obtaining such flexibility because the longer the delay, the greater the eventual required benefit reductions. Nonetheless, allowing plans the flexibility to reduce accrued benefits for current workers and retirees would significantly compromise one of the founding principles of ERISA and could impose significant hardship on some retirees. While some plan representatives and other stakeholders told us that a very modest benefit reduction would be sufficient to avoid insolvency, others noted that reductions would be very painful for retirees who worked for many years and planned their retirements around a promised benefit. Representatives of one of these plans cited appeal letters submitted to the plan by participants and their spouses, noting that older workers and retirees can face difficult financial situations, and that cuts to accrued benefits would both deepen such hardships and increase their number. 
Some also noted that while younger retirees may be able to obtain employment to supplement income, older retirees, especially in physically demanding industries like mining and construction, would likely not have that option. Finally, some stakeholders indicated that the flexibility to reduce accrued benefits would harm the multiemployer system by undermining the credibility of multiemployer plans and diminishing their ability to attract and retain employers and participants. Plan representatives and experts we contacted proposed a number of considerations and limitations that could mitigate some concerns with allowing plans to reduce accrued benefits. As described in table 6, these measures include eligibility criteria and options for oversight, along with other key features. For example, given the sacrifice it would impose on participants, several experts and plan representatives said that allowing reductions in accrued benefits should only be considered as a last resort for plans headed for insolvency. Even with these protections and considerations, the flexibility to reduce accrued benefits would not come without considerable sacrifice, and may not be sufficient to help some plans avoid insolvency. Several plan representatives and experts said the suggested benchmark for reducing accrued benefits—PBGC’s guarantee level of $12,870 on an annual basis for 30 years of service—is relatively low and could result in steep benefit cuts. For example, given the magnitude of financial challenges facing some severely underfunded plans, accrued benefits may be reduced by one-third or more of their original value. Moreover, in the case of at least one plan, PBGC officials said that reductions to the maximum guaranteed level may still not represent sufficient savings to avert insolvency. 
For example, representatives of one large plan told us that while reducing accrued benefits might be an option for some plans, it was not an option for their plan because the benefits were already quite modest—average retirement benefits in 2010 were about $600 per month. Further, plan representatives said it would be unconscionable to reduce benefits for a retiree with a work-related illness, such as a respiratory ailment, who may be barely surviving on current benefit levels. According to several experts, in an effort to save plans and conserve PBGC assets in the long term, PBGC could provide financial assistance to qualifying plans headed for insolvency through a partition. If a plan qualifies and its application is approved by PBGC, the partition population includes only orphaned participants—those whose employer left the plan due to bankruptcy—and their benefits are reduced to the guaranteed level. According to industry experts, partitions would allow plans with a substantial share of orphaned liabilities to avoid further benefit reductions for active participants and other beneficiaries. By removing the burden of the legacy costs associated with orphaned participants, the plan would be in a better position to adequately fund benefit obligations with ongoing contributions. In addition, one expert said that partitions could reduce the total liability for PBGC because extending the solvency of the plan means that fewer participants would rely on benefit payments from the PBGC than if the whole plan were to become insolvent. While partitions may prevent qualifying plans from becoming insolvent, neither PBGC’s current partitioning authority nor its financial resources are sufficient to address the impending insolvency of large, severely underfunded plans. In its entire history, PBGC has performed partitions for only two plans. 
According to PBGC officials, plan representatives, and experts, there are a number of reasons why partitions have not been more widely used. First, the magnitude of potential reductions for orphaned participants has dissuaded some plans from applying for help: payments for the partitioned population will be reduced to the PBGC guarantee level, which could be a sizable reduction in some cases. Second, PBGC does not have sufficient resources to cover the orphaned liabilities of large, severely underfunded plans. Third, plans may not meet the four statutory criteria to be eligible for a partition. For example, a plan must demonstrate that it is headed for insolvency due to a reduction in contributions resulting from employer bankruptcies, which numerous plan representatives and experts said may exclude plans in need of assistance. Some plan representatives said that many of their contributing employers are small businesses that do not have the wherewithal to go through formal bankruptcy proceedings, but instead close without paying their full share of liabilities. In other cases, contributing employers may have left when the plan was adequately funded, but the plan’s funded status later deteriorated as a result of the 2008 market crash. Consequently, the plan is not able to collect any ongoing contributions from those employers to offset the poor investment returns, but it is still responsible for paying the full amount of vested benefits for those employers’ workers. While the reasons employers leave a plan may vary, their departure can result in significant legacy costs that experts said impair the ability of the plan to remain solvent or recover from funding shortfalls. For example, according to officials from one of the largest plans, as of 2009, about 40 percent of benefit payments went to orphaned participants and current employer contributions amounted to only about 25 percent of total annual benefit payments. 
To address this issue, several experts said that partitions should be made more widely available so that, for example, orphaned liabilities could include any participants whose contributing employer left the plan without paying their full share of unfunded vested benefits. However, to cover the cost of these benefits, several experts noted that PBGC would need additional funding—the agency does not have nearly sufficient resources to pay even the reduced benefit levels for the potential partition populations of some large plans. As an example, representatives of one of the largest mining industry plans for which insolvency is reasonably possible indicated that the plan may not be eligible for assistance through a partition because it was sufficiently funded until the 2008 financial crisis. In the absence of a partition, some members of Congress have proposed financial assistance using an existing separate source of funds established from reclamation fees paid by coal companies for abandoned coal mines. According to plan representatives, this fund currently provides money to pay for health benefits of three related plans, which have not used the full amount of those funds. The proposal would transfer any remaining funds that are not needed for health benefits to improve the solvency of the pension plan. The representatives also noted that this financial assistance is essential and the only way the plan can avoid insolvency. Pension benefits for this plan are relatively low—retirees received an average pension of about $600 a month in 2010—which limits the plan’s ability to improve its funded status even if reductions to accrued benefits were allowed. Numerous industry experts and plan representatives emphasized the importance of providing timely assistance to severely underfunded plans, but some experts also cited drawbacks of providing additional financial assistance beyond PBGC’s multiemployer insurance fund. 
Regarding advantages, several experts and plan representatives said providing additional financial assistance sooner rather than later could prevent entire plans from going insolvent and reduce the number of participants relying on guaranteed payments from PBGC in the long term. Beyond the scope of an individual plan, representatives from a construction industry group said additional financial assistance could also prevent more widespread negative effects. Because employers across various industries contribute to some of the large severely underfunded multiemployer plans, as well as other plans, the continued decline of such a plan could trigger a contagion effect. Contributing employers may face large liabilities (e.g., increased contributions, increased withdrawal liability) that could prevent them from fulfilling obligations to other currently well-funded plans and some employers may be forced out of business. Moreover, a plan representative and an expert said additional financial assistance is necessary to prevent the insolvency of the multiemployer insurance program, which, as described in the previous section, would leave thousands of participants with a small fraction of their vested pension benefits. However, other experts cited drawbacks for providing additional financial assistance. In particular, some experts said that a partition may not be a permanent fix for the plan. For example, if the on-going portion of the plan continues to lose employers, it may still become insolvent and require financial assistance from PBGC. In addition, some experts expressed concern about the size of the burden federal financial assistance could potentially place on taxpayers. Considering the resources that may be needed to provide financial assistance to troubled plans, PBGC and others have identified increased premiums as a potential source of additional revenue for PBGC. 
According to projections in a recent PBGC report, doubling the insurance premium from the current level of $12 per participant to $24 per participant would reduce the likelihood of PBGC insurance fund insolvency in 2022 from about 37 percent to about 22 percent. The analysis also found that a tenfold increase to $120 per participant would virtually eliminate the likelihood of multiemployer insurance fund insolvency by 2022, although the analysis did not look beyond that timeframe. However, some stakeholders we spoke with noted that increased premiums also have limitations and drawbacks. Some stakeholders said further premium increases alone were not a feasible solution because they would be insufficient to solve PBGC’s long-term funding shortfall and would further stress employers in severely underfunded plans who have already borne considerable contribution increases. According to a PBGC analysis, even a tenfold increase in the current premium would not prevent significant growth in the agency’s deficit. Under this analysis, PBGC estimates that the fiscal year 2012 deficit of $5.2 billion would still nearly triple, amounting to about $15 billion in 2022. Moreover, it is unclear what impact such premium increases would have on plans of varying financial health, especially plans seeking to delay eventual insolvency. PBGC officials acknowledged that, although premiums are generally not a significant percentage of plan costs, the most severely underfunded plans may not be able to afford any increases. PBGC officials also said that, given the range of financial circumstances across plans, a premium structure that would ensure affordable and appropriate premiums for all plans could help address this concern. In prior work, we assessed changing the premium structure for PBGC’s single-employer program to allow premiums to vary based on risk. 
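The revenue stakes of these premium options can be roughed out from the figures above. The participant count below is a back-of-the-envelope implication of this report’s numbers (about $92 million collected at the $12 flat rate in fiscal year 2012), not an official PBGC statistic.

```python
# Back-of-the-envelope revenue under the premium rates discussed above.
# The participant count is inferred from this report's figures (about
# $92 million collected at $12 per participant in fiscal year 2012);
# it is an approximation, not an official PBGC statistic.

fy2012_receipts = 92_000_000          # annual premium receipts (this report)
current_rate = 12                     # flat-rate premium per participant

implied_participants = fy2012_receipts / current_rate  # roughly 7.7 million

for rate in (12, 24, 120):            # current, doubled, and tenfold rates
    revenue = rate * implied_participants
    print(f"${rate:>3}/participant -> about ${revenue / 1e6:,.0f} million per year")
```

Even the tenfold rate yields well under $1 billion a year on this crude basis, which is consistent with PBGC’s projection that its deficit would continue to grow substantially despite such an increase.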
However, we have not assessed the implications or implementation of increased premiums or a risk-based premium structure for PBGC’s multiemployer program. Given the distinctive features of the multiemployer plan design and program described earlier in this report, the development of a risk-based premium structure for multiemployer plans would entail unique considerations and require further analysis. ERISA requires that employers wishing to withdraw from a multiemployer plan pay for their share of the plan’s unfunded liabilities. As explained in the following text box, this requirement for withdrawal liability payments is intended to prevent employers from walking away from liabilities they have created, and, thus, help protect plan participants and other employers. However, despite the necessity of such a safeguard, plan representatives and other industry experts said changes are needed to address key challenges related to current provisions regarding withdrawal liability. In the event an employer seeks to leave a multiemployer plan and the plan has a funding shortfall, the employer is liable for its share of unfunded plan benefits, known as withdrawal liability. A plan can choose from several formulas established in the law for determining the amount of unfunded vested benefits allocable to a withdrawing employer and the employer’s share of that liability. Under three of these formulas, the employer’s proportional share is based on the employer’s share of contributions over a specified period. In addition, the plan can apply for approval from PBGC to use variations on these methods. Liabilities that cannot be collected from a withdrawing employer, for example, one in bankruptcy, are to be “rolled over” and eventually funded by the plan’s remaining employers—frequently referred to as orphaned liabilities. As we previously reported, this means that an employer’s pension liabilities can become a function of the financial health of other employer plan sponsors. 
These additional sources of potential liability can be difficult to predict, increasing employers’ level of uncertainty and risk. However, while the total amount of withdrawal liability is based on the unfunded vested benefits for the plan as a whole, a particular employer’s annual payments are strictly based on its own contributions and are generally subject to a 20-year cap. Current federal withdrawal liability policies give rise to three main problems, according to stakeholders and experts. First, plans often collect far less than the full value of liabilities owed to the plan. In the event of an employer bankruptcy, several experts said plan sponsors often collect little or no withdrawal liability payments. For example, several experts explained that in the recent Hostess Brands bankruptcy, the firm—a contributing employer to many plans—is likely to pay very little of its withdrawal liability obligations. One service provider said this bankruptcy doubled the unfunded liabilities attributable to remaining employers in some plans. Separately, the method of calculating withdrawal liability payments may not capture an employer’s full share of unfunded liabilities because an employer’s withdrawal liability obligation is based on its prior contributions rather than on attributed liabilities, and is also subject to a 20-year cap. In particular, some stakeholders said the 20-year cap on withdrawal liability payments limits the amount of money collected by plans. If an employer’s prior contributions are small relative to its total withdrawal liability, the annual payments may not be sufficient to pay off that liability over the 20-year period. Second, existing withdrawal liability rules deter new employers from joining a plan with existing unfunded liabilities. 
Plan representatives said attracting new employers is essential to the long-term health of the plan, but an employer group said the existence of potential withdrawal liability strongly deters prospective employers who may otherwise want to join. Moreover, fear of greater withdrawal liability in the future may encourage current contributing employers to leave the plan. For example, in late 2007, UPS paid about $6 billion to withdraw from one of the largest multiemployer plans. Third, the presence of withdrawal liability can negatively affect an employer’s credit rating and ability to obtain loans for its business. For example, representatives from one large employer said their total withdrawal liability exceeds the net worth of their company and this has made it difficult for them to obtain loans and other financing, which might help revitalize their business. Table 7 describes options to address these problems identified through our contacts with various stakeholders, including plan and employer representatives. A comprehensive remedy to the problems arising from withdrawal liability is particularly elusive because a solution to one issue can exacerbate another. For example, eliminating the current 20-year cap could allow plans to collect withdrawal liability payments until the full amount has been paid. However, increasing the amount of withdrawal liability that plans can collect may also discourage new employers from participating in a plan because it increases the potential withdrawal liability they could be required to pay. On the other hand, options that could reduce the deterrent effect on new employers—such as the proposal to omit contributions required by funding improvement or rehabilitation plans from withdrawal liability calculations—could reduce a plan’s ability to collect sufficient withdrawal liability. 
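The interaction between the contribution-based allocation and the 20-year cap described above can be sketched with hypothetical numbers. The dollar amounts and the simplified payment rule below are illustrative assumptions; the actual statutory rules under ERISA involve additional detail, such as multi-year averaging of contribution histories.

```python
# Stylized sketch of the withdrawal liability mechanics described above:
# an employer's share of a plan's unfunded vested benefits is proportional
# to its share of contributions, but its annual payments are pegged to its
# own contribution level and generally capped at 20 years. All figures are
# hypothetical, and the statutory rules involve additional detail (e.g.,
# multi-year averaging of contribution histories).

def withdrawal_liability(unfunded_vested: float,
                         employer_contribs: float,
                         total_contribs: float,
                         annual_payment: float,
                         cap_years: int = 20):
    """Return (allocated share, amount collectible under the 20-year cap)."""
    share = unfunded_vested * (employer_contribs / total_contribs)
    collectible = min(share, annual_payment * cap_years)
    return share, collectible

# Hypothetical plan with $500M in unfunded vested benefits; the withdrawing
# employer made 4% of contributions, with payments set near $800,000/year.
share, collectible = withdrawal_liability(
    unfunded_vested=500_000_000,
    employer_contribs=4_000_000,
    total_contribs=100_000_000,
    annual_payment=800_000,
)
print(f"Allocated share:           ${share:,.0f}")              # $20,000,000
print(f"Collectible under cap:     ${collectible:,.0f}")        # $16,000,000
print(f"Rolled to other employers: ${share - collectible:,.0f}")
```

In this stylized case, $4 million of the employer’s allocated share cannot be collected and becomes part of the orphaned liabilities borne by the remaining employers, which is the dynamic the options in table 7 attempt to balance.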
Numerous plan representatives, experts, and the NCCMP Commission recommend the adoption of a more flexible DB model to avoid a repetition of the current challenges facing multiemployer plans. While the specific plan design can vary, in general, this model allows trustees to adjust benefits based on key factors—such as the plan’s funded status, investment returns, or plan demographics—to keep the plan well-funded. Importantly, it reduces the risk that contributing employers would face contribution increases if the plan experiences poor investment returns or other adverse events. Investment risk is thus primarily shared by participants, and the plan is designed to avoid incurring any withdrawal liability. Overall, the trustees of the plan would have greater flexibility than under a traditional DB plan to adjust benefits to keep the plan well-funded. See table 8 for a comparison of two alternative flexible DB plan designs, although other models could also be used. In addition, the NCCMP Commission’s proposal would provide more flexibility for traditional DB plans by allowing these plans to adjust the normal retirement age to harmonize with Social Security’s normal retirement age. Notably, the Cheiron proposal would take a more conservative approach to investment and funding policy. It calls for a more conservative asset allocation and a relatively lower assumed rate of return, which, in addition to the sharing of some investment risk with participants through the flexible benefit design, would reduce the overall amount of investment risk. The Cheiron model would also use a contingency reserve fund that could provide a cushion against unfavorable investment or demographic experience. The design of a flexible DB plan offers several key benefits, which some stakeholders said are essential to the long-term survival of the multiemployer system. 
In particular, several stakeholders cited limiting employer liability as a key benefit. Representatives of several employers said it is imperative to limit their liability so that they can remain competitive. By minimizing risks to employers, a flexible DB model may strengthen employers’ commitment to the plan and reduce incentives for them to leave. Similarly, reducing risk may also help attract new employers to these plans, which may improve a plan’s demographics and help it stay well-funded and viable in the long term. Additionally, a group of employer representatives said that a flexible DB plan, such as the one developed by Cheiron, in conjunction with the United Food and Commercial Workers (UFCW) International Union, provides trustees more tools to prudently manage the plan to keep it well-funded and able to pay promised benefits even when faced with adverse events, such as poor investment returns or demographic shifts. Moreover, some stakeholders said that a flexible DB plan reduces risk while also avoiding challenges associated with defined contribution (DC) plans. Specifically, representatives of a construction industry group said a flexible DB plan would still offer pooled and professionally managed investments, along with risk sharing among participants, which can mitigate some of the individual risks faced by participants in DC plans, such as investment risk and longevity risk. Given the potential long-term benefits of a flexible DB model, some experts said regulatory agencies could do more to help plans adopt a design with these features. For example, one expert said that PBGC could hold a conference on best practices in plan design. In addition, this expert said that PBGC could charge such plans lower premiums commensurate with their lower risk to encourage adoption of these plan design features; however, PBGC lacks the legal authority to do so. 
Some plan representatives and experts also noted that a flexible DB model entails tradeoffs. In particular, representatives from an actuarial firm and from an industry group said that while this approach shows promise for addressing prospective challenges, it does not help resolve problems for plans that already have financial shortfalls. Union and employer representatives said that plans may first need to address existing shortfalls before they could adopt a flexible DB model. Thus, new design options are unlikely to help large plans that are already severely underfunded. Further, in the flexible benefit models described in table 8, investment risk is primarily borne by participants. Representatives from an actuarial firm also said that this model may be relatively expensive in terms of the contributions needed to attain a given level of accrued benefits. For example, in order to minimize the risk of underfunding, a flexible DB plan may use a lower assumed rate of return—and, correspondingly, a more conservative investment strategy—than is commonly used by multiemployer plans. Over the long term, this may result in a lower level of accrued benefits. However, representatives of one actuarial firm said that the higher assumed rates of return used by some multiemployer plans could entail a greater risk of the plan becoming underfunded. And, as recent events show, participants already assume substantial risk in the event a plan becomes severely underfunded. As a result, a flexible benefit model that reduces risk might provide a somewhat lower promised benefit, but one that is more secure. Facilitating more plan mergers or allowing plans to form alliances may also help address financial challenges facing multiemployer plans, according to some plan representatives and industry experts. In a merger, two or more plans are combined into a single plan, including both plan assets and administration. 
Several stakeholders said that this consolidation helps plans—especially smaller plans—achieve more favorable economies of scale to reduce costs. For example, in a merger, plans can reduce costs by consolidating administrative services, such as annual audits and legal services. In some cases, PBGC provides financial assistance to facilitate a merger by paying a plan that is insolvent or nearing insolvency a portion of the present value of PBGC’s net liability for that plan, which serves as an incentive for a well-funded plan to take on the assets and liabilities of a less well-funded plan. PBGC officials said that they are careful to provide financial assistance only in the case of mergers expected to be successful and, thus, avoid paying financial assistance twice to the same plan. While PBGC has helped to facilitate some mergers, several plan representatives and a representative from an actuarial firm said more plans could merge if PBGC provided additional financial assistance. Alternatively, other stakeholders said similar cost-saving benefits from consolidation could be achieved by allowing plans to form alliances. In contrast to a merger, alliances allow plans to combine administrative and investment management services, but retain separate liabilities and funding accounts. Consequently, in an alliance, each plan would retain its own liabilities and withdrawal liability obligations would not be shared across plans. Along with cost savings from consolidating administrative services, plan representatives and industry experts said mergers and alliances can offer other important benefits. In particular, a merger or alliance would provide plans a larger asset pool, which can also help reduce investment management fees. According to a representative from an actuarial firm, combined with administrative cost savings, consolidating investment management services can significantly reduce costs for small plans and may save some from insolvency. 
For example, some of the firm’s small plan clients pay between 30 and 40 percent of contributions towards administrative and investment management expenses, while a larger plan would pay closer to 5 percent. However, another expert said cost savings for some plans may be negligible depending on the plan’s circumstances. For example, if a plan is already sufficiently large and efficiently managed, cost savings from merging with another plan may be relatively small. In addition, several stakeholders said that by helping plans avoid insolvency, PBGC may also benefit from plan mergers or alliances because the participants of these plans would continue to receive benefits from the plan rather than relying on benefit payments from PBGC after an insolvency. Consequently, the cost PBGC incurs to facilitate such arrangements may be more than offset by the savings from preventing the plan from becoming insolvent. While mergers can provide cost savings and other benefits, plans face barriers to implementing them. For example, representatives from one of the largest plans said that due to the relatively large size of their plan and the amount of their funding shortfall, a merger is not an option for them. Several stakeholders said a merger between a plan that is relatively well-funded and a financially weaker plan poses concerns for plan trustees who have a fiduciary duty to act in the best interests of their plan’s participants. One employer representative said that a merger poses risks to the healthier plan and may not be in the best interests of those participants. To address potential risks to the healthier plan, some stakeholders said PBGC should be given greater resources to facilitate more mergers. In addition, some employer representatives said plans that undertake mergers could be afforded legal protection under a safe harbor to further alleviate concerns over fiduciary responsibility. 
While alliances may avoid some of these concerns—they would not require plans to harmonize their funding status, as each plan retains its own liabilities—such arrangements are not currently permitted and would therefore require a change in law, according to NCCMP. Despite unfavorable economic conditions, most multiemployer plans are currently in adequate financial condition and may remain so for many years. However, a number of plans, including some very large plans, are facing very severe financial difficulties. Many of these plans reported that no realistic combination of contribution increases or allowable benefit reductions—options available under current law to address their financial condition—would enable them to emerge from critical status. As a result, without Congressional action, the plans face the likelihood of eventual insolvency. While the multiemployer system was designed to limit PBGC’s exposure by having employers serve as principal guarantors, PBGC remains the guarantor of last resort. However, given their current financial challenges, neither the troubled multiemployer plans nor PBGC currently have the flexibility or financial resources to fully mitigate the effects of anticipated insolvencies. Should a critical mass of plan insolvencies drain the PBGC multiemployer insurance fund, PBGC would not be able to pay either current or future retirees more than a very small fraction of the benefit they were promised. Consequently, a substantial, and in some cases catastrophic, loss of income in old age looms as a real possibility for the hundreds of thousands of workers and retirees depending on these plans. Congressional action is needed to avoid this scenario, and stakeholders suggested a number of key policy options. 
For example, various stakeholders suggested that, as a last resort to avert insolvency, Congress could enact legislation permitting plans—subject to certain limitations, protections, and oversight—to reduce accrued benefits of both working participants and retirees. In addition, some stakeholders suggested that Congress could give PBGC the authority and resources to assist the most severely underfunded plans. Stakeholders acknowledged that each of these options poses tradeoffs. Providing PBGC with additional resources, as well as other more direct financial assistance to plans, would create yet another demand on an already strained federal budget. Similarly, reducing accrued benefits for active workers, and especially for those already in retirement, could result in significant reductions in income for a group that may have limited income alternatives and may be too infirm to return to the labor force. Such an option would also significantly compromise one of the key founding principles of ERISA—that accrued benefits cannot be reduced—essentially breaking a promise to workers and retirees who have labored for many years, often in dangerous occupations, and in some of the nation’s most vital industries. The scope and severity of the challenges outlined by stakeholders suggest that a broad, comprehensive response is needed, and Congress faces difficult choices in responding to these challenges. However, as the recent tri-agency federal report on multiemployer plans noted, unless timely action is taken to provide additional tools for multiemployer plan trustees to stabilize the financial conditions of their plans, more costly and intrusive measures may later be necessary. Nevertheless, this situation can also be viewed as an opportunity both to protect the benefits of hundreds of thousands of older Americans and to stabilize a pension system that has worked fairly well for decades. 
Without a comprehensive approach, efforts to improve the long-term financial condition of the multiemployer system may not be effective. Given the serious challenges facing PBGC’s multiemployer insurance fund and critically underfunded multiemployer plans, and to prevent the significant adverse effects of PBGC insolvency on workers and retirees, Congress should consider comprehensive and balanced structural reforms to reinforce and stabilize the multiemployer system. In doing so, Congress should consider the relative burdens, as identified by key stakeholders, that each reform option would impose on the competing interests of employers, plans, workers and retirees, PBGC, and taxpayers. We provided a draft of this report to the Department of Labor, the Department of the Treasury, and the PBGC for review and comment. We received formal written comments from the PBGC, which generally agreed with our findings and analysis. During the review period, PBGC officials raised the potential role that increased multiemployer insurance program premiums could play in strengthening the program, and hence in helping to ensure that participants in insolvent plans received some financial protection in the long term. In addition, the issue of PBGC premiums was raised repeatedly during a March 5, 2013 hearing held by the House Subcommittee on Health, Employment, Labor and Pensions, Committee on Education and the Workforce. In light of the level of interest on this issue, we included a brief discussion of the matter of premiums in the final version of our report. PBGC, Labor, and Treasury also provided technical comments which we incorporated as appropriate. PBGC’s formal comments are reproduced in appendix II. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. 
At that time, we will send copies of this report to relevant congressional committees, PBGC, the Secretary of Labor, the Secretary of the Treasury, and other interested parties. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact Charles Jeszeck at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are found in appendix III. Our objectives were to answer the following research questions: (1) What actions have multiemployer plans in the weakest financial condition taken in recent years to improve their long-term financial position? (2) To what extent have plans relied on PBGC assistance since 2009, and what is known about the prospective financial condition of the multiemployer plan insurance program? (3) What options are available to address PBGC’s impending funding crisis and enhance the program’s future financial stability? We sought to answer the first question in two primary steps. First, we obtained data on the results of a survey of critical status plans performed by The Segal Company, a large actuarial firm that has a client base consisting of about 25 percent of all multiemployer plans, representing about 30 percent of all multiemployer plan participants. As figure 13 below illustrates, the industry distribution of Segal’s client base substantially parallels that of the broader multiemployer universe.
Included as an addendum to Segal’s annual survey of plan funded status, the survey instrument requested information about the nature and size of contribution increases and benefit reductions, whether plans expected to emerge from the critical zone within statutory time frames, and the estimated number of years until emergence from the critical zone or, for plans not expecting to emerge, the number of years to plan insolvency. The information pertaining to each of the 107 critical plans in the survey was completed by Segal’s professional actuaries responsible for those clients. The survey was initiated in December 2010, and responses were received through February 2011. Through a review of the methodology underlying the survey, and discussions with a Segal representative knowledgeable about the survey, we determined that the results were reliable and useful for our research. Second, we supplemented these data with in-depth interviews with representatives of 13 multiemployer plans—8 were in critical status, 2 in endangered or seriously endangered status, and 3 in neither critical nor endangered status. We selected the plans to ensure that we included a range of plan sizes, industries, geographical areas, and funding status. Plans selected ranged in size from about 2,000 participants to more than 531,000 participants and represented a variety of industries including those featuring some of the largest concentrations of multiemployer plans—construction, manufacturing, and transportation. Before speaking with plan officials, we reviewed available data, including rehabilitation or funding improvement plans, and other relevant documents. Our in-depth discussions with plan representatives covered various issues, including plans’ use of and views regarding funding relief, the nature and size of the contribution increases and benefit reductions, and the probable impact of contribution increases and benefit reductions on employers and plan participants.
To answer the second question, we interviewed officials and analyzed data from PBGC, including recent PBGC annual reports and data books. We also developed several data requests for PBGC that were tailored to this objective, and reviewed information provided by PBGC in response. For example, we obtained data on the amount of PBGC’s annual assistance to plans due to plan insolvencies, plan partitions, and assistance granted for other reasons, such as plan mergers or closures. We also obtained and analyzed updated data regarding PBGC’s overall financial position and the size of its long-term deficits. Specifically, we obtained data on the liabilities attributable to plans on PBGC’s list of plans that are insolvent or considered likely to become insolvent in the next 10 years, as well as those thought likely to become insolvent in the next 10 to 20 years. To better understand the consequences of plan insolvency on retirees, we interviewed relevant PBGC officials and requested data regarding the impact of insolvency on retirees of various wage levels and tenures. Finally, we discussed the impact of potential PBGC insolvency in our discussions with multiemployer plan officials. To answer the third question, we distinguished between options that would address the more immediate funding crisis facing plans headed toward insolvency and options that may enhance the long-run stability of the multiemployer system for plans that may not be headed for insolvency, but, nevertheless, face financial challenges. We assessed the tradeoffs of various options for current workers, retirees, and employers, as well as the federal government. To identify and assess available options, we interviewed a wide range of pension experts—including academics, actuaries, attorneys, plan trustees and administrators, employers and trade associations, unions, advocacy organizations, government officials, and other relevant stakeholders.
We also reviewed relevant research and documentation, including a proposal by the National Coordinating Committee for Multiemployer Plans (NCCMP) and research by other industry experts. As appropriate for each of our objectives, we reviewed existing literature and relevant federal laws and regulations. We conducted this performance audit from March 2012 through March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David Lehrer (Assistant Director), Michael Hartnett, Sharon Hermes, and Kun-Fang Lee made key contributions to this report. In addition, support was provided by Frank Todisco, GAO Chief Actuary, James Bennett, David Chrisinger, Julianne Cutts, Jessica Gray, Theresa Lo, Ashley McCall, Sheila McCoy, and Walter Vance.
|
Multiemployer pension plans--created by collective bargaining agreements involving more than one employer--cover more than 10 million workers and retirees, and are insured by the PBGC. In recent years, as a result of investment market declines, employers withdrawing from plans, and demographic challenges, many multiemployer plans have had large funding shortfalls and face an uncertain future. GAO examined (1) actions that multiemployer plans in the weakest financial condition have taken to improve their funding levels; (2) the extent to which plans have relied on PBGC assistance since 2009, and the financial condition of PBGC's multiemployer plan insurance program; and (3) options available to address PBGC's impending funding crisis and enhance the multiemployer insurance program's future financial stability. GAO analyzed government and industry data and interviewed government officials and pension experts--including academics, actuaries, and attorneys--as well as multiemployer plans' trustees and administrators, employers and trade associations, unions, advocacy organizations, and other relevant stakeholders. The most severely distressed multiemployer plans have taken significant steps to address their funding problems and, while most plans expected improved financial health, some did not. A survey conducted by a large actuarial and consulting firm serving multiemployer plans suggests that the large majority of the most severely underfunded plans--those designated as being in critical status--either have increased or will increase employer contributions, or have reduced or will reduce participant benefits. In some cases, these measures will have significant effects on employers and participants. For example, several plan representatives stated that contribution increases had damaged some firms' competitive position in the industry, and, in some cases, threatened the viability of such firms.
Similarly, reductions in certain benefits--such as early retirement subsidies--may create hardships for some older workers, such as those with physically demanding jobs. Most of the 107 surveyed plans expected to emerge from critical status, but about 25 percent did not, instead seeking to delay eventual insolvency. The Pension Benefit Guaranty Corporation's (PBGC) financial assistance to multiemployer plans continues to increase, and plan insolvencies threaten PBGC's multiemployer insurance fund's ability to pay pension guarantees for retirees. Since 2009, PBGC's financial assistance to multiemployer plans has increased significantly, primarily due to a growing number of plan insolvencies. PBGC estimated that the insurance fund would be exhausted in about 2 to 3 years if the projected insolvency of either of two large plans occurs in the next 10 to 20 years. More broadly, by 2017, PBGC expects the number of insolvencies to more than double, further stressing the insurance fund. PBGC officials said that financial assistance to plans that are insolvent or are likely to become insolvent in the next 10 years would likely exhaust the insurance fund within the next 10 to 15 years. If the insurance fund is exhausted, many retirees will see their benefits reduced to an extremely small fraction of their original value because only a reduced stream of insurance premium payments will be available to pay benefits. Experts and stakeholders cited two policy options to avoid the insolvencies of severely underfunded plans and the PBGC multiemployer insurance fund, as well as other options for longer-term reform. Experts and stakeholders said that, in limited circumstances, trustees should be allowed to reduce accrued benefits for plans headed toward insolvency. Also, some experts noted that, in their view, the large size of these reductions for some severely underfunded plans may warrant federal financial assistance to mitigate the impact on participants.
Experts and stakeholders also noted tradeoffs, however. For example, reducing accrued benefits could impose significant hardships on some retirees, and any possible financial assistance must be considered in light of the existing federal debt. Options to improve long-term financial stability include changes to withdrawal liability--payments assessed on an employer leaving the plan, based on its share of unfunded vested benefits--to increase the amount of assets plans can recover or to encourage employers to remain in or join the plan. In addition, experts and stakeholders said an alternative plan design that permits adjustments in benefits tied to key factors, such as the funded status of the plan, would provide financial stability and lessen the risk to employers. These and other options also have important tradeoffs, however. Congress should consider comprehensive and balanced structural reforms to reinforce and stabilize the multiemployer system. PBGC generally agreed with our findings and analysis.
|
Figure 1 is a map of the Delaware River Ship Channel that also shows the locations of various project features discussed throughout the report. The Delaware River project plan calls for deepening the main navigation ship channel from the mouth of the Delaware Bay through Philadelphia Harbor, and on to Beckett Street Terminal in Camden, New Jersey—a distance of 102.5 miles. The project includes plans for constructing three new disposal facilities for dredged material, called confined disposal facilities, in Gloucester and Salem counties, New Jersey. Two of these new disposal facilities would be needed to maintain the current channel, even if the project were not built. The new facilities and 10 other existing facilities would accommodate the material dredged during the construction of the deeper channel and during the 50-year maintenance period that would follow. The project also includes plans to restore two wetland areas, one in New Jersey and the other in Delaware, and to replenish a beach site in Delaware. The Delaware River Port Authority, the nonfederal sponsor, would share in the costs of the project, according to requirements in the Water Resources Development Act of 1986 and a project cooperation agreement that would need to be signed with the Corps before beginning construction. The Port Authority would be responsible for contributing 25 percent of the total costs of the project’s general navigation features—largely constructing and dredging—and for providing lands, easements, relocations, and rights-of-way necessary for the project. The Port Authority would pay an additional 10 percent of the general navigation feature costs after receiving credit for providing for such items as lands for dredged material disposal areas. The three states that would be affected by the channel deepening, Delaware, New Jersey, and Pennsylvania, are expected to contribute funds toward the Port Authority’s share of the project.
The Philadelphia district office of the Corps of Engineers is leading the effort to prepare the various studies and documents required for the project. It completed a Final Interim Feasibility Study and Environmental Impact Statement for the project in 1992. This document was used to inform decision-makers and the public of the Corps’ recommended plan for the project, potential alternatives to it, its benefits and costs—annualized over a 50-year period—and its likely environmental effects. The Corps then prepared a design memorandum in 1996, which provided details on the final design and engineering plans for the project, and published a Supplemental Environmental Impact Statement in 1997. In its Limited Reevaluation Report of 1998, the Corps updated the project’s benefits and costs. Approval of this report constituted the Corps’ decision to budget construction funds for the project. Corps guidance and procedures require that key decision documents such as the Feasibility Study and the Limited Reevaluation Report undergo review by district officials; the Corps’ North Atlantic division in Brooklyn, New York; and the Corps’ Office of Civil Works in Washington, D.C., before receiving final approval. On April 22, 2002, the Corps’ Director of Civil Works suspended work on the project pertaining to the project cooperation agreement, plans and specifications, and advertising for construction, until questions pertaining to the project justification have been resolved. The Corps’ analysis of project benefits contained or was based on miscalculations, invalid assumptions, and outdated information. After taking these problems into consideration, we found that the project benefits for which there is credible support would be about $13.3 million a year, as compared to the $40.1 million a year claimed in the Corps’ 1998 Limited Reevaluation Report. Some of the major problems we identified in the Corps’ analysis of project benefits are discussed below. 
Based on a number of miscalculations, the Corps’ analysis overstated annual project benefits by about $8.6 million. In one instance, the Corps misapplied the projections of commodity growth rates for traffic in the Delaware River ship channel when estimating future project benefits. For example, for oil imports from West Africa, the underlying data indicated that the appropriate predicted growth rate for 1992 through 2000 would be 5.8 percent, and 1.4 percent for 2001 through 2005. However, the Corps applied the 5.8 percent growth rate to the entire 1992 through 2005 time period and repeated the mistake by incorrectly applying predicted rates elsewhere in its analysis. In aggregate, this miscalculation led to about a $4.4 million overestimate of annual project benefits. Corps headquarters officials agreed with our analysis. After taking the Corps’ misapplication of growth rates into consideration, there remained about a $4.7 million gap between the Corps’ estimated annual project benefits and the outcome of our efforts to replicate its results. The Corps’ economist for the project told us that this gap was created by a computer error and speculated that it could have occurred when files were transferred from one program to another. Ultimately, however, the Corps was unable to definitively explain the discrepancy between its original estimate and our attempt to replicate its estimate and acknowledged that the error overstated project benefits by about $4.7 million. Corps headquarters officials agreed with our analysis. The Corps also inconsistently discounted the project’s future benefits to determine their net present value. Specifically, the Corps used different discount rates, realized benefits at different times of the year, and used different 50-year project time periods for the various benefit categories. 
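The effect of the growth-rate misapplication described above compounds over time. The sketch below uses a hypothetical base tonnage index; only the 5.8 percent (1992 through 2000) and 1.4 percent (2001 through 2005) rates come from the underlying data cited above:

```python
# Sketch of the growth-rate misapplication described above.
# The base tonnage index (100) is hypothetical; the 5.8% and 1.4%
# rates are those cited for oil imports from West Africa.

def project(base, segments):
    """Compound a base level through (years, annual_rate) segments."""
    level = base
    for years, rate in segments:
        level *= (1 + rate) ** years
    return level

base = 100.0  # hypothetical 1992 tonnage index

# Correct: 5.8% through 2000 (8 years), then 1.4% through 2005 (5 years).
correct = project(base, [(8, 0.058), (5, 0.014)])

# Misapplied: 5.8% compounded over the entire 13-year period.
misapplied = project(base, [(13, 0.058)])

print(f"correct:    {correct:.1f}")     # about 168
print(f"misapplied: {misapplied:.1f}")  # about 208
```

Because the error compounds, even a modest gap between the two annual rates produces a substantial overstatement, here roughly 24 percent, by the end of the projection period.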
For example, the Corps estimated project benefits for coal shipments for the period 2005 through 2055 (note that this is 51 years, not 50 years), while it estimated benefits for container ships from 2000 through 2050. Also, the Corps used an 8.75 percent discount rate to discount the coal benefits for present value purposes, but used 7.625 percent for crude oil. The Corps’ economist for the project acknowledged the errors and noted that the discounting procedures used for net present value purposes should have been consistently applied. Taking these errors into consideration, as called for by applicable Corps guidelines for water resource projects, we found that annual project benefits would be about $0.4 million less than the Corps had projected. Corps headquarters officials agreed with our analysis. Finally, the Corps presented its benefit estimates in dollar values for different years for the various benefit categories. The Corps stated in its 1998 Limited Reevaluation Report that the benefit and cost estimates were in 1996 dollars. However, this was not true for any of the benefit categories: coal benefits were calculated in 1991 dollars; crude oil, iron ore, and scrap metal benefits in 1993 dollars; and container benefits in 1995 dollars. A basic principle of benefit-cost analyses is that benefit and cost estimates be given in the same year dollar values. Without such consistency, it is not possible to accurately compare project benefits and costs. Taking this mistake into account, we found that estimated project benefits increased by about $0.9 million. Corps headquarters officials agreed with our analysis. Based on a number of invalid assumptions, the Corps’ analysis overstated annual project benefits by about $9.4 million. The Corps’ analysis of the benefits that a 45-foot channel would be expected to produce was based on several components, such as time at sea, time in port, tonnage shipped, and average shipping cost per unit of cargo. 
For certain crude oil vessels, one of these components is time savings in unloading crude oil. Currently, crude oil vessels that are loaded to a hull depth of more than 40 feet must stop at the mouth of the Delaware River to transfer crude oil to smaller vessels (a process called lightering) until the ship’s hull is no deeper than 40 feet below the surface. This transfer of cargo takes time and thus increases costs. With a deeper channel, such ships would spend less time lightering, or might not need to lighter at all, thus reducing costs. In calculating the economic benefits that a 45-foot channel would produce, the Corps assumed time savings from reduced lightering at both the port of origin and the port of destination. However, the benefits of reduced lightering are realized only at the destination of the voyage. Thus, the Corps double-counted this benefit, resulting in an overstatement of about $2.6 million in annual project benefits. Corps headquarters officials agreed that its analysis double-counted lightering time savings and therefore overstated annual project benefits. The Corps’ economic analysis also claimed project benefits for predicted shipments of crude oil on vessels that require a channel depth of only 40 feet. Currently, some of the vessels that deliver crude oil to the Philadelphia area refineries require less than a 40-foot channel depth, but they have the capacity to more fully load, and thus could potentially take advantage of a deeper channel. To estimate the benefits of a 45-foot channel for these vessels, the Corps used a statistical model (two-stage least squares) to predict how much oil these vessels would carry if a 45-foot channel were available. The model predicted benefits by analyzing 137 combinations of ship types and trade routes. The model predicted that only 32 of these 137 combinations (or 23 percent) would require a channel depth of greater than 40 feet to make the trip upriver to the oil refineries.
Nonetheless, the Corps assumed benefits for all 137 combinations. The Corps’ economist was unable to provide a clear rationale for claiming benefits for the 105 ship-type/trade-route combinations that its model predicted would not benefit from a deeper channel. The result was that the Corps claimed about $3.0 million in annual project benefits that were not supported by its model. Corps headquarters officials told us they believe that a greater number of these vessels could benefit from additional channel depth, but they could not verify their model. We agree that the deeper channel could benefit crude oil vessels with sailing drafts of less than 40 feet, but the amount of such benefits cannot be determined without a comprehensive reanalysis. We also found that the Corps’ container ship benefit analysis was based on several invalid assumptions. First, the Corps incorrectly assumed the same one-way distance (3,600 nautical miles) for each of several different container ship trade routes (Australia, East Coast of South America, Europe, and the Mediterranean). For example, the one-way distance from Australia to Philadelphia via the Panama Canal is about 10,000 nautical miles. Further, for the Australia-to-Philadelphia route, the Corps assumed that with a 45-foot channel containers would be shipped on larger vessels and would go through the Suez Canal—as opposed to using the Panama Canal, the current trade route for this traffic. But, the Suez Canal trade route is significantly longer than the Panama Canal trade route, raising serious questions about whether shipping via the Suez Canal would be more cost-effective. After taking this invalid assumption regarding distances into account, and using the Corps’ methodology, we found that—even with a 45-foot channel depth—the least costly container ship trade route from Australia to Philadelphia remains through the Panama Canal. 
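The least-cost route comparison above can be sketched with a simple per-voyage cost model. All figures below other than the roughly 10,000-nautical-mile Panama Canal distance are hypothetical; the point is only that a larger vessel's economies can be erased when the alternative route is much longer:

```python
# Sketch of a least-cost trade route comparison. All figures are
# hypothetical except the ~10,000 nmi Australia-to-Philadelphia
# distance via the Panama Canal noted above.

def cost_per_ton(distance_nmi, voyage_cost_per_nmi, cargo_tons):
    """Unit shipping cost: total voyage cost spread over the cargo carried."""
    return distance_nmi * voyage_cost_per_nmi / cargo_tons

# Panama route: smaller vessel (canal draft limit), shorter distance.
panama = cost_per_ton(10_000, voyage_cost_per_nmi=400, cargo_tons=30_000)

# Suez route: larger vessel enabled by a 45-foot channel, but a much
# longer voyage (assumed 14,000 nmi) at an assumed higher cost per mile.
suez = cost_per_ton(14_000, voyage_cost_per_nmi=500, cargo_tons=45_000)

print(f"Panama: ${panama:.2f}/ton, Suez: ${suez:.2f}/ton")
```

Under these assumed figures the Panama route remains cheaper per ton, consistent with the replicated finding above; assuming a single uniform distance for every trade route, as the Corps' analysis did, can flip such a comparison.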
Furthermore, vessels using the Panama Canal are currently restricted to sailing drafts of 39 feet 6 inches. Thus, the benefits of the deeper channel for this trade route would be minimal. After taking the invalid assumptions in the Corps’ analysis into account, we estimated that annual container ship benefits would be about $1.7 million less than the Corps estimated. Corps headquarters officials acknowledged our findings and their relative impact, as calculated on the basis of the information presented in the 1998 Limited Reevaluation Report. However, they now believe that container ship benefits may be higher than presented in the 1998 report because of changed shipping patterns. While changes have occurred in the container shipping industry, it would be premature to speculate on the effect these changes would have on project benefits without a comprehensive reanalysis. Moreover, the Corps also incorrectly assumed the same distance for different trade routes when estimating benefits for the category of crude oil vessels with sailing drafts less than 40 feet. Taking this invalid assumption into account reduced annual project benefits by about $1 million. Finally, we identified about $1.0 million in additional overestimated annual benefits due to other invalid assumptions related to the analysis for scrap metal, iron ore, and coal commodities. Much of the baseline information that underpins the Corps’ project benefit analysis dates from 1985 through 1991. Thus, the data are outdated and do not reflect current shipping practices and trends. For example, the data the Corps used in its economic analysis led to a projection that crude oil imports up the Delaware River would increase by over 20 million short tons from 1992 through 2000. However, our review of available data indicated that crude oil imports increased by only about 10 million short tons over this period.
We identified a number of instances, discussed below, in which the information used in the Corps’ analysis was outdated. Where possible, we updated the information, and found that the Corps’ analysis overstated total annual project benefits by about $8.8 million. The Corps’ 1992 feasibility study included benefits for the time savings related to reduced lightering of crude oil by tankers with a sailing draft of greater than 40 feet. The Corps estimated that crude oil could be unloaded from crude oil tankers to the refineries’ storage tanks about twice as fast as it could be transferred to lightering vessels. Since that time, however, the company that provides lightering services in the Delaware River has modified its fleet of vessels that performs this service. Based on information provided by several refineries and the lightering company, lightering rates are almost the exact opposite of those used in the Corps’ analysis. According to these sources, crude oil can be transferred from oil tankers to lightering vessels almost twice as fast as it can be unloaded from oil tankers to refineries. The Corps’ use of the outdated information resulted in overstating annual project benefits by about $3.2 million. As discussed previously, the Corps overestimated the projected growth in crude oil imports. Substituting the predicted growth rates used by the Corps with actual growth rates based on historical import data to the Philadelphia region (at the time of the 1998 Limited Reevaluation Report), we found that the Corps’ annual benefits were overestimated by about $3.5 million. In commenting on a draft of this report, the Corps stated that its crude oil projections (1992 through 2000) were in line with actual recorded tonnage. However, this statement is misleading because the crude oil import data that the Corps used to make this claim were inconsistently collected between 1992 and 2000. 
Nevertheless, the Corps stated in its comments that its project reanalysis would need to verify the database used to establish current and historic shipments to ensure data reliability. In addition, the Corps’ predicted growth rates for container ship imports for some trade routes were also overestimated; substituting the predicted growth rates with actual growth rates, we found that the Corps’ annual benefits were overestimated by about $0.3 million. Finally, the Corps’ analysis included benefits for exporting scrap metal to Turkey and importing coal and iron ore. However, since the time of the Corps’ 1992 analysis, trade of these commodities on the Delaware River has greatly declined. Updating for this information, we found that benefits for scrap metal, iron ore, and coal were reduced by about $1.7 million a year (beyond the $1.0 million benefit reduction mentioned earlier). Corps headquarters officials concurred that shipments of scrap metal, coal, and iron ore have decreased since the 1998 Limited Reevaluation Report but stated that shipments of these commodities increased from 2000 to 2001 and may warrant further analysis. Table 1 shows the Corps’ 1998 benefit estimates and summarizes errors in those estimates based on our evaluation of the Corps’ analysis. It is important to note that because of the numerous shortcomings in the Corps’ analysis, the actual project benefits cannot be reliably known without a comprehensive reanalysis. To be complete, such a reanalysis would need to account for the miscalculations and invalid assumptions we identified. Furthermore, it would need to comprehensively update the data used in the 1998 analysis to account for current shipping trends on the Delaware River, and reexamine the methodology used to estimate benefits. The Corps has made several changes to reflect project updates and correct for cost estimating errors since the 1998 Limited Reevaluation Report. 
Some of these changes—reducing the volume of material and locations where dredging would need to be performed—would reduce annual project costs. But this cost reduction would be offset by several other updates and corrections that would increase project costs. Accounting for these increases and decreases, in aggregate, annual project costs would likely be about $27 million (in 2001 dollars) rather than the $28.8 million estimated by the Corps in the 1998 Limited Reevaluation Report. However, other Corps costs were based on outdated information, contained errors, or did not take into account all pertinent information. While the Corps has not yet addressed these problems, doing so would likely increase project costs. Because of the interrelationship among the cost categories, the effects of the individual updates and corrections cannot be readily isolated from each other. The Corps has refined its cost estimate to account for new information. Originally, assuming that the overall depth of the existing channel was 40 feet, the Corps estimated the amount of material to be dredged at 33 million cubic yards. However, new information developed by the Corps indicates that parts of the channel are already deeper than 40 feet. Thus, the Corps has reduced its estimate of the material to be dredged to approximately 26 million cubic yards, thereby lowering the costs of dredging. Further, new surveys of the main ship channel and the use of new technology have given the Corps more accurate information about areas of the channel that are 45 feet or deeper already and will not need to be dredged. The new technology—side scan sonar—provides more accurate mapping of the contour of the shipping channel, thereby enabling the Corps to determine that less total area needs to be dredged than it previously believed. Thus, costs for construction and equipment have declined. 
In the process of reestimating project costs, the Corps decided to extend the construction period from 4 years to 5 years. This extension resulted from concerns that additional dredging equipment needed to finish the project in 4 years might not be available when necessary. Moreover, a Corps official told us that the Corps was concerned that it might not be able to obtain the funding necessary to construct the project in 4 years. Additionally, the Corps’ 1998 analysis included the cost of purchasing four new confined disposal facilities, but one of these facilities is no longer available. The Corps now plans to take the dredge material excavated during construction that was intended for this facility to another location farther away. The Corps has revised the disposal costs for the construction phase of the project to reflect this change. Also, during our review of the Corps’ cost estimate, we identified a number of omissions. For example, in its 1998 cost estimate, the Corps did not include construction costs for confined disposal facilities in its summary calculations for maintaining the 45-foot channel. A Corps official was unable to explain why this occurred, but the Corps has since corrected for the omission. Finally, we identified a number of errors in the Corps’ cost estimate, one having to do with inconsistent discounting. In estimating costs for maintenance dredging, the Corps used end-year discounting, but for construction costs and benefit calculations, it used different discount periods. As discussed earlier, benefits and costs should be determined using consistent discounting procedures. The Corps agreed that mid-year discounting is appropriate and has updated project costs for this. The Corps has not updated its cost estimates for maintenance dredging and deepening the side channels that connect the main channel to the benefiting firms’ loading docks. 
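The inconsistency between end-year and mid-year discounting noted above can be illustrated with a small present-value calculation. This is a minimal sketch, not the Corps’ actual model; the cost, discount rate, and horizon are assumed figures.

```python
# Minimal sketch (not the Corps' actual model) of how end-year versus
# mid-year discounting changes the present value of a constant stream
# of annual maintenance costs. The cost, rate, and horizon are assumed.

def present_value(annual_cost, rate, years, midyear=False):
    """Discount a constant annual cost over `years` periods.

    End-year discounting treats each year's cost as incurred at year
    end (factor 1/(1+r)**t); mid-year discounting shifts it to the
    middle of the year (factor 1/(1+r)**(t-0.5)), which raises the
    present value by a factor of (1+r)**0.5.
    """
    offset = 0.5 if midyear else 0.0
    return sum(annual_cost / (1 + rate) ** (t - offset)
               for t in range(1, years + 1))

cost, rate, years = 1_000_000, 0.07, 50  # assumed figures
pv_end = present_value(cost, rate, years)
pv_mid = present_value(cost, rate, years, midyear=True)
# Mixing the two conventions across benefits and costs makes a
# benefit-cost comparison internally inconsistent.
```

Whichever convention is chosen, applying it uniformly to both benefits and costs is what makes the resulting benefit-cost ratio meaningful.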
Moreover, a number of specific errors and omissions in the cost estimate remain to be addressed; making the necessary corrections would likely increase project costs. For example, the cost estimate for maintaining a 45-foot channel has not been revised to reflect that one of the disposal facilities is no longer available. The alternative location is more distant from the dredging operation. Corps officials agreed that this problem exists in the cost estimate and that the additional distance would increase costs. At the time of our review, the Corps had not calculated how much this correction would increase project costs. Corps headquarters officials believe that updating maintenance and berthing area cost estimates to correct for outdated data and inaccuracies would have a marginal impact on the total project costs. However, the full extent of the impact cannot be accurately estimated until project costs have been completely reanalyzed. In addition, the Corps’ current cost estimate assumes that maintenance dredging for the 45-foot channel would begin after the last year of construction and continue for 50 years. But maintenance dredging for some sections of the channel could begin before the 5-year construction phase of the project is completed because the sections that are to be deepened to 45 feet in the first years of construction would likely start to fill in as sand and silt resettle in the channel. The Corps’ estimate for maintenance costs does not account for the fact that some costs should be inflated and others discounted to reflect that maintenance in certain sections of the channel would need to be done at different times. Taking this oversight into account would increase costs. Although a Corps official acknowledged this inaccuracy, the Corps had not, at the time of our review, calculated how much this correction would increase annualized project costs. 
Corps headquarters officials have stated that this problem could potentially be corrected by modifying the project construction schedule. However, any modification of the schedule would affect the total project cost. Finally, the Corps did not include in its estimate all the capital investments that private companies, such as oil refineries, would need to make to take advantage of the deeper channel. For example, officials at two refineries told us that they would need to build additional on-site storage capacity to take advantage of a deeper channel, but these costs were not included in the cost estimate. While taking this omission into account would likely increase annualized project costs, the Corps had not addressed this issue at the time of our review. Corps headquarters officials stated that they assumed no land-side costs attributable to the proposed project at the time of the 1998 Limited Reevaluation Report. However, these officials further stated that a reanalysis of the project would reconsider the assumption of no land-side costs, in addition to other potential capital investment costs faced by the benefiting firms.

There are a number of uncertainties related to project benefits and costs that could impact the economic analysis. Some of the cases of outdated information and invalid assumptions discussed in this report are examples of the uncertainty in forecasting information such as commodity shipments, technological change, and the economic choices of industry. Reanalysis of the project might consider a more careful treatment of such uncertainty. Our review identified additional uncertainties that the Corps has not addressed in its analysis. If and when these uncertainties are resolved, expected benefits and costs could further increase or decrease, thus affecting the project’s economic merits.
It is uncertain whether the companies expected to benefit from the project, primarily oil refineries, would undertake the capital improvements necessary to take full advantage of a deeper channel and, if so, whether they would do it in the same time frame as assumed by the Corps. In its economic analysis, the Corps assumed that all potential beneficiaries would perform the work necessary to take advantage of a deeper channel, such as dredging side channels and berthing areas, by the end of the last year of planned construction. However, potential beneficiaries have made few firm commitments to make these capital improvements. An official of one company wrote to the Corps supporting the project, and a public relations official from another responded to a local newspaper saying the company would look favorably on the project. In addition, representatives of several other companies told us they believe the project could benefit them, but because substantial work could be necessary to deepen their berthing areas, retrofit docking areas, or expand storage capacity, they would not make a firm commitment to making these improvements. If any of the benefiting companies did not perform the necessary work, or if they delayed these efforts until after the project was completed, anticipated benefits would be reduced. Corps headquarters officials reaffirmed the Corps’ support of the draft project cooperation agreement, which calls upon the nonfederal sponsor to enter into agreements with the benefiting firms to complete the necessary work in conjunction with the construction of the project. The draft project cooperation agreement provides that the Corps may elect to stop project construction in the absence of such agreements. As discussed earlier, one of the benefits of the deeper channel—included in the Corps’ analysis—is a reduced need for the lightering of crude oil. 
In fact, the company that provides lightering services in the bay currently estimates that the demand for its services could decrease by a third, from lightering 90 million barrels of crude oil per year to 60 million. The uncertainty involves how this company would react to a reduction in demand for its services. An official of this company told us that the company might reduce the number of lightering vessels operating in the bay from three to two, which could potentially increase the time that vessels might have to wait for lightering services, increase lightering fees, or both. These scenarios would likely decrease the economic benefits of reduced lightering. Another possibility is that the lightering firm could reduce its fees in an effort to maintain demand for its services. In any event, less lightering could reduce gaseous emissions that occur during the lightering process, thus resulting in some environmental benefits. In addition, Federal Principles and Guidelines for Water Resource Agencies call for including project benefits that contribute to national economic development. Yet, it is uncertain whether all of the potential benefits of a 45-foot channel would contribute to national economic development because most of the ships coming into Delaware River ports are foreign-owned. The Corps’ analysis did not take into account the distribution of the project benefits between U.S. and foreign interests; in essence, the Corps assumed that all transportation savings attributable to the project would accrue to U.S. interests. In commenting on a draft of this report, the Corps stated that we are making an implicit assumption that all benefits should accrue to American interests, and those realized by foreign interests should be netted out in the determination of U.S. national economic development. First, we are not making an implicit assumption. 
The Economic and Environmental Principles for Water and Related Land Resources Implementation Studies—a publication that establishes principles intended to ensure proper and consistent planning by the Corps—and the Corps’ own guidance state: “The Federal objective of water and related land resources project planning is to contribute to national economic development consistent with protecting the Nation’s environment, pursuant to national environmental statutes, applicable executive orders, and other Federal planning requirements.” Second, it is unclear how the Corps can meet that definition of national economic development without analyzing the distribution of project benefits between U.S. and foreign interests. In summary, our concern is that to the extent that some of the transportation cost savings of this project—as well as those for other similar Corps navigation projects—accrue to foreign interests, the contributions of the project to national economic development are overstated. Finally, an area of uncertainty that could increase project benefits is the degree to which there are commodities being shipped on the Delaware River that the Corps did not include in its economic analysis. For example, recent shipping data indicate that imports of iron and steel increased from about 550,000 short tons in 1990 to about 4 million short tons in 2000. The importers of these and other goods might benefit from a deeper channel, but the Corps’ benefit analysis did not consider these commodities. There are several uncertainties regarding project costs. One area of uncertainty involves mitigation costs for any unexpected environmental damage that could potentially emerge. While the Corps has included some costs for assessing the likely environmental impacts of the project, should monitoring or construction activities reveal unanticipated problems, the costs to slow the dredging schedule or rebuild damaged habitat are unclear. 
In addition, discussed below are several other uncertainties that we identified during our review that may increase or decrease costs by an as yet unquantified amount. One such uncertainty concerns the recent addition of beach replenishment to the project’s plan for disposal of dredged material. The Corps’ current disposal plan calls for transporting sand dredged from the lower Delaware Bay to Broadkill Beach in Delaware. However, it is unclear whether the clean sand will ultimately go to Broadkill Beach because, pending an agreement with the state of Delaware, another beach, or beaches, could be selected. Using a beach that is closer to the dredging area would result in a lower cost for beach replenishment than is currently estimated. Alternatively, if the selected beach were farther from the channel-dredging area, the cost of the operation would be higher than estimated. For example, the current costs of dredging the channel and transporting the sand to Broadkill Beach are estimated at $10 per cubic yard; the costs for the same activities with Dewey-Rehoboth Beach as the destination would be about $18 per cubic yard. Given the uncertainty about which beach or beaches will ultimately be chosen, the final cost of this activity is unclear. It is also uncertain how much purchasing the sites for the three new confined disposal facilities in New Jersey would cost, and whether the project sponsor would be able to acquire all of these sites. Currently, these sites are estimated to cost $15.4 million. The Corps does not intend to update its appraisal of these sites, which would involve estimating the amount of land to be purchased, until after the project cooperation agreement between the Corps and the nonfederal sponsor has been signed. In the meantime, the Gloucester County Improvement Authority of New Jersey is seeking to buy portions of these areas for recreational purposes. 
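The sensitivity of replenishment cost to the choice of beach can be shown with the unit costs cited above. The $10 and $18 per-cubic-yard figures are from the report; the sand volume below is an assumed number for illustration only.

```python
# Hypothetical illustration of how the choice of disposal beach changes
# replenishment cost. The $10 and $18 per-cubic-yard unit costs are the
# report's figures; the sand volume is an assumed number for illustration.

UNIT_COST = {"Broadkill": 10.0, "Dewey-Rehoboth": 18.0}  # $ per cubic yard

def replenishment_cost(beach, cubic_yards):
    """Total cost of dredging the channel and hauling sand to `beach`."""
    return UNIT_COST[beach] * cubic_yards

volume = 2_000_000  # assumed cubic yards of clean sand
delta = (replenishment_cost("Dewey-Rehoboth", volume)
         - replenishment_cost("Broadkill", volume))
# At an assumed 2 million cubic yards, the $8 unit-cost difference
# amounts to $16 million.
```

Because each additional dollar per cubic yard scales directly with the total volume placed, a more distant beach raises total cost roughly in proportion to its haul-distance premium.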
Given these uncertainties, it is possible that the costs of land needed for new confined disposal facilities could increase. Another uncertainty concerns how much it would cost the Corps to comply with certain environmental restrictions, called windows. Designed to protect habitat and vulnerable populations such as horseshoe crabs, oysters, and winter flounder in certain sections of the Delaware River, the windows limit where and when dredging, beach replenishment, and wetland restoration activities can occur. For example, to protect the habitat of winter flounder, dredging cannot occur in the lower portion of the river from January through May without relief from the window. If the Corps could not complete its scheduled dredging from June through December, it would incur additional costs to stop the work and start it up again later. The Corps is currently studying the extent to which fish and surrounding habitat would be harmed by dredging activities. A Corps official told us that these studies may show that the current windows are overly protective, a finding that the official believes would provide some support for federal and state agencies to provide relief from some of the restrictions. In addition, the Corps plans to use two dredges in the areas where restrictions are established to reduce dredging time. In any event, the 1998 estimate and its recent update do not include the potentially increased costs of complying with these windows. According to a Corps official, because the Corps is unsure how much relief it would obtain from the restrictions, it is uncertain how much project costs would increase. Corps headquarters officials now state that a significant portion of the project construction work could be accomplished within the existing environmental windows. Specifically, they have said that the operations at Broadkill Beach and Kelly Island would not require relief. 
However, to the extent that the Corps cannot obtain the necessary relief in other areas, project costs would increase. A further uncertainty concerns whether the Corps will employ the technique known as economic loading for its dredging operations in the lower bay area. Using this technique, the water content of dredged material that has been loaded onto a barge, or dredge, is allowed to drain back into the river at that site. Therefore, when the barge is fully loaded, it contains a higher percentage of dredged material, resulting in fewer trips to the disposal sites. Because of concerns that the water drained from the dredge material would contain a large amount of particulate matter that would cause a plume in the water column, the Corps studied the potential environmental effects of economic loading in 1999. The study concluded that economic loading would not cause significant long-term environmental harm. Having reviewed the results of the study, officials from Delaware and New Jersey said they would consider allowing the use of economic loading in the lower bay. However, it should be noted that this option was not included as part of the Corps’ permit application to the state of Delaware and that formal approvals from either Delaware or New Jersey have not been requested. Should the Corps formally seek and obtain permission to use economic loading, costs would decrease. In commenting on the draft of this report, Corps officials said that indications from the states of New Jersey and Delaware are that this process may be viable and practical in the Delaware Bay for dredging sandy material. The Corps estimated that if economic loading were permitted, it could result in a 30 to 40 percent reduction in the unit cost of dredging, which the Corps stated previously would translate into approximately $2 million in annualized cost savings. 
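The Corps’ estimate that economic loading could cut unit dredging costs by 30 to 40 percent, translating into roughly $2 million in annualized savings, can be bracketed with a simple calculation. The baseline annualized dredging cost below is an assumed figure chosen so the midpoint of the range is consistent with that estimate; it is not a Corps number.

```python
# Sketch of the economic-loading saving described above: a 30 to 40
# percent cut in the unit cost of dredging, which the Corps translated
# into roughly $2 million in annualized savings. The baseline annualized
# dredging cost is an assumed figure chosen so the midpoint of the range
# is consistent with that estimate.

def annualized_saving(baseline_cost, reduction_fraction):
    """Saving from cutting the unit dredging cost by `reduction_fraction`."""
    return baseline_cost * reduction_fraction

baseline = 5_700_000  # assumed annualized dredging cost, dollars
low = annualized_saving(baseline, 0.30)
high = annualized_saving(baseline, 0.40)
# The 30-40 percent range brackets an annualized saving of roughly
# $1.7 million to $2.3 million.
```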
However, the Corps recognized that it is uncertain whether economic loading would be used, and that this issue would need to be investigated in any reanalysis of the project. Finally, a new dredging technology, known as a ladder pump, increases dredge material production rates and has the potential to decrease costs for some of the dredging operations that would occur during the construction and maintenance phases of the project. However, the Corps did not incorporate the use of this new technology into its cost estimate, and it is not known whether the contractors that would conduct the project’s dredging operations would use it.

The Corps has a three-tiered quality control process designed to ensure that its economic analyses of proposed projects are factually accurate and based on sound economic principles. Three organizational levels are involved: the Corps’ district offices, division offices, and headquarters. In general, for projects such as the Delaware River deepening project, the following process is used:

The relevant district office is responsible for conducting a feasibility study that addresses the technical and economic aspects of a proposed project and manages the planning, engineering, and design work that follows. The district office also prepares the Limited Reevaluation Report that updates the technical and economic data as needed. Once it has developed these project justification documents, the district office reviews them for technical accuracy and quality, and upon approval, it forwards them to the division for its review.

The division’s responsibility is primarily procedural. It reviews the project justification package to ensure that the district has prepared the required documents such as the Feasibility Study and Limited Reevaluation Report and has obtained all necessary approvals. It does not review such documents for technical accuracy or to verify the underlying analysis.
The division ensures that reports such as the Limited Reevaluation Report have undergone a technical review and that the district has issued a quality control certification report with the required district office level approvals. Once the division is satisfied that procedures have been followed, it approves the package and forwards it to headquarters. Headquarters is responsible for ensuring that critical documents such as the Feasibility Study and Limited Reevaluation Report, the major assumptions on which the justification is based, and the recommendations adhere to Corps policy for conducting benefit-cost analyses and environmental studies. Headquarters also ensures that any concerns that it has raised have been addressed. Once headquarters is satisfied that policy has been followed and that the justification is based on sound economics and environmental studies, it approves the project for construction funding. Although the district, division, and headquarters offices approved the project according to the procedures in place in 1992 and changes that followed in 1995, these review processes were ineffective in detecting and correcting the significant miscalculations, invalid assumptions, and outdated information in the economic analysis that our review revealed. For example, we found no indication that problems related to benefits, such as misapplying growth rates, double-counting lightering time savings, and miscalculating potential benefits derived from time savings in unloading crude oil at the refineries, were detected during the internal reviews and quality control certification process. This raises serious questions about the adequacy and effectiveness of the Corps’ review process. Corps headquarters officials have stated that notwithstanding the changing and existing procedures, there were failures in the execution of the process for the development and review of the feasibility analysis and the Limited Reevaluation Report. 
The economic update in the Limited Reevaluation Report was performed in accordance with existing regulations but did not get to the root of the underlying problems, some of which were carried forward from the original report. Another concern is that since 1995, the primary responsibility for performing the quality reviews of key project documents has been largely delegated to the district office level. The Philadelphia district office prepared the economic analysis and other documents justifying the deepening project and, following the 1995 change in Corps procedures, prepared the 1998 Limited Reevaluation Report and then conducted the technical review and quality control certification process on the report. The fact that the same office that prepared the economic analysis was also responsible for conducting the technical and quality reviews raises questions about the independence of the review process. Similar concerns about the Corps’ project review procedures were addressed in section 216 of the Water Resources Development Act of 2000, which directed the Corps to contract with the National Academy of Sciences to study and make recommendations with regard to the need for independent reviews of Corps feasibility studies. The estimated date of completion for the study is 2003. Looking beyond the Delaware River deepening project, the number and magnitude of problems that were not detected by the Corps’ quality control process raise questions about whether, or to what degree, such oversights might exist for other Corps projects. This concern is shared, at least to some degree, by the Corps of Engineers.
Specifically, shortly after we briefed the Corps’ Director of Civil Works on our findings regarding the Delaware River deepening project, he initiated a pause on projects authorized, but not yet under construction, to resolve any questions about the accuracy and currency of the Corps’ economic analyses, the validity of plan formulation decisions, and the rigor of the Corps’ review process.

The Corps has largely addressed the likely environmental effects of the project’s dredging operations and dredge material disposal to the satisfaction of federal and state environmental agencies; however, several issues are not yet resolved. On the basis of their review of the Corps’ environmental impact statements and studies of the potential for the project to disturb toxic dredge material, contaminate water, and harm wildlife and habitat, most federal and state agencies granted the Corps the necessary approvals to proceed with the project. A major exception is the Corps’ request for a permit to conduct dredging operations in Delaware waters, which is still pending. In addition, several other issues remain outstanding.

With few exceptions, the Corps has obtained the approvals it needs from federal and state environmental agencies to proceed with project construction plans. As required by the National Environmental Policy Act of 1969, the Corps coordinated with other federal agencies and states; obtained comments from the agencies, the states, and the public; and reported on the potential environmental impacts of the project in the 1992 Environmental Impact Statement and the 1997 Supplemental Environmental Impact Statement. The Corps also made some changes as a result of agencies’ comments. For example, in response to concerns raised by the National Marine Fisheries Service and others, the Corps eliminated its proposal to dispose of some dredged material at an underwater sand stockpiling location.
On the basis of their review of these and subsequent documents, as well as consultations performed by the Corps, officials from the U.S. Environmental Protection Agency, the National Marine Fisheries Service, the U.S. Geological Survey, and the states of New Jersey and Pennsylvania determined that deepening the Delaware River ship channel would not cause significant long-term harm to the environment. Specifically, these officials told us they were satisfied that the project’s dredging and disposal operations would not degrade water quality, cause saltwater intrusion, release contaminated sediments, or seriously harm endangered or other species. The federal and state approvals were also based on a commitment by the Corps to conduct additional studies and monitor the environmental impact of the ongoing channel deepening and construction of confined disposal facilities. Such monitoring would be central to ensuring that project activities would not degrade water quality, damage groundwater through saltwater intrusion, or harm commercially valuable or vulnerable species. Consequently, the Corps has conducted preconstruction monitoring studies on whether the project would adversely affect oysters and water and sediment quality. In addition, the Corps has studied the likely impact of the project on blue crabs in the lower part of the bay and on winter flounder, horseshoe crabs, and shorebirds at Kelly Island. The Corps has provided the results of these studies to the federal and state environmental agencies for their review, and Corps officials told us that they would continue to monitor these and other environmental issues during and after construction. Federal and state officials told us that should monitoring reveal a problem, the Corps would have to undertake some form of mitigation, such as slowing the dredging schedule or rebuilding damaged habitat. 
The Corps has not yet obtained a permit from Delaware to conduct dredging operations for the project that affect its waters. The Corps has stated that it will not begin the project until it obtains this permit. Its Philadelphia district office applied for the permit in January 2001 and participated in a public hearing on the application in December 2001. Delaware officials told us that should the state approve the permit application, the permit could include a number of monitoring requirements. For example, Delaware could require the Corps to monitor for possible violations of PCB standards near the dredging zone. As of May 2002, the State of Delaware was still considering the permit application. One remaining issue concerns the possibility that, under certain conditions, the project might cause increased saltwater intrusion into the Delaware River estuary and the groundwater of the area. While Pennsylvania and New Jersey accepted the results of the Corps’ earlier tests for saltwater intrusion, the Delaware River Basin Commission, which sets water quality standards for the Delaware Estuary, requested an additional test. To satisfy the commission’s concerns, the Corps agreed to the test, which it has not yet conducted. In addition, New Jersey officials told us that they would encourage the Delaware River Port Authority to explore alternatives to disposing of dredge material, such as using it for highway construction, before New Jersey would grant water quality certificates for the three confined disposal facilities to be acquired by the Port Authority and built by the Corps in New Jersey. In addition, the Corps and New Jersey’s Department of Environmental Protection have developed a groundwater-monitoring program designed to ensure that existing confined disposal facilities in New Jersey do not harm drinking water. A similar program is planned for the three new confined disposal facilities. 
Finally, as mentioned earlier, the Corps has not sought formal approval from New Jersey and Delaware for using the economic loading technique. A Corps official told us that the Corps would probably wait until it knows the outcome of its Delaware permit application before deciding whether to seek economic loading approval. Similarly, the Corps has not applied to Delaware or the National Marine Fisheries Service for relief from environmental windows, which restrict when dredging can be performed. However, the Corps is conducting an evaluation of essential fish habitat and is collecting information on the potential effects of the project on horseshoe crabs, shorebirds, and hibernating female blue crabs to determine whether to seek relief from regulatory agencies’ restrictions on dredging. Also, the Corps has not yet obtained a special use permit from the U.S. Fish and Wildlife Service for its planned wetlands restoration at Bombay Hook National Wildlife Refuge.

We found significant problems in the Corps’ most recent economic analysis for the Delaware River deepening project. These involved several miscalculations, invalid assumptions, and reliance on outdated information. Consequently, we believe that the Corps’ current project analysis does not provide a reliable basis for deciding whether to proceed with the project. In addition, there are a number of uncertainties about the project that could increase or decrease both benefits and costs. Because of the significance of the problems we identified, the uncertainties that surround the project, and the ineffectiveness of the Corps’ quality control process, the actual economic merits of the Delaware River deepening project will not be reliably known unless and until it is comprehensively reanalyzed.
Considering the significant problems we identified with the Corps’ economic justification for the Delaware River project, we recommend that the Secretary of the Army direct the Corps of Engineers to prepare a new and comprehensive economic analysis of the project’s benefits and costs that includes all aspects of the analysis and corrects for the miscalculations, erroneous assumptions, and outdated information contained in the current analysis; obtain the information, where possible, that is needed to address the uncertainties—such as changing commodity movements over the last decade and alternative dredging techniques—that could significantly affect project benefits and costs; engage an external independent party to review the revised economic analysis to ensure that it accurately and fairly represents the expected benefits and costs of the proposed project; and submit the revised analysis, including the external independent review, to the Congress for its use in considering future appropriation requests for the project.

We provided a draft of this report to the Secretary of the Army for review and comment. In response, the Under Secretary of the Army stated that the report is important to the department because it provides a current, critical look at the proposed Delaware River deepening project and identifies legitimate concerns that warrant comprehensive reanalysis. More specifically, the Under Secretary stated that the Corps concurs that a new and comprehensive economic reanalysis of the project’s benefits and costs would be undertaken, and that once the economic reanalysis is complete, an external independent party would be engaged to ensure that it accurately and fairly represents expected benefits and costs of the proposed project. The Under Secretary also provided additional comments on various aspects of the project, which are discussed as appropriate in the body of the report. The full text of the comments is included as appendix II.
We conducted our review between July 2001 and May 2002 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is presented in appendix I. As arranged with your offices, unless you publicly announce this report’s contents earlier, we plan no further distribution of the report until 10 days after the date of this letter. At that time, we will send copies to the Secretary of the Army, appropriate congressional committees, and other interested Members of Congress. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix III. Our review had two main objectives: to determine (1) whether the Corps of Engineers’ economic analysis accurately and appropriately considered the benefits and costs of the project and (2) whether the environmental implications of the project have been fully addressed. To determine whether the Corps of Engineers’ economic analysis appropriately considered the benefits and costs of the Delaware River deepening project, we assessed the extent to which the Corps met requirements and followed accepted practices in estimating the various elements of the benefits and costs, including whether the major assumptions were reasonable and well supported. To determine whether the environmental implications of the project have been fully addressed, we contacted a number of federal and state environmental agencies such as the Environmental Protection Agency, the Delaware River Basin Commission, and the Delaware Department of Natural Resources and Environmental Control. We also obtained information from environmental groups and other interested parties. 
For both these objectives, we obtained the Corps’ key documents for the project, such as the Interim Final Feasibility Study of 1992, the Design Memorandum of 1996, the Limited Reevaluation Report of 1998, the Environmental Impact Statement of 1992, and the Supplemental Environmental Impact Statement of 1997. We discussed the content and sources of the data in these reports with Corps officials and staff responsible for their preparation and approval. To validate the data and assumptions the Corps used in its analyses, we obtained external data and contacted external parties where appropriate. Where we obtained other analyses or studies, we considered the points raised in these external studies but conducted our own independent review. Where we identified problems with or changes to benefits, costs, or environmental issues, we discussed them with the responsible Corps staff and considered any new data or revisions that they provided. If the problems involved miscalculations, invalid assumptions, errors, omissions, or outdated information that would affect the project’s benefits, costs, or the environment, we attempted to identify how or why these problems occurred. In addition, we identified uncertainties related to the benefits, costs, and the environmental implications of the project and considered whether resolving these uncertainties would increase or decrease the benefits and costs. We also reviewed the Corps’ quality control processes. In the following sections, we provide more detail on our first objective consisting of benefits, costs, uncertainties, and the Corps’ quality control process; and our second objective about the potential environmental implications of the project. To evaluate the Corps’ project benefit analysis, we had three primary objectives. 
First, we used the Corps’ data and methodology—obtained from the 1992 Interim Final Feasibility Study and the 1996 Design Memorandum, and through interviews with the Corps’ economist—to attempt to replicate the estimated annual project benefits for each commodity as published in the Corps’ 1998 Limited Reevaluation Report. These commodities included coal, containers, crude oil shipped on vessels with sailing drafts greater than 40 feet, crude oil shipped on vessels with sailing drafts less than or equal to 40 feet, iron ore, and scrap metal. Where we were unable to replicate the Corps’ estimates, we met with the Corps’ economist to discuss and resolve the discrepancies. Second, to identify questionable assumptions in the analysis, we examined the data used and calculations applied in the Corps’ benefits programs. To determine whether the Corps’ assumptions were supportable, we requested documentation or guidelines from the Corps’ economist that validated the questioned approach. In addition, we met with industry representatives to obtain their views. Third, to identify whether the analysis was based on up-to-date information, we reviewed the origin of any changes to the benefits estimates in the 1998 Limited Reevaluation Report from the 1992 Interim Final Feasibility Study and the 1996 Design Memorandum. Where no changes in benefits estimates occurred, we searched for data sources available at the time of the Corps’ latest report. Where possible, we updated the information on the basis of historical or industry trends at the time of the 1998 Limited Reevaluation Report. We met with officials from the four companies that own the six oil refineries representing 80 percent of the benefits in the Corps’ analysis, as well as Maritrans Corporation, which conducts the lightering operations for the oil refineries on the Delaware River.
We obtained information on commodity shipments up the Delaware River to the Philadelphia region from the Delaware River Port Authority. We also spoke with the Maritime Exchange, which gathers data on ship tracking and reporting on the Delaware River and represents a cross section of interests and companies that depend upon or conduct business on the river. In addition, we met with the firm Rice, Unruh, and Reynolds—shipping agents—to gather information on shipping practices, and with the National Ports and Waterways Institute to gather information about the container shipping industry. We determined the net effect of the miscalculations, invalid assumptions, and outdated information on the Corps’ $40.1 million annual project benefit estimate by applying an eight-step iterative approach. In the first four steps, we corrected for an error in the Corps’ computer program, the misapplication of growth rates, inconsistent discounting, and different year dollar values. For the fifth step, we corrected for the Corps’ invalid assumptions regarding trade route distances and its calculation of average shipping costs. With the sixth and seventh steps, we updated the information used in the Corps’ analysis— specifically, the relative difference between the unloading and lightering rates, and the commodity growth rate information through 1997. In the eighth step, we corrected for the Corps’ incorrect assumption that its statistical model predicted benefits for the 45-foot channel deepening project—when it did not. The net effect of the eight steps was a reduction in the estimated annual project benefits to about $13.3 million (in 1996 dollars). To establish a baseline against which future revisions could be compared for completeness and accuracy, and to get detailed information on planning, engineering, and design study costs, we used the 1996 Design Memorandum. We then compared the 1996 estimate with that in the Limited Reevaluation Report of 1998. 
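The eight-step sequence described above can be pictured as an ordered pipeline of corrections applied to a running benefit estimate. In this sketch, the individual adjustment factors are hypothetical placeholders chosen only so the arithmetic lands near the reported result; the report supplies only the starting estimate ($40.1 million) and the corrected total (about $13.3 million in 1996 dollars), not the effect of each step.

```python
# Illustrative sketch of the eight-step iterative correction approach.
# The per-step adjustment factors below are hypothetical; only the
# starting $40.1 million estimate and the final ~$13.3 million result
# (1996 dollars) come from the report.

def apply_corrections(initial_estimate, corrections):
    """Apply a sequence of (label, adjustment) pairs in order,
    returning the running estimate after each step."""
    history = [("initial estimate", initial_estimate)]
    estimate = initial_estimate
    for label, adjust in corrections:
        estimate = adjust(estimate)
        history.append((label, estimate))
    return history

# Hypothetical multiplicative adjustments standing in for the eight steps
# (program error, growth rates, discounting, dollar-year conversion,
# route/cost assumptions, updated lightering rates, updated commodity
# growth data, and the model-scope correction).
steps = [
    ("fix computer program error",   lambda b: b * 0.95),
    ("correct growth rates",         lambda b: b * 0.90),
    ("consistent discounting",       lambda b: b * 0.97),
    ("common-year dollars",          lambda b: b * 0.98),
    ("route and cost assumptions",   lambda b: b * 0.75),
    ("updated lightering rates",     lambda b: b * 0.80),
    ("updated commodity growth",     lambda b: b * 0.85),
    ("model scope correction",       lambda b: b * 0.80),
]

for label, value in apply_corrections(40.1, steps):
    print(f"{label:30s} ${value:5.1f} million")
```

Because the steps are applied iteratively, each correction operates on the already-corrected estimate rather than the original figure, which is why the order of application matters in such an analysis.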
Our subsequent efforts focused on changes to the project, various updates to costs by the Corps, corrections we identified, and additional issues that could further affect costs. Where changes in the project had occurred, and where we identified errors or omissions that the Corps agreed to correct, we determined whether the changes or corrections would increase or decrease the annualized project costs. Our review included the two main parts of the Corps’ cost estimate: the construction costs for the main navigation ship channel and the private berthing areas, and the operations and maintenance costs for operating and maintaining a 45-foot channel rather than a 40-foot channel. We also reviewed the Corps’ estimates of costs to construct or modify both new confined disposal facilities and existing confined disposal facilities. Because the Corps makes extensive use of internally developed cost-estimating computer programs, we obtained these programs so that we could replicate construction costs and operations and maintenance costs using the Corps’ programs and methodology. We gained an understanding of the Corps’ Cost Engineering Dredge Estimating Program, which estimates costs for each of the three types of dredging operations used in the project, and the Corps’ Micro Computer-Aided Cost Estimating System, which estimates the costs of constructing elements of the project that require land equipment, such as the confined disposal facilities. We discussed these programs, and the major assumptions and information used in them, with Corps staff in other offices responsible for developing the cost-estimating programs and providing updates for them. To identify a more accurate and updated cost estimate, we took into account changes to the project that had occurred since the 1998 Limited Reevaluation Report and corrections for errors and omissions that we identified during our work.
We obtained documentation on the project changes, verified the information, and determined whether any updated cost estimates undertaken by the Corps accurately reflected the changes. For example, where new information existed on the volume of the material to be dredged from the channel, we asked for documentation from the Corps, not only on the volume of material that had been reduced but also on where those reductions had occurred in the channel. When we learned that less of the channel needed to be dredged because side-scan sonar technology provided better information on the areas of the river that were already 45 feet deep, we obtained survey maps from the Corps and verified that the estimated reductions in surface areas that the Corps was using in its revised costs were reasonable. We also identified any costs that were in error, or that were omitted, such as costs to reflect the loss of a disposal site and to transport material to other locations. Since the Corps was updating various cost factors and revising its estimates for changes in the project design and scope at the same time that we were identifying the extent to which costs were accurate, we reestimated the overall project costs using the Corps’ programs and most recent data. We compared our estimate with that of the Corps and obtained agreement with the Corps on a revised annualized project cost estimate that accounted for project changes and corrections that had been made as of the time of our review. During our review, we identified a number of uncertainties related to project benefits and costs that the Corps had not addressed in its economic analysis. In some cases, the uncertainties are linked to decisions that are outside the control of the Corps, while others concern information that may not be currently available. Some of these uncertainties are the result of environmental issues that could affect future project benefits and costs. 
When we identified an uncertainty, we sought information from Corps officials and others that would allow us to say whether the uncertainty would increase or decrease benefits and costs. We obtained the Corps’ quality control procedures to gain an understanding of its processes and discussed them with Corps officials. We identified the roles and responsibilities of the district, division, and headquarters offices as they related to the Delaware River channel deepening project at the time of the feasibility study in 1992 and any changes in the review processes after that time. In doing so, we obtained copies of technical reviews and the quality control certification for the project, identified the offices responsible for the reviews, and obtained comments that the reviewers had on the economic analysis and the environmental impact statement. We also reviewed the responses of the Philadelphia district staff to determine whether comments by headquarters and the division were taken into consideration in any updated analysis. To determine whether the Corps had considered and analyzed all areas of environmental concern, we reviewed the Corps’ Environmental Impact Statement of 1992, the Supplemental Environmental Impact Statement of 1997, and other Corps studies. We contacted the Environmental Protection Agency, the National Marine Fisheries Service, the U.S. Fish and Wildlife Service, the Delaware River Basin Commission, the U.S. Geological Survey, and environmental agencies in the states of Delaware, New Jersey, and Pennsylvania to discuss the project and obtain studies and documents from them. We also reviewed information provided to us by environmental groups and other interested parties. 
Where the Corps had tested for contaminated sediments and hazardous materials, and had conducted studies to determine the potential impact of the project on water quality, groundwater, fish and wildlife and their habitat, we reviewed the test data and studies and discussed them with responsible federal and state agencies. Further, we reviewed the Corps’ studies and monitoring plans for identifying any adverse impacts of the project on water quality, groundwater, fish and wildlife, and aquatic habitat with these agencies. For example, to address concerns about contaminated sediments from the dredging operations in the main navigation ship channel, in the private berthing areas of the oil refineries, and at confined disposal facilities, we reviewed sampling data in the Corps’ Supplemental Environmental Impact Statement. We reviewed the type of tests the Corps had conducted and the number of samples and sites selected, and we discussed the tests and results with Corps staff. We contacted officials from the Environmental Protection Agency, the Delaware River Basin Commission, and state environmental agencies in Delaware, New Jersey, and Pennsylvania to determine whether they were satisfied with the test results and the Corps’ monitoring plans for identifying potential problems during and after construction. Additionally, we identified unresolved environmental issues and any outstanding approvals that remain open. For instance, to determine whether and to what extent saltwater intrusion into aquifers from dredging operations was addressed and what the Corps intended to do to resolve any outstanding concerns, we discussed this issue with Corps staff, and officials from the Environmental Protection Agency in Philadelphia and New York City, as well as with officials from the departments of environmental protection in Delaware, New Jersey, and Pennsylvania. We determined how satisfied these officials were with the Corps’ studies and tests. 
We also met with officials from the Delaware River Basin Commission to discuss their outstanding request for an additional test for saltwater intrusion under certain drought conditions. We followed up with Corps officials to identify what they planned to do to resolve the Delaware River Basin Commission’s concern. In addition to the individual named above, Chuck Barchok, Maureen Driscoll, Christopher Murray, Ryan Petitte, Harold Brumm, Richard Johnson, Jay Scott, and Nancy Crothers made key contributions to this report.
The U.S. Army Corps of Engineers' February 1992 Final Interim Feasibility Study and Environmental Impact Statement reported that deepening the Delaware River ship channel from 40 to 45 feet was economically justified and environmentally feasible. However, GAO found that the Corps' analysis does not provide a reliable basis for deciding whether to proceed with the project. GAO identified several miscalculations, invalid assumptions, and the use of significantly outdated information in the Corps' benefits estimate. In addition, the Corps' economic analysis did not factor in several unresolved issues and uncertainties, the outcomes of which could either increase or decrease the benefits and costs of the project. Because of these shortcomings, the actual economic merits of the project will be unclear until the Corps reanalyzes it. The Corps of Engineers has largely addressed the environmental concerns of federal and state environmental agencies. However, several unresolved issues remain, including the issuance of a permit from the state of Delaware governing construction projects that affect state waters.
AOC manages and operates CPP to support the agency’s strategic goals and objectives, including stewardship of Capitol facilities and conservation of resources. AOC must also comply with relevant laws and regulations, including environmental-protection and energy-reduction requirements. CPP consists of six main facilities: an administration building, a boiler plant, the West Refrigeration Plant, the West Refrigeration Plant Expansion, the East Refrigeration Plant, and a coal yard at a secondary site (see fig. 1). CPP serves 25 buildings comprising about 17 million square feet, including the U.S. Capitol building, House and Senate office buildings, the Supreme Court, and five buildings not under AOC’s management, such as Union Station and the Government Publishing Office. Figure 2 identifies the primary Capitol Complex facilities served by CPP. CPP provides steam to 25 buildings and chilled water to 19 buildings. CPP bills non-AOC customers for its costs under arrangements in various statutes. CPP is a district energy system that generates steam and chilled water for distribution through tunnels and direct buried piping to heat and cool nearby buildings (see fig. 3). Many district energy systems exist throughout the country, often at universities and office parks. In the absence of the district energy system, AOC would likely have to install a more dispersed system, such as heating and cooling generation equipment in each building. Alternatively, AOC could potentially obtain steam and chilled water from another district energy provider, such as the General Services Administration (GSA), to serve some of the buildings in the complex, but could face challenges in doing so. CPP has seven fossil-fuel fired boilers that primarily burn natural gas to generate steam.
The boilers operate primarily on natural gas, but AOC can burn coal in two boilers when additional steam capacity is needed or fuel oil in five boilers if, for example, interruptions occurred in the supply of natural gas (see table 1). As we previously reported, CPP increased its use of natural gas over coal and fuel oil beginning in 2008 as a result of the “Green the Capitol” initiative, which began at the direction of the House of Representatives. CPP has continued this practice for environmental and other reasons. CPP currently has eight electricity-powered chillers to produce chilled water. AOC officials said CPP has experienced sporadic mechanical and electrical problems with its oldest chillers. AOC has a long-term plan to replace its older chillers, referred to as the Refrigeration Plant Revitalization (RPR) project, which calls for the replacement of several existing chillers and the addition of cooling towers over several phases by 2018. Table 2 provides information on CPP’s chillers in the West Refrigeration Plant and its West Refrigeration Plant Expansion. Since 2008, AOC has implemented many measures to manage the energy-related costs of the buildings served by CPP. AOC’s efforts have reduced the energy needed to cool the buildings in the complex and the energy-related costs of operating CPP have fallen since fiscal year 2011. AOC has additional opportunities to further manage its energy costs. Since 2008, AOC has implemented many measures to manage the energy-related costs of the complex. To reduce the costs of producing steam, AOC replaced some steam-powered water treatment equipment at CPP with new equipment powered by electricity. Specifically, in fiscal year 2014, AOC replaced two of the pumps feeding the plant’s boilers, formerly powered by steam, with new electric pumps.
An outside study prepared by a consultant to AOC found that this would reduce in-plant steam use and improve the overall efficiency of the system, resulting in an almost 7 percent decrease in annual fuel costs and a nearly 10 percent improvement in the plant’s steam output. Additionally, AOC officials said they secured better terms in fiscal year 2014 for purchasing natural gas to operate the plant’s boilers. Starting in fiscal year 2014, AOC paid $8.36 per thousand cubic feet of natural gas as opposed to the $12.95 the agency paid in fiscal year 2013, a reduction of approximately 35 percent. The contract expires in 2017. AOC also completed several projects to lower the costs of providing chilled water. AOC officials said that in fiscal year 2012 they began a practice known as “free cooling” at CPP to reduce electricity costs. During winter months, CPP uses outside air, the plant’s cooling towers, and heat exchangers to chill water rather than using its electric chillers. A 2013 study of the chilled water system shows that CPP should be able to meet the majority of chilled water demand in winter months using free cooling, thereby lowering its electricity costs. The study estimated that free cooling would achieve about $307,000 annually in savings through reduced electricity use. Also, in fiscal year 2014, AOC installed new chillers at CPP. The 2013 chilled-water-system audit concluded CPP could produce chilled water more efficiently if it increased its use of two relatively new and efficient chillers located in the East Refrigeration Plant, where the chillers were underused due to the relatively poor condition of the cooling towers there. AOC initially planned to move the two chillers to the West Refrigeration Plant Expansion. Ultimately, AOC purchased and installed two new chillers of similar capacity and efficiency.
Additionally, in fiscal year 2014 AOC started construction to add two new chillers and three cooling towers to the West Refrigeration Plant Expansion as part of the RPR project. AOC budget documents state the new chillers will operate 50 percent more efficiently than the older chillers. To better understand energy consumption, AOC installed energy meters at most of the buildings it serves and is installing submeters within selected buildings. Energy meters can provide information on the consumption of steam, chilled water, and electricity. According to AOC officials, metering allows the agency to identify changes in energy consumption that could indicate equipment problems, measure progress on energy conservation, assist in identifying future conservation measures, and evaluate energy losses during distribution. Within the last 6 years, AOC installed meters for most of the buildings served by CPP. AOC does not have meters for individual office spaces, but plans to install meters for some energy-intensive spaces, such as kitchens and data centers. According to AOC officials, the agency does not generally track energy use at the occupant level because of the cost and instead encourages energy conservation within offices through education and awareness activities. Select operators of other district energy systems we interviewed specifically mentioned the installation of energy meters to minimize the costs of operating their systems. Some of these operators said they installed meters at individual buildings served by their systems and are considering installing or have already installed submeters where appropriate. In addition, between 2008 and 2013, AOC commissioned energy audits of most of the buildings served by CPP. Energy audits involve examining a building’s physical features and utility history to identify conservation opportunities.
AOC officials told us they engaged an engineering company to complete energy audits of the buildings operated by AOC, including the Supreme Court and Thurgood Marshall buildings, at a cost of $5 million. The audits produced estimates of the implementation cost, maximum energy and cost-savings potential, and payoff period for energy conservation measures in all of the audited buildings. For the 16 largest buildings in the complex administered by AOC, these audits recommended several hundred conservation measures that could result in substantial energy savings. Most of the potential savings could stem from upgrades to heating and cooling systems. Three buildings—the Capitol, Madison Building, and Rayburn House Office Building—account for 52 percent of the potential energy savings from measures recommended by the contractor. Over one-third of the potential energy savings from these recommended measures involve the Library of Congress buildings, with the Madison Building—home of one of the Library’s largest data centers—accounting for the greatest number of recommendations and the highest potential energy savings. For example, the audits estimated that fully replacing heating, ventilation, and air conditioning (HVAC) control systems in the Madison Building could reduce the building’s cooling needs by half, and this project accounted for 18 percent of all potential energy savings from the recommended measures. The contractor estimated that independently implementing all of its recommended measures could cost $115 million and that each measure would eventually result in dollar savings, with the payoff period varying for the different individual measures. As described below, AOC implemented some measures and intends to implement others as resources allow. AOC officials subsequently evaluated the energy audits based on factors such as cost-effectiveness and execution difficulty and approved some measures for implementation.
AOC staff and contractors have already implemented some of the measures. For example, AOC staff repaired and optimized some existing HVAC systems. AOC also hired contractors to improve the energy efficiency of the Capitol and House and Senate office buildings through conservation measures. To finance these measures, AOC repays the contractors from avoided costs. Under Energy Savings Performance Contracts (ESPC), federal agencies enter into contracts—up to 25 years—with a private company in which the company incurs the costs of financing and installing energy efficiency improvements in exchange for a share of any savings resulting from the improvements. Table 3 describes the energy conservation measures installed under these contracts, including air handling unit replacement; HVAC systems and controls upgrades; HVAC testing, adjusting, and balancing; lighting retrofits and controls; energy-efficient lighting upgrades; steam trap maintenance and replacement; insulation of steam system components; water conservation and fixture upgrades; and transformer upgrades. During the contract term, agencies typically continue to budget and request appropriations for energy-related operations and maintenance based on their baseline energy needs prior to implementation of the improvements. Agencies repay the company for the costs—such as initial construction and installation costs, and the company’s borrowing costs and profit—from appropriations using the savings generated by the improvements. The federal statute authorizing federal agencies to enter into ESPCs states that the aggregate annual payments may not exceed the amount the agency would have paid for utilities without an ESPC. At the end of the contract, payments to the company cease and the energy savings may allow agencies to reduce their energy-related expenses. Figure 4 illustrates the potential effect of an ESPC on an agency’s cash flows.
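The ESPC repayment mechanics described above can be sketched in a few lines. All dollar amounts, the contract term, and the function name below are hypothetical illustrations rather than figures from AOC's actual contracts; the one constraint taken from the statute is that annual payments to the contractor may not exceed the agency's avoided utility costs.

```python
# Simplified sketch of ESPC cash flows. All figures are hypothetical.

def annual_net_savings(baseline_utility_cost, improved_utility_cost,
                       contractor_payment, year, contract_years):
    """Agency's net savings ($ millions) in a given year under an ESPC."""
    savings = baseline_utility_cost - improved_utility_cost
    if year <= contract_years:
        # Statutory cap: the payment cannot exceed avoided utility costs.
        assert contractor_payment <= savings
        return savings - contractor_payment
    # After the contract ends, payments cease and the agency
    # keeps the full savings.
    return savings

baseline = 10.0   # $ millions/year for utilities without improvements (hypothetical)
improved = 8.0    # $ millions/year for utilities with improvements (hypothetical)
payment = 1.8     # $ millions/year paid to the contractor (hypothetical)
term = 25         # contract length in years

during = annual_net_savings(baseline, improved, payment, year=5, contract_years=term)
after = annual_net_savings(baseline, improved, payment, year=26, contract_years=term)
print(f"Net savings during contract: ${during:.1f}M/yr; after contract: ${after:.1f}M/yr")
```

This mirrors the pattern in the report's fiscal year 2013 figures, where reported savings exceeded contractor payments and left the agency a modest net gain during the contract term.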
We reported in 2004 that although ESPCs provide an alternative funding mechanism for agencies’ energy-efficiency improvements, for the cases we examined at that time, such funding costs more than using upfront appropriations. This is because the federal government can obtain capital at a lower financing rate than private companies. We also reported in June 2005 that vigilance is needed to ensure agencies negotiate the best possible contract terms and that energy savings achieved will cover agencies’ costs. To date, AOC’s contractors report that energy and cost savings have exceeded the guaranteed amounts. In fiscal year 2013, they reported total savings of over $9.8 million. AOC made nearly $8 million in payments to the contractors in 2013, resulting in a net savings of approximately $1.7 million. In September 2012, one of AOC’s contractors refinanced an ESPC project at a projected savings to the agency of $19.8 million over the term of the project. For the entire complex, total steam and chilled-water consumption declined between 2010 and 2013, and adjusting the data to account for yearly changes in weather shows reductions in energy use, mostly from greater efficiency in producing chilled water. Because changes in weather affect the need for steam and chilled water, energy managers evaluate energy consumption against a measure of the average need for heating or cooling services. Cooling and heating degree days measure the number of days with outdoor temperatures above or below, respectively, 65 degrees Fahrenheit and the amount above or below that temperature. For example, a cooling degree day value of 10 indicates that the average temperature for the day was 75 degrees. AOC’s annual energy consumption of chilled water per cooling degree day fell between fiscal years 2010 and 2013, which shows that consumption of chilled water (i.e., cooling) decreased more than would be expected simply due to lower temperatures. 
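The degree-day normalization described above can be illustrated with a short sketch. The base temperature (65 degrees Fahrenheit) and the one-day example come from the report; the yearly consumption figures and their units are hypothetical, chosen only to show how normalizing separates efficiency gains from milder weather.

```python
# Degree-day normalization sketch. Base temperature per the report;
# yearly consumption figures are hypothetical.

BASE_TEMP_F = 65  # base temperature for degree-day calculations

def cooling_degree_days(daily_avg_temps):
    """Sum of degrees by which each day's average temperature exceeds 65 F."""
    return sum(max(t - BASE_TEMP_F, 0) for t in daily_avg_temps)

def heating_degree_days(daily_avg_temps):
    """Sum of degrees by which each day's average temperature falls below 65 F."""
    return sum(max(BASE_TEMP_F - t, 0) for t in daily_avg_temps)

# As in the report's example, a day averaging 75 F contributes
# 10 cooling degree days.
assert cooling_degree_days([75]) == 10

# Hypothetical annual chilled-water consumption (million ton-hours)
# and cooling degree days (CDD).
years = {
    2010: {"chilled_water": 52.0, "cdd": 1900},
    2013: {"chilled_water": 44.0, "cdd": 1750},
}

# Dividing consumption by CDDs shows whether cooling use fell more than
# would be expected from weather alone.
for year, data in sorted(years.items()):
    per_cdd = data["chilled_water"] / data["cdd"]
    print(f"{year}: {per_cdd:.4f} million ton-hours per cooling degree day")
```

In these hypothetical numbers, consumption per cooling degree day declines between the two years, which is the pattern the report describes for AOC's chilled-water use between fiscal years 2010 and 2013.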
AOC’s steam consumption per heating degree day during this period fluctuated. Figure 5 shows AOC’s annual steam and chilled-water consumption per heating and cooling degree days. AOC incurs regularly occurring costs as well as capital costs to operate and maintain CPP. AOC’s regularly occurring costs to operate CPP, which include, among other things, the fuels and electricity to power the plant’s generating equipment and the personnel to operate and maintain them, rose from fiscal year 2009 to fiscal year 2011 and then fell between fiscal years 2012 and 2014. AOC’s costs (expressed as total obligations) to operate CPP were about $59 million in fiscal year 2009, rose to about $69 million in fiscal year 2011, and then fell to about $63 million by fiscal year 2014 (see table 4). From fiscal year 2009 to fiscal year 2014, fuel and electricity accounted for about 46 percent of the costs to operate CPP (in 2015 dollars). AOC’s total obligations on fuel and electricity for CPP rose from about $32 million in fiscal year 2009 to a high of $33 million in fiscal year 2010, before declining in the subsequent years to about $24 million in fiscal year 2014. Changes in a variety of factors can affect CPP’s costs, including fuel and electricity costs, staffing levels, maintenance needs, efficiency in using fuels, and consumption patterns. As shown above, costs for individual line items have varied over time. While AOC has implemented some conservation measures, AOC has additional opportunities to manage its energy-related costs. AOC’s past energy audits identified several hundred additional measures that could further reduce energy consumption in the complex and related costs and are expected to pay for themselves. Of these, AOC has selected some measures it intends to implement when resources become available (see table 5). These include upgrades to building lighting, plumbing, and mechanical systems throughout the complex.
For example, such upgrades could include (1) replacing inefficient light fixtures with modern, more-efficient fixtures with occupancy sensors, (2) replacing older inefficient plumbing fixtures with low-flow fixtures with automatic sensors, or (3) replacing pneumatic air-handling controls with more modern, digital controls. The measures AOC selected with the largest projected energy reductions include upgrades to the Library of Congress buildings. AOC officials said they are considering entering into an ESPC for these buildings that would include improvements to lighting and HVAC systems, and infrastructure upgrades to the data center in the Madison Building. Based on a 2009 long-term plan and subsequent partial updates, AOC decided that it should install a cogeneration system to replace aging boilers, meet future demand for steam, and produce electricity. AOC officials said that since upfront appropriations would not likely be available to procure the cogeneration system, they had decided to finance the project. AOC’s iterative planning did not follow key leading practices we identified for federal capital planning. AOC officials said they were unaware of the relevant guidance we cited on leading practices and did not provide documents to support their claims that the agency needed to move quickly to execute a contract for the proposed cogeneration system. In 2009, AOC issued a long-term energy plan that concluded the agency should install a cogeneration system to replace aging boilers, meet future demand for steam, produce electricity, and serve other agency objectives. AOC continued to justify the need to pursue cogeneration in subsequent partial updates to the plan. Cogeneration, also known as combined heat and power, involves the simultaneous production of electricity and heat from a single fuel source, such as natural gas. 
AOC has proposed a cogeneration system that would use a natural gas combustion turbine to generate electricity and a recovery unit that would use excess heat from the turbine’s exhaust stream to heat water and create steam (see fig. 6). AOC officials stated the cogeneration system, despite initial costs that are significantly higher than other alternatives, will provide needed steam and save money over time by producing electricity to power its chillers—thereby avoiding or decreasing the costs of purchasing electricity. In addition, cogeneration systems can produce excess electricity that can be sold to local utilities, thereby generating income that helps offset the cost of the system. AOC’s 2009 long-term energy plan included a forecast showing that demand for steam would grow and exceed the plant’s capacity to generate steam by fiscal year 2016. To address this projected gap in capacity, the 2009 plan assessed nearly 30 capital alternatives for installing new steam-generating equipment, including natural-gas-powered boilers, a cogeneration system, or nuclear capabilities. The 2009 plan evaluated the capital alternatives using several criteria, including total life cycle costs, initial construction costs, air pollution emissions, energy efficiency, and security. AOC’s 2009 plan recommended that AOC continue to operate CPP as a district energy system to provide heating, and in that context, the best options based on life cycle costs and environmental impacts would involve a new cogeneration system or the use of synthetic coal. Ultimately, citing concerns about the cost and availability of synthetic coal as well as environmental concerns, the plan recommended that AOC procure a cogeneration system. Specifically, the 2009 long-term plan recommended that AOC purchase a cogeneration system comprising one 7.5-megawatt cogeneration combustion turbine, which would represent the first of a three-phase plan.
The 2009 plan also called for the installation (in two subsequent phases) of five natural gas boilers along with two other combustion turbines—another 7.5-megawatt turbine and a 15-megawatt turbine—and the equipment needed to distribute electricity throughout the complex. The 2009 plan assumed the first combustion turbine would serve only CPP, but that the later installation of the additional turbines would enable AOC to distribute electricity throughout the complex and potentially allow for selling excess electricity to the local utility. The estimated construction cost for the project was $120 million over its three phases. AOC officials said the construction costs in the 2009 plan were estimated through a benchmarking analysis and did not reflect an actual bid from a vendor. AOC engaged the National Academies’ National Research Council (NRC) to review a draft of its 2009 long-term energy plan. In response to AOC’s request, the NRC organized an expert panel that identified several shortcomings in the draft plan, including that the energy demand projections were not supported by firm data and did not account for mandates to reduce energy consumption. In the final version of the 2009 plan, AOC states it addressed NRC’s concerns and accounted for both increased utility demand from building renovations and reductions in demand due to the energy reduction mandates. AOC subsequently developed the design of the cogeneration project throughout 2012 and 2013. AOC formally proposed the project during its fiscal year 2012 appropriations hearings. In 2012, AOC also received two consultant-authored reports assessing the feasibility of the system. These reports included an analysis that concluded that the value of a cogeneration system, which AOC officials said represented the first two phases of the 2009 long-term plan, was highly dependent on the price at which AOC could sell the excess electricity generated by the system.
Throughout 2013, AOC worked with a vendor to further develop the design of a cogeneration system representing the first two phases of the 2009 plan. In November 2013, AOC officials stated that the project’s initial construction-related costs would total roughly $67 million. The vendor ultimately provided a bid in late 2013 that put the total project cost $100 million over AOC’s estimate. As a result, AOC initiated discussions with another vendor in January 2014. On two occasions in 2014, during the course of the audit work for this report, AOC provided GAO with draft plans that concluded a cogeneration system was still the preferred means of meeting steam demand. In July 2014, AOC provided GAO with a draft version of a partial update of the 2009 plan prepared by a consultant, titled Strategic Long Term Energy Plan Update: Draft Final Report, that concluded new steam-generating capacity was needed to replace two aging boilers and meet projected increased future demand for steam. The draft July 2014 partial update included an updated long-term forecast of demand and, unlike the 2009 plan, did not project a gap in steam capacity occurring in 2016. Instead, the draft recommended that AOC replace the capacity of two aging boilers to decrease CPP’s reliance on coal. The draft July 2014 partial update did not, however, describe the expected life of these boilers. Unlike the 2009 document, the draft July 2014 partial update was not comprehensive; it reviewed only the addition of new natural gas boilers or eight different configurations of a cogeneration system (which involved combining new gas boilers with the systems). When presenting the draft partial update to GAO in July 2014, AOC officials said that the agency had not accepted the update as final from the consultant and would likely ask the consultant to add information and make changes before doing so.
The draft July 2014 update recommended the option with the lowest life cycle costs: that AOC install a natural gas cogeneration system with two 5.7-megawatt turbines, as well as two natural gas boilers providing a total of 190,000 pounds of steam per hour. The draft July 2014 partial update said the electricity generated by the cogeneration system would only be used within CPP and would not serve the rest of the complex or be sold to a utility; CPP does not have the infrastructure to provide electricity to the complex. Because of the low demand for electricity at CPP during winter months—due to relatively low chiller use—the plant would idle one of the two 5.7-megawatt units during peak winter conditions. In the draft July 2014 partial update, AOC’s consultant estimated the initial construction-related costs for the project at $56 million. Later, in December 2014, AOC provided GAO with a draft plan, along with consultant-generated supporting documents, that assessed a choice between a cogeneration system and a single natural gas boiler. Unlike the 2009 long-term plan and the consultant’s draft July 2014 partial update, the December 2014 draft plan did not include updated long-term forecasts of demand for steam. Instead, the draft plan used one year of demand—calendar year 2013—as the basis for all future years. The December 2014 draft plan stated CPP needed to replace the steam-generating capacity of two of its oldest boilers, citing their age and increasing operations and maintenance costs, and recommended that AOC install a natural gas cogeneration system with a single 7.5-megawatt combustion turbine providing a maximum steam capacity of 100,000 pounds per hour. AOC officials stated this would fulfill the first phase of its 2009 long-term energy plan. The December 2014 draft plan stated the electricity generated by the cogeneration system would power CPP’s electric chillers and not serve the rest of the complex.
In contrast to the draft July 2014 update, the December 2014 draft plan stated that AOC would sell any excess electricity to the local utility. AOC officials said they expect to use up to 90 percent of the electricity generated by the proposed system to operate the plant’s chillers, thereby avoiding paying for the electricity from the local utility and justifying the system’s relatively large upfront investment (when compared to other alternatives). The agency plans to sell the excess 10 percent of electricity at rates to be determined by a future agreement with the local utility. AOC officials stated this could involve CPP’s becoming a facility qualified to sell electricity to the grid under the Public Utility Regulatory Policies Act (PURPA) of 1978. The officials said they used electricity rates for a qualified facility in the analysis supporting the December 2014 draft plan to use the most conservative approach. AOC officials said they are researching other arrangements for selling the excess electricity that could prove more economically favorable than as a qualified facility under PURPA. Table 6 summarizes some of the key attributes of the recommended options in AOC’s planning since 2009 for meeting future energy needs. AOC officials stated the cost estimates in the December 2014 draft plan reflected two independent cost estimates prepared by consultants and aligned with a bid received in November 2014 from the second vendor, a bid that was closer to the original project budget than the previous bid. AOC informed GAO in December 2014 that the agency desired to execute a contract with the vendor and proceed with construction of the cogeneration system—consisting of one 7.5 MW combustion turbine as described in its December 2014 draft plan. AOC officials said they continued to negotiate the scope of the project, a negotiation that resulted in, among other things, a reduction in the interest rate for financing the project. 
In March 2015, GAO received updated calculations from AOC reflecting these changes. As of March 2015, AOC had obligated about $16 million on design, preliminary site work, and management of the project. AOC intends to procure the cogeneration system using a utility energy services contract (UESC)—an agreement, similar to ESPCs described previously, in which, in this case, a utility arranges financing to cover the upfront costs of an energy project that a federal agency then repays over the contract term from energy cost savings achieved by the project. Under the UESC, AOC would pay for financing costs, such as interest payments to the utility, in addition to repaying the initial capital costs of the cogeneration project (i.e., construction and other upfront costs) over the contract period (AOC used an analysis period of two years for construction and up to a 25-year contract period). According to our analysis of AOC’s updated data supporting its December 2014 draft plan, the agency would pay about $28 million more in nominal costs under the UESC than if the agency acquired the system using upfront appropriated funds: $16 million more in initial construction costs, due to additional UESC vendor overhead costs, and $12 million more in financing costs over the life of the contract. Under a typical UESC, repayments to the utility reflect the estimated cost savings from the project’s energy efficiency measures. However, under a UESC like AOC has proposed where the utility guarantees performance and not savings, the utility does not guarantee that the project will generate sufficient savings to pay for itself over time. Acquiring the system using an upfront appropriation would cost less than using a third party to finance the project over the proposed 27-year analysis period. However, AOC officials said that since upfront appropriations would likely not be available to procure the cogeneration system, they had decided to pursue the project using a UESC. 
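The arithmetic behind such a nominal-cost comparison can be sketched as follows. The principal amounts, financing rate, and repayment term below are illustrative assumptions chosen to roughly reproduce the magnitudes in the text, not AOC's actual contract terms.

```python
# Sketch (illustrative figures): nominal cost premium of third-party
# financing under a UESC versus an upfront appropriation.

def level_payment(principal, annual_rate, years):
    """Annual payment that amortizes `principal` over `years` at `annual_rate`."""
    return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

upfront_cost = 57.0     # $M, construction cost if funded by upfront appropriation
uesc_cost = 73.0        # $M, construction cost under a UESC (vendor overhead added)
rate, term = 0.015, 21  # assumed financing rate and repayment period

annual_payment = level_payment(uesc_cost, rate, term)
financing_premium = annual_payment * term - uesc_cost  # interest paid over the term
construction_premium = uesc_cost - upfront_cost

print(f"extra construction cost under UESC: ${construction_premium:.1f}M")
print(f"nominal financing cost over term:   ${financing_premium:.1f}M")
```

The two premiums sum to the extra nominal cost of financing the project with a third party rather than buying it outright.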
Because AOC planned to conduct the project without upfront appropriated funds, AOC officials stated they had not assessed the proposed cogeneration project using the agency’s capital-planning prioritization process, by which the agency ranks proposed capital projects and recommends those projects scoring the highest for funding through annual appropriations. As a result, AOC did not analyze the project and its merits relative to other projects using the agency’s pre-determined criteria for capital planning. AOC officials stated that the aforementioned ESPC projects did not go through the agency’s capital planning prioritization process for the same reason. AOC intends to use a UESC under an arrangement established by the General Services Administration (GSA) that could help facilitate the transaction but narrows the number of entities AOC can engage to complete the project. Through its UESC arrangement, GSA has established basic contract terms with select utility companies, and agencies using this arrangement contract with one of these providers. GSA has contracts with two providers in the Washington, D.C., area. While the selection of a UESC vendor is limited to two vendors, AOC officials said that this will not preclude competition as the selected UESC vendor will obtain competitive bids from subcontractors for the construction of the cogeneration system. Based on independent estimates and in alignment with the bid received in November 2014, AOC’s latest data show that a cogeneration system consisting of a 7.5 MW combustion turbine and funded by a UESC would have a total project cost of about $85 million. This includes about $57 million in initial construction-related costs (including contingency funds), another $4 million in agency project management costs, and about $24 million in financing costs. AOC’s data show the project’s life cycle costs as lower than other alternatives, such as a natural gas boiler procured using upfront appropriations. 
These data also show that the cogeneration system procured using a UESC, AOC’s intended course of action, would result in savings, when compared to a status quo option, of about $7.3 million over 27 years (in today’s dollars) due to producing the plant’s own electricity. AOC’s data show that the project would repay the UESC vendor in full for the capital and financing costs in 21 years (after the completion of construction and once payments had begun). By comparison, AOC’s data show that a cogeneration system procured with upfront appropriations would achieve savings in today’s dollars of $21.4 million over the analysis period when compared to the status quo option. Further, AOC’s data show a natural gas boiler procured with upfront appropriations for $9.3 million would achieve savings of about $2.7 million over the analysis period when compared to the status quo option. AOC’s calculations on life cycle costs did not reflect the nearly $16 million in funds already obligated for the project. AOC officials said they relied on the National Institute of Standards and Technology (NIST) handbook on life cycle costing for federal energy management programs. AOC officials noted the handbook instructs federal agencies to not include sunk costs when estimating a project’s life cycle costs. Our analysis of AOC’s data suggests that the agency could have procured a natural gas boiler providing the same amount of steam for less than the $16 million the agency has already obligated for the cogeneration project. AOC’s data show a cost of about $9.3 million for procuring such a boiler. AOC officials said they would have had to also obligate funds to prepare the plant for a new boiler, but they did not identify the amount of funds this would have required.
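A life cycle cost comparison of this kind, with sunk costs excluded per the NIST convention the officials cited, can be sketched as follows. The discount rate and annual cash flows are illustrative assumptions, not AOC's figures.

```python
# Sketch: ranking alternatives by discounted life cycle cost ($M, today's
# dollars). Sunk costs (e.g., funds already obligated for design) are
# excluded from every alternative, per NIST's life-cycle-costing
# convention, so they cannot change the ranking. All figures illustrative.

def present_value(capital, annual_cost, discount_rate, years):
    """Capital paid in year 0 plus discounted annual costs in years 1..years."""
    return capital + sum(annual_cost / (1 + discount_rate) ** t
                         for t in range(1, years + 1))

DISCOUNT, YEARS = 0.03, 27

alternatives = {
    # name: (year-0 capital, net annual operating cost)
    "status quo":           (0.0, 4.0),
    "gas boiler (upfront)": (9.3, 3.4),
    "cogen (upfront)":      (57.0, -0.3),  # electricity sales offset fuel costs
}

lcc = {name: present_value(cap, ann, DISCOUNT, YEARS)
       for name, (cap, ann) in alternatives.items()}
baseline = lcc["status quo"]
for name, cost in sorted(lcc.items(), key=lambda kv: kv[1]):
    print(f"{name}: LCC ${cost:.1f}M, savings vs status quo ${baseline - cost:.1f}M")
```

The comparison turns on whether a large upfront capital cost is repaid by lower (or negative) net operating costs over the analysis period.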
Key leading capital-planning practices and other federal guidance we identified state that agencies should, among other things, (1) update their plans in response to changes in their operating environment; (2) fully assess their needs and identify performance gaps; (3) assess a wide range of potential approaches—including non-capital approaches—for meeting those needs; (4) conduct valid sensitivity and uncertainty analyses to identify and quantify the riskiest cost drivers of proposed projects; and (5) engage independent experts when tackling complex issues. However, AOC’s planning that led the agency to pursue a cogeneration system did not follow these key leading practices. Leading organizations generally revise their decision-making process in response to a perception of changing needs or a changing environment. However, AOC did not update its 2009 long-term energy plan until late 2014, did so only partially, and has continued to use the 2009 plan to justify its decision to procure a cogeneration system. In the meantime, major changes have occurred in key assumptions affecting AOC’s plans, such as the price of natural gas and the complex’s demand for steam and chilled water. For example, in part due to increased supplies resulting from the boom in domestic shale gas extraction, prices for natural gas for commercial customers fell by about 20 percent between 2009 and 2012 (when AOC formally proposed the cogeneration project). Furthermore, since publishing its 2009 long-term plan, AOC completed energy audits of its buildings and implemented several energy conservation measures in the complex and reduced the complex’s demand for steam and chilled water. Despite these changes, AOC officials stated they did not believe it was necessary to fully update its 2009 long-term plan to implement the cogeneration system, which they consider to be a single energy conservation measure that addresses a need to replace aging boilers. 
The officials stated they updated the factors that had changed since 2009 and could affect the choice between cogeneration and a natural gas boiler. AOC officials also told us they recognized the importance of fully updating the agency’s long-term energy plan and stated they plan to do so later in fiscal year 2015 after they have made a decision on implementing the proposed cogeneration system. However, by not fully updating its 2009 long-term plan, AOC has continued to pursue a cogeneration system without up-to-date information on a variety of factors, such as the changes in the natural gas markets and the realized impacts of AOC’s demand reduction efforts, that could change the relative merits of the full range of alternatives available to AOC for meeting its long-term needs. Select operators of other district energy systems we spoke with stated they regularly conduct planning efforts to identify the needs of their systems and alternatives to address them. For example, one operator said that although it prepares a strategic plan every 5 years, the operator also updates demand forecasts and conducts other planning as part of its annual budgeting process. AOC did not fully assess its long-term steam needs or identify the performance gap the cogeneration project would address. Leading practices and federal guidance, including the Office of Management and Budget’s (OMB’s) Supplement to OMB Circular A-11 and GAO’s Leading Practices in Capital Decision-Making, state that agencies should comprehensively assess what they need to meet their goals and objectives, identify any gaps between current and needed capabilities (i.e., performance gaps), and explain how a capital project helps the agency address those gaps and meet its goals. However, AOC’s December 2014 draft plan—which the agency has used to justify the current cogeneration project—has not comprehensively assessed the agency’s needs or identified potential performance gaps.
Without fully assessing its needs, the agency risks committing to a project that does not fully meet its long-term needs and thereby does not provide the agency with the most efficient use of its funds. Specifically, AOC’s December 2014 draft plan did not forecast the future demand for CPP’s heating and cooling services and instead assumed 2013 levels of demand would continue over the 27-year contract for the cogeneration system. The agency’s 2009 long-term plan included long- term forecasts of steam and chilled water demand showing that future demand for steam would exceed current capabilities. However, the forecast for the 2009 long-term plan is outdated as it does not reflect the realized effects of AOC’s demand management efforts. AOC included long-term forecasts of steam and chilled water demand in its draft July 2014 partial update, but AOC did not finalize it. In addition, the demand forecasts in the 2009 long-term plan and its draft July 2014 partial update may have overstated future needs as they did not fully consider the impact of AOC’s completed and ongoing energy conservation measures and only included factors that would increase overall demand for steam. AOC’s 2009 long-term plan and draft July 2014 partial update assumed demand for steam and chilled water would increase due to future building renovations that would either increase the amount of building space served by CPP or increase the amount of outside air it heats or cools and circulates through buildings. In the 2009 long-term plan, AOC assumed energy reduction efforts would offset these increases. As described above, AOC’s chilled water use has fallen since that time and its steam use has fluctuated. The draft July 2014 partial update specifically states that it did not consider reductions in energy use. 
The absence of steam demand forecasts in the December 2014 draft plan (1) disregards prior forecasts that are either outdated or were not finalized, (2) ignores the possibility of future changes in demand, and (3) raises questions about the purpose and sizing of the proposed cogeneration system and how it will meet future needs. In explaining why it did not forecast long-term demand for the CPP’s services, AOC officials said new steam-generating capacity was needed—regardless of potential changes in the long-term demand for steam—to decrease the plant’s reliance on two of its older boilers at the end of their service life. AOC’s December 2014 draft plan stated that doing so would thereby allow AOC to avoid the increased maintenance costs associated with operating the boilers infrequently. AOC officials stated that the December 2014 draft plan was intended to compare installing one natural gas boiler with installing one cogeneration system and re-validate the 2009 long-term plan’s recommendation, rather than re-evaluate all long-term technical options for meeting steam demand—thereby making it inappropriate to include a long-term forecast of demand. Furthermore, the AOC officials stated that expected future demand that reflects reductions due to AOC’s conservation measures would not reduce demand to anywhere near the point where a boiler replacement is not needed. However, AOC’s December 2014 draft plan that it is using to justify the need and scope of the cogeneration project does not include any such forecasts to support these statements. AOC officials stated the two coal boilers needing replacement are nearly 60 years old and are showing signs of wear. The officials stated the boilers still operate but are unreliable and suffer frequent breakdowns requiring emergency repairs. However, AOC has not provided documents that support these statements.
AOC estimated that renovating the boilers, including the addition of currently lacking air-pollution controls, could cost up to $10 million per boiler. However, reports on the condition of the boilers provided by AOC, as well as the agency’s aforementioned planning documents, did not estimate the expected remaining life of the boilers—thereby not assessing whether a performance gap exists and making it unclear how the cogeneration system will meet any long-term needs. Furthermore, AOC’s December 2014 draft plan did not make clear to what extent the proposed system would help AOC avoid the increased maintenance costs associated with continued operation and maintenance of the two older boilers which can operate on coal. AOC officials said in February 2015 that once it had installed the cogeneration system, CPP would keep at least one of the two boilers in reserve to meet peak steam demand. The officials added that the cogeneration system would allow CPP to operate these older boilers on natural gas instead of coal. However, later in its technical comments, AOC noted that CPP would maintain only one of the older boilers for occasional use (decommissioning the other once the cogeneration system is operational). Therefore, AOC will continue to incur maintenance costs associated with continued use of at least one of the two older boilers. AOC’s December 2014 draft plan stated the proposed cogeneration system would enhance the agency’s ability to meet its environmental objectives but stated the system is not needed to meet current EPA emissions standards for hazardous air pollutants. The plan stated CPP can meet promulgated rules limiting emissions of hazardous air pollutants (HAP) from industrial, commercial, and institutional boilers without installing the cogeneration system. 
Although the cogeneration system would likely increase emissions of certain air pollutants from CPP due to the increased use of natural gas, AOC’s draft plan estimated the system would result in lower regional emissions overall. The electricity generated by the cogeneration system using natural gas would result in relatively fewer emissions than the equivalent amount of electricity purchased from the local utility, which delivers electricity produced predominantly from coal. The December 2014 draft plan states a cogeneration system would result in 14 fewer metric tons of regional HAPs annually, or 18 percent less than a new natural gas boiler providing the same amount of steam. AOC’s draft plan estimates that the cogeneration system will result in lower regional greenhouse gas emissions, although federal regulations for limiting such emissions have not yet taken effect. AOC’s December 2014 draft plan stated a cogeneration system would result in about 15,000 fewer metric tons of regional carbon dioxide emissions per year—7 percent less than a new natural gas-powered boiler, an amount that AOC stated is the equivalent of removing nearly 3,200 vehicles from local roadways each year. Furthermore, the December 2014 draft plan stated meeting the agency’s energy reduction goals did not depend on the cogeneration project. In the plan, AOC stated that “due in large part to the results achieved through the ESPCs and other energy reduction activities, AOC will not require cogeneration to meet the EISA or EPAct requirements at this time.” However, AOC officials said that if Congress renews EISA or EPAct and additional annual energy reduction goals are set for federal agencies, cogeneration may again become key in future AOC energy reduction efforts. AOC’s plans have only considered capital options for meeting its heating needs, and its December 2014 draft plan did not evaluate a range of alternatives. 
Federal leading planning practices state that capital plans should consider a wide range of alternatives for meeting agency needs, including non-capital alternatives, and evaluate them based on established criteria. GAO’s Executive Guide: Leading Practices in Capital Decision-Making states that managers and decision-makers in successful organizations consider alternatives to investing in new capital projects. Without considering a wide range of options, including non-capital options, AOC may choose a more expensive alternative for meeting its needs. Specifically, AOC’s 2009 plan broadly considered capital alternatives for meeting long-term demand for steam, such as nuclear or geothermal power generation, but did not assess non-capital alternatives for meeting the agency’s objectives, such as implementing operational changes or conservation measures to decrease consumption in the buildings served by CPP. GAO’s capital decision-making guide calls for managers to consider non-capital approaches among the alternatives for meeting an agency need, but AOC’s plan did not explicitly examine such options. As a result, AOC may not have identified the most cost-effective means to heat and cool the complex. As we noted earlier, AOC’s 2014 planning documents assessed a narrower range of capital alternatives—adding a cogeneration system or new natural-gas powered boilers—to meet the demand for steam. AOC’s 2014 plans also envision smaller cogeneration systems that represent a significantly reduced scope from the 2009 plan, which recommended the installation of three turbines in phases to provide power to the entire complex. 
For example, the December 2014 draft plan recommends a single turbine system that provides electricity to CPP and not the complex. The 2014 plans also did not fully take into account the effects on future steam demand of AOC’s efforts to reduce the demand for steam through conservation measures in the buildings served by CPP, which may include operational changes or smaller capital investments. As described above, AOC has installed some conservation measures in the Capitol and House and Senate office buildings and has identified many additional measures that it could implement in the future. The July 2014 plan ignores energy savings from these measures, while the December plan used demand data from 2013 without adjustments for measures implemented since then or in the future. AOC officials stated the agency’s latest plan was not meant to fully update the 2009 plan and thereby assess a broad range of alternatives for meeting the agency’s needs. AOC officials stated that the 2014 plan was for replacing current equipment and is consistent with implementing the first phase of the 2009 plan. AOC officials stated they did not believe it was necessary to fully update the 2009 plan to implement a single energy conservation measure that replaces aging boilers—the cogeneration system. AOC officials added that they intend in fiscal year 2015 to fully update the 2009 long-term plan, after the agency has made a decision on implementing the proposed cogeneration project. By considering only a narrow range of alternatives, not accounting for the agency’s ongoing efforts to reduce its steam demand, and not fully updating the long-term plan before undertaking a costly and risky project, AOC may be selecting a capital alternative that is not scaled to meet the agency’s long-term needs and therefore could cost more than necessary. AOC did not perform valid sensitivity or uncertainty analyses when assessing the cogeneration system and available alternatives for meeting the agency’s long-term demand for steam.
The GAO Cost Estimating Guide calls for agencies, when considering capital projects, to conduct both sensitivity and uncertainty analyses to identify and quantify the cost drivers that pose the most risk of increasing project costs beyond expectations. Sensitivity analysis shows how changes in a key assumption affect the expected cost of a program or project, while holding all other assumptions constant. Uncertainty analysis captures the cumulative effect of various risks on the expected cost of a project by changing many assumptions at the same time. Such information can inform managers about whether their preferred choice remains superior among a group of alternatives. In the case of the proposed cogeneration project, the absence of valid sensitivity and uncertainty analyses makes it unclear whether the project will generate sufficient savings to cover its costs under a range of future conditions—raising questions about whether the project is more cost-effective than other alternatives. Furthermore, should AOC’s projections about the project’s expected savings prove inaccurate, Congress would likely need to appropriate more funds to cover a portion of AOC’s costs to own and operate the system—including the financing costs to be paid to the UESC vendor. Specifically, in its December 2014 draft plan, AOC did not vary a key cost driver when it performed a sensitivity analysis on the expected life cycle costs of the alternatives it considered. When conducting sensitivity analyses, the Cost Estimating Guide calls for agencies to vary the key cost drivers of a project’s life cycle costs, particularly those that are most likely to change over time. The expected life cycle costs of operating either a cogeneration system or a natural gas boiler depend, in part, on the demand for heating and cooling over time.
However, as noted above, AOC did not vary demand for heating and cooling in its December 2014 draft plan and instead assumed 2013 levels throughout the forecast period. The Cost Estimating Guide also states that valid sensitivity analyses vary assumptions about key cost drivers in ways that are well-documented, traceable, and based on historical data or another valid basis. However, neither AOC nor a laboratory it engaged presented rationales for their variations of forecasted natural gas and electricity prices from the expected case. In its December 2014 draft plan, AOC varied its assumptions by applying a subjective 25 percent change over the 27-year forecast period; the plan provided no rationale for using 25 percent. In a separate analysis accompanying the December 2014 draft plan, a Department of Energy (DOE) laboratory engaged by AOC presented results of a sensitivity analysis that varied the initial values of natural gas and electricity prices. The laboratory varied these starting values within a range based on the author’s professional judgment rather than empirical evidence. Furthermore, although the Cost Estimating Guide states that sensitivity analyses should test whether the ranking of considered alternatives changes along with key assumptions, the laboratory’s analysis did not assess the impact of varying natural gas and electricity prices on the other alternative AOC considered—a natural gas boiler. AOC officials stated the laboratory is an acknowledged expert charged with administration of the federal government’s energy management program. Furthermore, in its December 2014 draft plan AOC relied on DOE forecasts of natural gas and electricity prices in its expected case, but AOC did not use DOE forecasts in its sensitivity analysis.
Instead, the agency chose to vary the prices by 25 percent as discussed above. Using AOC’s 25 percent adjustment, instead of available DOE forecasts, to vary future natural gas and electricity prices raises questions about whether the project remains superior to other options under a range of possible outcomes. Specifically, in the Energy Information Administration’s Annual Energy Outlook 2014, DOE created numerous forecasts of natural gas and electricity prices to represent a range of possible future scenarios. When using several of these DOE forecasts, we found the expected savings of the proposed cogeneration project, when compared to other alternatives, changed significantly. Specifically, in AOC’s expected case the project financed using a UESC saves about $4.6 million more over the 27-year period than a boiler acquired with upfront appropriations. Using a DOE scenario where natural gas is more plentiful and prices are lower than in the expected case, however, the cogeneration project becomes less advantageous—saving $1.9 million more than a boiler. Conversely, using a DOE forecast where natural gas is relatively less available and prices are higher over time, the savings of the cogeneration project increase slightly to $5.0 million more than a boiler. In addition to a sensitivity analysis, the Cost Estimating Guide calls for agencies to perform an uncertainty analysis to capture the cumulative effect of various risks on the expected cost of a project. In an uncertainty analysis, project costs are expressed as a range of possible costs associated with a specified probability, known as a confidence interval. Unlike sensitivity analysis, an uncertainty analysis looks at the effects of changing many assumptions at the same time. This involves, among other things, identifying key project cost drivers, modeling various types of uncertainty associated with those cost drivers, and using a simulation method such as a Monte Carlo analysis.
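The effect of substituting alternative fuel-price forecasts, as with the DOE scenarios discussed above, can be sketched as follows. The scenario parameters, the upfront cost premium, and the savings trajectories are invented for illustration; they are not the Annual Energy Outlook figures or AOC’s estimates.

```python
# Illustrative scenario analysis: compare discounted savings of a
# higher-upfront-cost option (e.g., cogeneration) relative to a cheaper
# alternative (e.g., a boiler) under several fuel-price forecasts.
# All parameters are hypothetical.

def discounted_savings(annual_saving_start, growth, years=27, discount_rate=0.03):
    """Sum of discounted annual savings, where the annual saving follows
    the assumed fuel-price trajectory (growth per year)."""
    total = 0.0
    for t in range(1, years + 1):
        saving_t = annual_saving_start * (1 + growth) ** (t - 1)
        total += saving_t / (1 + discount_rate) ** t
    return total

upfront_premium = 20e6  # hypothetical extra initial cost of the pricier option

scenarios = {
    "low gas prices":  (1.2e6, 0.00),  # smaller, flat annual savings
    "reference":       (1.5e6, 0.02),
    "high gas prices": (1.8e6, 0.03),
}

for name, (start, growth) in scenarios.items():
    net = discounted_savings(start, growth) - upfront_premium
    print(f"{name}: net savings ${net/1e6:.1f}M")
```

In a sketch like this, the net figure can vary several-fold across plausible price paths—exactly the kind of spread a decision maker would want to see before committing to the option with the higher upfront cost.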
AOC performed an uncertainty analysis on the expected initial construction cost of the project, but did not perform a similar analysis for the life cycle costs of the options it considered. AOC developed an uncertainty analysis on the cogeneration project’s initial construction cost using a Monte Carlo simulation, and agency officials stated this helped them assess the risks that could cause the initial cost of constructing the cogeneration system to exceed the expected level. AOC officials also stated the analysis allowed them to calculate a confidence interval around the expected initial construction cost and therefore budget an appropriate amount of contingency funds. However, AOC did not present its estimates of the project’s savings, derived from its life cycle cost analysis, as a range of possible costs based on a specified probability. Instead, AOC presented a point estimate of the project’s life cycle cost without a confidence interval quantifying the degree of uncertainty. AOC officials said they did not believe an uncertainty analysis was required, based on their understanding of NIST’s handbook on life cycle costs, which states that uncertainty assessment is more complex and time consuming than sensitivity analysis and that the decision to perform one therefore depends on an agency’s judgment of a variety of factors, including the relative size of the project, availability of data, and availability of resources such as time, money, and expertise. However, the estimated life cycle cost of the project depends, in part, on the forecasted prices for key inputs like natural gas and electricity that have historically been highly variable. Without a credible uncertainty analysis, AOC has not presented information on which cost drivers pose the most risk to the project’s life cycle cost.
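A life-cycle-cost uncertainty analysis of the kind the Cost Estimating Guide calls for can be sketched with a simple Monte Carlo simulation. The distributions, parameter ranges, and discount rate below are invented for illustration; a real analysis would derive them from historical data and project-specific risk assessments.

```python
# Minimal Monte Carlo sketch: vary several cost drivers at the same time
# and report a confidence interval around the life-cycle cost.
# All distributions and figures are hypothetical.
import random
import statistics

random.seed(42)

YEARS, RATE = 27, 0.03

def one_draw():
    """Draw one possible future by sampling several cost drivers at once."""
    capital = random.triangular(28e6, 36e6, 30e6)   # construction cost
    fuel0 = random.triangular(1.6e6, 2.6e6, 2.0e6)  # year-1 fuel cost
    growth = random.uniform(-0.01, 0.04)            # fuel price trend
    om = random.triangular(0.4e6, 0.7e6, 0.5e6)     # annual O&M cost
    lcc = capital
    for t in range(1, YEARS + 1):
        lcc += (fuel0 * (1 + growth) ** (t - 1) + om) / (1 + RATE) ** t
    return lcc

draws = sorted(one_draw() for _ in range(10_000))
point = statistics.median(draws)
lo, hi = draws[int(0.10 * len(draws))], draws[int(0.90 * len(draws))]
print(f"median life-cycle cost: ${point/1e6:.1f}M")
print(f"80% confidence interval: ${lo/1e6:.1f}M to ${hi/1e6:.1f}M")
```

Unlike a one-factor sensitivity run, this reports the life-cycle cost as a range tied to a stated probability, rather than a single point estimate.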
In addition to the capital planning guidance we cite above, our prior work recommends that federal agencies use independent panels of experts to conduct comprehensive, objective reviews of complex issues, such as those facing AOC. As mentioned above, AOC engaged the National Academies’ National Research Council (NRC) to review a draft of its 2009 long-term energy plan, and the final version of the 2009 plan stated that it addressed NRC’s recommendations. However, unlike its 2009 plan, AOC has not engaged an independent panel like the NRC to review the subsequent iterations of its planning. AOC officials stated that they did not find it necessary to fully update its long-term plan before executing the contract for the cogeneration system, which the officials stated is a single energy conservation measure intended to replace aging boilers. However, the cogeneration system is relatively complex when compared to available alternatives such as boiler replacement, and AOC has obligated about $16 million in design, preliminary site work, and management for the project—an amount that AOC’s data suggest could have procured a new natural gas boiler providing the same amount of steam. Using an independent panel to review AOC’s planning could have provided more assurance that AOC was positioning itself to cost-effectively meet its long-term energy needs. Since issuing its long-term energy plan in 2009, AOC has pursued an iterative planning approach without fully updating the long-term plan or following key leading practices. AOC officials said they were generally unaware of the applicability of the leading practices we cited.
AOC officials said they instead relied on other sources of federal guidance, such as NIST’s handbook on determining the life cycle costs of energy conservation projects and DOE’s guidance for using UESCs to finance such projects. This approach led them to believe that it was unnecessary to fully update the long-term energy plan before executing a contract for the cogeneration project, since its intent is to replace aging boilers. However, the guidance AOC cited generally applies after an agency has conducted a needs assessment and completed a capital-planning process using GAO, OMB, and other relevant guidance cited above. Thus, the guidance AOC officials said they followed does not substitute for first completing an up-to-date capital plan. Without following key leading capital practices, AOC’s planning could commit the agency to a project that does not fully and cost-effectively meet its needs—thereby not providing taxpayers or Congress with the most efficient use of funds at a time when the federal government faces significant financial challenges. In August 2014, we discussed with AOC shortcomings in its planning for the cogeneration project relative to leading practices and referred the agency to documents outlining these practices. AOC officials then provided the aforementioned set of planning documents in December 2014 that the agency stated were intended to address our concerns. AOC officials also provided several reasons why they needed to continue planning the project and quickly execute a contract. These included (1) that certain existing boilers were near the end of their useful life and that AOC might face challenges meeting demand for steam in the near future, and (2) that AOC needed to start construction soon or the Washington, D.C. government would retract the project’s construction and air quality permits. Our review did not identify valid support for these claims.
Reports on the condition of the boilers provided by AOC did not identify the remaining useful life of the two boilers in question. Additionally, AOC did not provide documents supporting its statement that the permits for the project were at risk; AOC officials told us they believed the planning steps the agency had taken would be sufficient to keep the permits in effect. AOC has implemented many measures to manage the costs of heating and cooling the Capitol Complex and has achieved measurable results. The agency has additional opportunities to manage these costs through conservation. AOC and its contractors have identified hundreds of additional energy conservation measures, and the agency intends to act on some of them when resources become available. Related to this, AOC’s planning to evaluate the relative merits of the currently proposed cogeneration project has not followed key leading practices identified in OMB, GAO, and other relevant capital-planning guidance. Specifically, AOC did not (1) fully update the agency’s 2009 long-term energy plan to reflect changes in energy costs and demand that have occurred since the plan was issued; (2) fully assess long-term energy needs or the performance gap the project would address in light of changes in key variables that could affect its relative merits; (3) identify a full range of alternatives for meeting future needs, including non-capital or conservation measures; (4) conduct valid sensitivity or uncertainty analyses; or (5) engage an independent panel of experts to review AOC’s updates of its long-term plan. AOC officials said they were unaware of some of these leading practices and therefore did not follow them. AOC’s planning was insufficient for us to discern whether the cogeneration project would generate enough savings to cover its costs or prove more cost-effective than other options for meeting the agency’s needs.
Thus, without addressing the shortcomings listed above, AOC’s planning does not provide confidence that the proposed project will decrease the need for future energy-related appropriations. GAO is making two recommendations to the Architect of the Capitol. We recommend that the Architect of the Capitol, prior to undertaking future major capital projects related to its energy needs, fully update its long-term energy plan while following key leading capital-planning practices. As part of this effort, the agency should:

- fully assess the complex’s long-term needs and identify any performance gaps, while taking into account the effects of possible changes in demand—including the impacts of ongoing and planned energy conservation measures and other factors that could affect the demand for CPP’s services;

- identify and evaluate a range of alternatives for how to best meet the agency’s needs, including non-capital options and energy conservation measures that could reduce the demand for CPP’s services; and

- identify key assumptions and risks of the alternatives considered and perform valid sensitivity and uncertainty analyses to determine which alternatives could prove the most cost-effective under a range of potential future conditions.

As AOC updates its long-term energy plan, the Architect should seek a review of the plan by an independent panel of experts to ensure it follows key leading practices and provide the results of the review to Congress. We provided a draft of this report to the AOC for review and comment. In its written comments, included as appendix II, the Architect disagreed with our findings, conclusions, and recommendations.
However, AOC also said that the agency has effectively implemented our recommendations in a “manner sufficient to move forward with the planned cogeneration project.” As we discuss below, AOC provided two new reports focusing on the need to replace its oldest boilers and potential risks and costs associated with the proposed cogeneration project. We did not review these reports because AOC did not provide them or make us aware of them until after we had completed our work. We plan to review these studies in the future and discuss them with Congress. While these reports may expand on the justification for the cogeneration project, we continue to believe that AOC should first update its overall long-term strategic energy plan and evaluate a full range of alternatives for best meeting its needs prior to undertaking major energy projects in the future. We also acknowledge that AOC may need to replace certain steam-generating equipment, in part or in whole, at some point in the future. AOC also provided technical comments, which we addressed as appropriate in the report. In its written comments, AOC stated that contrary to our recommendations and assertions in the draft report, AOC adhered to key leading capital-planning practices based on its 2009 long-term energy plan, 2014 revalidation efforts, and additional documentation. AOC’s written comments contradict statements by AOC officials in April 2015 that they were not aware of the key leading capital-planning practices cited in our draft report. At that time, these officials said that AOC instead followed NIST guidance on performing life-cycle cost analyses for energy conservation projects and DOE guidance for financing energy projects using non-appropriated funds. Furthermore, the agency did not provide evidence that contradicted our finding about it not adhering to these practices during our review. 
We therefore maintain that we reached the correct conclusion about AOC’s adherence to key leading capital-planning practices. As part of our first recommendation, we said that AOC should fully assess the complex’s long-term needs and identify any performance gaps. As part of its written comments, AOC provided additional documentation that the agency said fully explains how the agency has already assessed these needs through preparing a justification for replacing the complex’s aging boilers. The documentation expands on its efforts to support the proposed cogeneration project, including a report on the condition of two of its oldest boilers and an updated sensitivity analysis comparing the long-term benefits of installing new boilers or a cogeneration system. We did not assess the validity of these documents because AOC did not provide them or make us aware of them until after we had sent the draft report for comment. Moreover, AOC did not use this information as part of the basis for selecting the current planned cogeneration project. We maintain that AOC should conduct such an analysis prior to making a decision about energy projects, rather than as part of efforts to validate decisions made in 2009 and 2014. Another part of our first recommendation said that AOC should identify and evaluate a range of alternatives for how to best meet the agency’s needs, and identify key assumptions and risks of the alternatives. Regarding identifying and evaluating a range of alternatives, including non-capital options and energy conservation measures, AOC said that it did so in 2009 and selected cogeneration to replace the aging boilers. AOC added that it updated key assumptions used in the 2009 plan in 2014 and further evaluated the two technically feasible options—natural gas boilers and cogeneration—in extensive detail, which AOC stated validated that cogeneration remained the best option.
We agree that the 2009 long-term energy plan broadly considered a range of alternatives for meeting the agency’s long-term energy needs, but the analysis conducted in 2014 focused solely on two options. From 2009 to the present, many factors have changed that could potentially lead AOC to reach a different, more cost-effective solution to meet any future performance gaps. For example, the costs of fuels, electricity, and labor have changed since 2009. In addition, the demand for AOC’s services has changed as the agency has pursued conservation and other energy-saving efforts. We therefore continue to believe that AOC should fully update its long-term energy plan, taking into account changes in key variables and the full range of options for how best to meet the agency’s needs, including non-capital options and energy conservation measures. The last part of our first recommendation said that AOC should identify key assumptions and risks and perform valid sensitivity and uncertainty analyses to identify cost-effective alternatives under a range of future scenarios. In its written comments, AOC said that it identified key assumptions and risks and subsequently performed valid sensitivity and uncertainty analyses. The Department of Energy’s National Renewable Energy Laboratory (NREL), as a third-party reviewer of the cogeneration validation effort, conducted a deterministic sensitivity analysis of the cogeneration project’s life-cycle cost, and AOC performed its own sensitivity analysis in its December 2014 draft plan. Our report identified shortcomings of these analyses, raising questions about their usefulness in identifying a cost-effective alternative. AOC also used a different third party to perform a probabilistic risk assessment of the project’s construction cost, which we acknowledged in our report.
In addition, AOC said the agency also used another third party to complete an additional probabilistic risk assessment of the project’s life-cycle cost in May 2015. We did not assess the validity of this analysis because AOC did not provide it to us until after we had sent the draft report for comment. While AOC has conducted some sensitivity and uncertainty analyses, it did so to support a decision made in 2009, rather than to evaluate alternatives in the context of a full update of its long-term energy plan. We, therefore, continue to believe that AOC should fully update its long-term energy plan and follow leading practices for analyzing alternatives in that context. Our second recommendation states that, as AOC updates its long-term energy plan, the Architect should seek an independent review of the plan by an expert panel to ensure it follows key leading practices and provide the results of the review to Congress. In its written comments, AOC stated that it had engaged an outside entity to review AOC’s 2014 effort to validate its choice to pursue a cogeneration project. However, a review of a partial update to a 2009 plan does not address our recommendation that AOC fully update its long-term energy plan and then seek outside review by an independent panel of experts, as it did in 2009. AOC’s written comments included additional details about its disagreement with our findings, conclusions, and recommendations, which we address in appendix II. We are sending copies of this report to the appropriate congressional committees, the Architect of the Capitol, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Frank Rusco at (202) 512-3841 or [email protected] or Lori Rectanus at (202) 512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our work for this report focused on the Architect of the Capitol’s (AOC) Capitol Power Plant (CPP) and actions taken by AOC to manage the costs of providing heating and cooling services to the complex. In particular, this report examines: (1) measures AOC implemented since GAO’s 2008 report to manage the energy-related costs of the buildings served by CPP and opportunities, if any, to further manage these costs, and (2) how AOC decided to procure a cogeneration system and the extent to which AOC followed leading capital-planning practices. To identify measures AOC has implemented since 2008 to manage energy-related costs, we examined AOC and CPP appropriations, obligations, and expenditures data from 2009 to 2013 to identify the costs incurred by AOC related to production, distribution, and consumption of heating, cooling, and electricity by the complex. We assessed the reliability of these data—for example, by reviewing related documentation and interviewing knowledgeable AOC budget and finance officials—and found them sufficiently reliable for our reporting purposes. We also reviewed relevant AOC reports and documents, and interviewed AOC and CPP officials. To identify measures AOC could potentially implement to further manage its energy-related costs, we reviewed AOC reports and other documents, such as energy audits of CPP’s steam and chilled water systems. We assessed the reliability of the data in these audits by reviewing related documentation and interviewing knowledgeable AOC officials and found these data sufficiently reliable for our reporting purposes. We also interviewed eight operators of other district energy systems to learn about measures they have implemented to manage costs, as well as the benefits and costs associated with those measures.
We identified these operators based on, among other things, our preliminary research and interviews with CPP staff and managers of other district energy systems; we selected the operators based on similarities to the CPP, such as whether the operators were located in climates similar to Washington, D.C. We selected eight operators: five in the Washington, D.C., area and three in the Boston, Massachusetts, area. Four of the operators are public entities and the remaining four are private, two of which are private universities (see table 7). The information collected during these interviews cannot be generalized to all district heating or cooling systems. To review AOC’s planning effort to further manage its energy-related costs, we reviewed AOC’s planning documents and recent updates, including (1) AOC’s 2009 Strategic Long-Term Energy Plan, (2) AOC’s draft Strategic Long-Term Energy Plan released in the summer of 2014, and (3) AOC’s draft Cogeneration at Capitol Power Plant Project Summary and accompanying consultant reports issued in December 2014. We identified four sources of federal guidance on capital planning and alternatives analysis and compared the guidance in those documents to AOC’s planning documents. We also interviewed AOC officials to discuss the agency’s planning documents and efforts. We conducted our work from December 2013 to September 2015 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. Comment 1: We agree that CPP has equipment that may need replacement, in part or in whole, at some point in the future.
However, AOC has not provided information on the likelihood of any such failures. After we provided our draft report to AOC for comment, the agency provided a new report justifying the replacement of some of its older boilers, dated July 17, 2015, that provides anecdotes about problems AOC has overcome in maintaining the boilers but does not quantify the operational or budget impacts of these problems or estimate the likelihood of a sudden failure of the boilers in the near future. Furthermore, AOC has not provided us with information—other than condition reports we reviewed finding that the boilers were in good to fair condition for their ages—supporting AOC’s claims that the boilers are effectively “on life support.” Comment 2: We agree that AOC should operate and maintain CPP with the goal of meeting peak steam demand. However, AOC has not quantified any negative effects that would occur if CPP had to meet peak steam demand while operating its boilers only on natural gas and experiencing a temporary boiler outage. Furthermore, as AOC has noted, the proposed cogeneration system would not provide enough steam to allow AOC to meet its peak steam demand without using one of the two older boilers it intends to replace. Therefore, AOC will continue to incur some of the increased costs associated with infrequent use of one of the two older boilers that the agency stated the cogeneration project was meant to address. Furthermore, it is not clear when the agency intends to fully replace the capacity of the two oldest boilers. We therefore continue to believe that AOC should fully update its long-term energy plan while following leading capital-planning practices to ensure the agency fully assesses its needs and finds the most cost-effective ways to meet them. Comment 3: We agree that AOC’s 2009 long-term energy plan assessed a broad range of technical options for providing heating and cooling to the complex.
However, given that many factors have changed that could potentially lead AOC to reach a different, more cost-effective solution to meet any future performance gaps, we continue to recommend that AOC fully update its long-term energy plan while following key leading capital-planning practices and seek an independent review of the plan and provide the results of this review to Congress. In its letter, AOC noted that the NRC committee that reviewed its 2009 plan stated that “electric generation (or Cogeneration) is the best long-term strategy for AOC to achieve its mission of reliable, cost-effective, efficient, and environmentally sound utility services.” However, we did not find this statement in the NRC committee’s 2009 report; instead, it is an AOC statement included in its final 2009 long-term energy plan. Comment 4: AOC sought to clarify the progression of its planning efforts, which we summarized in Table 6 in our report. However, it is unclear why AOC stated that we mischaracterized its July 2014 Strategic Long Term Energy Plan Update: Draft Final Report, which we described as a draft plan throughout our report. In August 2014, we discussed with AOC shortcomings in its planning for the cogeneration project relative to leading practices and referred the agency to documents outlining these practices. AOC officials later wrote that the agency addressed the presented shortcomings by completing the December 2014 draft plan and supporting documents, which called for a cogeneration system with a configuration that differed from the July 2014 draft plan. Comment 5: AOC stated that its 2014 revalidation addressed the key leading capital-planning practices we cited, but this revalidation focused on two technical options and did not, as called for in leading practices, fully assess the complex’s long-term needs and identify and evaluate a full range of options for best meeting those needs. 
We continue to maintain that, prior to undertaking major energy projects, AOC should fully update its 2009 long-term energy plan as called for in leading capital-planning practices, given that key factors have changed that could have changed the plan’s conclusions. Comment 6: AOC stated that it completed an evaluation and redeveloped its long-term steam demand forecasts to address the urgent need to replace its older coal-firing boilers. We did not assess the validity of this evaluation because AOC did not provide it, or make us aware of it, until after we had sent the draft report to the agency for its comments. This evaluation did not accompany the agency’s December 2014 draft plan, which AOC used to justify the need for and scope of the proposed cogeneration project. Comment 7: We agree that AOC reviewed a broad range of options for meeting its long-term needs in its 2009 long-term energy plan. However, AOC did not examine non-capital options in the 2009 plan—such as operational changes or conservation measures—and it is unclear how or when AOC assessed some of the capital or financing options it cited in its written comments. Since 2009, AOC has assessed two capital options—a cogeneration system or a natural gas boiler. From 2009 to the present, many factors have changed that could potentially lead AOC to reach a different, more cost-effective solution to meet its needs. Therefore, we continue to believe that AOC should identify and assess a wide range of options for meeting its needs in a full update of its long-term energy plan. Comment 8: We have not assessed AOC’s additional sensitivity analysis, as the agency provided it after we had completed our draft report.
We do not know the basis for AOC's statement that the group of energy conservation measures it identified would reduce the complex’s steam demand by 20 percent or the basis for the statement that the cost of the measures—including some or all of the costs of the Cannon House Office Building Renewal project—would exceed $2 billion. Comment 9: AOC disagreed with our statement that the agency did not update its 2009 long-term plan in response to changes in key assumptions, citing the analyses it performed in 2014 and 2015 on the life cycle costs of the proposed cogeneration system and an alternative of a natural gas boiler. However, AOC did not update the key assumptions in the context of a full update of its 2009 plan, which assessed a broad range of options for meeting the complex’s heating and cooling needs. AOC stated that it included updated assumptions in its spreadsheets on the life cycle costs of the proposed cogeneration project and a natural gas boiler alternative, and stated that we declined its offers to discuss these spreadsheets. However, we reviewed these spreadsheets containing AOC’s life cycle cost analyses and identified shortcomings that we describe in our report. Comment 10: AOC stated that it completed a probabilistic risk assessment in May 2015 that was consistent with GAO’s Cost Estimating Guide, which identifies some key leading capital-planning practices. However, AOC did not make us aware of or provide this assessment until after we had completed our review and prepared our draft report. Comment 11: AOC stated that the Department of Energy’s National Renewable Energy Laboratory (NREL) provided an independent review of its December 2014 draft plan, which compared the proposed cogeneration system to an alternative of a natural gas boiler. NREL’s review of a partial update to a 2009 plan, rather than a full update, does not address our recommendation. 
AOC needs to fully update its long-term energy plan and then seek outside review by an independent panel of experts, as it did in 2009. Comment 12: We agree that cogeneration can offer benefits in certain settings. However, given the significantly higher upfront costs of cogeneration when compared to alternatives like a natural gas boiler, it is important that the planning involved in selecting the technology over viable alternatives exhibit the aspects of key leading capital-planning practices we cited—such as fully assessing needs, assessing a range of alternatives, and using valid sensitivity and uncertainty analyses to identify key risks and confirm the superiority of a chosen option over its alternatives. To ensure that AOC’s choices for meeting its long-term energy needs result from planning that exhibits these leading practices, we continue to believe that AOC should fully update its long-term energy plan while following the key leading practices we cited. Comment 13: AOC stated that the construction permit for the proposed cogeneration project will expire in June 2016 and that fully implementing our recommendations would introduce a delay of approximately two years to either option for obtaining additional steam generating capacity. We maintain it is important for AOC to make the correct decisions about its capital and long-term energy needs through planning that follows key leading capital-planning practices, regardless of when any permits may expire for a particular project. Furthermore, AOC did not provide a basis for its claim that fully updating its long-term energy plan would cause a delay of an additional two years to either option for adding new steam generating capacity, and if AOC’s claim is accurate, then the agency should start the update as expeditiously as possible.
Therefore, we continue to recommend that AOC fully update its long-term energy plan while following leading capital-planning practices before undertaking future major capital projects related to its energy needs. Comment 14: We agree that AOC faces limits on its continued use of coal at CPP and on its emission of air pollutants, and we believe AOC should factor in such constraints in a full update of its long-term energy plan. Comment 15: AOC stated in its letter that our report suggested that capital-planning guidance is clear and leaves no room for misunderstanding or misinterpretation by agencies. During the course of our review, and after receiving a preview of our report’s findings, AOC officials said they were generally unaware of the applicability of the leading practices we cited. We identify in our report GAO’s prior work that recommends the use of independent panels by agencies when addressing complex issues such as those facing AOC, and as the agency itself used in 2009 to review its draft long-term energy plan. As part of fulfilling our recommendation that the agency fully update its long-term energy plan while following leading capital-planning practices, we continue to believe AOC should submit the plan for review by an independent panel of experts and submit the results to Congress. Comment 16: AOC did not assess the proposed cogeneration project using its capital planning prioritization process for projects to be funded with upfront appropriations, stating that it is the agency’s strategy to use a UESC to finance the proposed cogeneration project—thereby allowing AOC to request appropriations to fund other critical infrastructure projects for which AOC stated such alternative funding sources are not available. 
As we stated in our report, by not assessing the proposed project using the agency’s capital planning prioritization process, AOC did not analyze the project relative to other projects for which AOC was seeking appropriated funding using the agency’s pre-determined criteria for capital planning. Comment 17: We agree that, like the proposed cogeneration project, AOC would have incurred some pre-construction obligations for design and project management to replace the steam-generating capacity of one or both of its older coal-firing boilers with a natural gas boiler. AOC’s draft December 2014 plan shows that a natural gas boiler providing the same amount of steam as the proposed cogeneration system would cost approximately $9.3 million. It is not clear to what extent this estimate includes pre-construction obligations, which for the cogeneration project totaled about $16 million as of March 2015. Comment 18: We agree that CPP may not be able to maintain adequate capacity to meet peak demand should both older coal-firing boilers fail at the same time, but this does not change the need for AOC to fully assess its long-term energy needs and evaluate a range of alternatives for meeting them in the context of a full update of its long-term energy plan. Comment 19: AOC officials stated that appropriations would likely not be available for the cogeneration project and therefore selected a UESC to finance the project. Because the agency did not intend to use upfront appropriations to acquire the system, AOC did not assess the project using its capital planning prioritization process. As we reported, acquiring the system using a UESC results in more upfront costs and financing costs than if the agency used upfront appropriations. AOC stated that it discussed its funding challenges with GAO, but it is not GAO’s role to advise agencies as they seek funding for their proposed capital projects. 
Comment 20: AOC stated that its selection of the proposed cogeneration project and its revalidation efforts have followed key leading practices. However, as we state in our report and our response, we remain unconvinced that AOC’s planning followed key leading capital-planning practices, and therefore AOC has not demonstrated that the proposed cogeneration project will prove more cost-effective than other alternatives for meeting the agency’s needs. We therefore continue to recommend that AOC, prior to undertaking major energy projects, fully update its 2009 long-term energy plan while following key leading capital-planning practices, including fully assessing its energy needs; identifying and evaluating a range of alternatives for meeting its needs; and identifying key assumptions and risks and performing valid sensitivity and uncertainty analyses. We also continue to recommend, given the complexity of the issues it is facing, that AOC seek a review by an independent panel of experts as it fully updates its long-term energy plan and provide the results of this review to Congress. In addition to the individuals named above, Michael Armes (Assistant Director); Michael Hix (Assistant Director); John Delicath; Philip Farah; Cindy Gilbert; Geoff Hamilton; Dan Paepke; Mick Ray; and Shep Ryen made key contributions to this report.
AOC's CPP heats and cools 25 buildings in the complex, including the Capitol and House and Senate office buildings. CPP does not have the infrastructure to distribute electricity to the buildings it serves. CPP buys fossil fuels (mostly natural gas) to run boilers that make steam and buys electricity to run chillers that make chilled water. CPP distributes the steam and chilled water for heating and cooling using a network of tunnels. AOC seeks to install a 'cogeneration' system that would produce steam and electricity. The House of Representatives report accompanying the Legislative Branch Appropriations Bill, 2014, included a provision for GAO to analyze potential cost savings at CPP. GAO analyzed (1) measures AOC implemented since 2008 to manage the energy-related costs of the complex and opportunities, if any, to further manage these costs, and (2) how AOC decided to procure a cogeneration system and the extent to which AOC followed leading capital-planning practices. GAO analyzed AOC budgets and plans; reviewed federal guidance on capital planning; and interviewed AOC staff and other stakeholders, including other heating and cooling plant operators. The Architect of the Capitol (AOC) implemented many measures since 2008 to manage the energy-related costs of the Capitol Complex (the complex) and has opportunities to further manage these costs. AOC updated some of the Capitol Power Plant's (CPP's) production and distribution systems to reduce energy use and increase efficiency. AOC also implemented measures to reduce energy consumption in the complex, such as conservation projects improving lighting and air-handling systems that yielded monetary savings. AOC has opportunities to implement other conservation measures in the complex. For example, energy audits by contractors identified additional opportunities to implement similar measures or other upgrades to lighting, mechanical, and plumbing systems to achieve additional energy and monetary savings. 
However, AOC officials said they have not implemented these measures but intend to act as resources become available. AOC decided to procure a cogeneration system to produce electricity and steam based on a 2009 long-term plan and subsequent partial updates but did not follow key leading federal capital-planning practices. In 2009, AOC issued a long-term energy plan that stated it should pursue cogeneration to meet future steam demand and provide a new source of electricity for its chillers, enabling the agency to decrease electricity purchases. Partial updates to the plan in 2014 sought to justify the choice of a cogeneration system. However, AOC's planning did not follow key leading capital-planning practices developed by GAO and the Office of Management and Budget (OMB). First, AOC has not fully updated the 2009 long-term plan, as called for by leading federal planning practices, even though key planning assumptions, such as fuel prices and the complex's demand for energy, have changed. Instead, AOC intends to make a decision on implementing an $85 million cogeneration system before updating its long-term plan later in fiscal year 2015. Second, the 2014 partial updates to its 2009 plan that AOC has used to justify the project did not include complete information on the need or problem that the project would address. Third, the 2014 updates did not identify a full range of options for cost-effectively meeting projected future needs, including non-capital measures such as conservation. Fourth, the updates did not have valid sensitivity or uncertainty analyses to test key assumptions about whether the system would achieve sufficient savings over time—from decreased electricity purchases—to justify its costs. Related to this, AOC officials said that since upfront appropriations would likely not be available to procure the system, they had decided to use a third party to finance the project, thereby increasing its costs. 
These officials also said they relied on federal guidance for analyzing and financing energy projects. However, such guidance does not substitute for first completing an up-to-date capital plan. Finally, GAO's prior work has recommended using independent panels of experts to review complex projects such as a cogeneration system, but AOC has not engaged such a panel to review its 2014 updates to its long-term plan. AOC officials said they were unaware of some of these practices and that they needed to sign a contract quickly to avoid the risk of losing construction and air quality permits. Without updating its long-term energy plan and obtaining independent review, AOC may pursue a project that does not cost-effectively meet its needs. AOC should (1) update its long-term energy plan while following key leading practices, including considering a full range of measures to further manage costs, before committing to major energy projects at CPP, and (2) seek independent review of its plan. AOC disagreed with GAO's recommendations; GAO continues to believe they are valid, as discussed further in this report.
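The sensitivity and uncertainty analyses that the report faults AOC for omitting can be illustrated with a minimal Monte Carlo sketch. Apart from the roughly $85 million cogeneration cost and the $9.3 million boiler estimate cited in the report, every figure below is a hypothetical placeholder, not AOC data; the point is only to show how varying key assumptions (fuel prices, demand) tests whether projected savings justify a large upfront cost.

```python
import random

# Hypothetical inputs for illustration only; the per-unit fuel and
# electricity factors are invented, not drawn from AOC's analyses.
COGEN_UPFRONT = 85_000_000      # proposed cogeneration system (per report)
BOILER_UPFRONT = 9_300_000      # natural gas boiler alternative (per report)
YEARS = 25
DISCOUNT = 0.05

def npv_of_costs(upfront, annual_energy_cost):
    """Present value of the upfront cost plus discounted annual energy costs."""
    return upfront + sum(annual_energy_cost / (1 + DISCOUNT) ** t
                         for t in range(1, YEARS + 1))

def simulate(trials=10_000, seed=1):
    """Fraction of trials in which cogeneration has the lower life cycle cost."""
    random.seed(seed)
    cogen_wins = 0
    for _ in range(trials):
        # Key assumptions drawn from ranges rather than point estimates.
        elec_price = random.uniform(0.06, 0.14)   # $/kWh purchased
        gas_price = random.uniform(4.0, 10.0)     # $/MMBtu
        demand = random.uniform(0.8, 1.1)         # demand relative to forecast
        # Cogeneration offsets electricity purchases but burns more gas.
        cogen_annual = demand * (gas_price * 500_000 - elec_price * 60_000_000)
        boiler_annual = demand * (gas_price * 350_000)
        if npv_of_costs(COGEN_UPFRONT, cogen_annual) < npv_of_costs(BOILER_UPFRONT, boiler_annual):
            cogen_wins += 1
    return cogen_wins / trials

print(f"Cogeneration cheaper in {simulate():.0%} of trials")
```

Under the leading practices cited in the report, an option is preferred only if it remains cost-effective across a wide share of such trials, not merely at the point estimates used in a single spreadsheet comparison.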
Federal agencies, including DOD, can choose among numerous contract types to acquire products and services. One of the characteristics that vary across contract types is the amount and nature of the fee that agencies offer to the contractor for achieving or exceeding specified objectives or goals. Of all the contract types available, only award- and incentive-fee contracts allow an agency to adjust the amount of fee paid to contractors based on the contractor’s performance. Federal acquisition regulations state that award- and incentive-fee contracts should be used to achieve specific acquisition objectives, such as delivering products and services on time or within cost goals and with the promised capabilities. For award-fee contracts, the assumption underlying the regulation is that the likelihood of meeting these acquisition objectives will be enhanced by using a contract that effectively motivates the contractor toward exceptional performance. Typically, award-fee contracts emphasize multiple aspects of contractor performance in a wide variety of areas, such as quality, timeliness, technical ingenuity, and cost-effective management. These areas are susceptible to judgmental and qualitative measurement and evaluation, and as a result, award-fee criteria and evaluations tend to be subjective. Table 1 provides a description of the general process for evaluating the contractor and determining the amount of award fee earned. From fiscal year 1999 through fiscal year 2003, award- and incentive-fee contract actions accounted for 4.6 percent of all DOD contract actions over $25,000. However, when taking into account the dollars obligated— award- and incentive-fee contract actions accounted for 20.6 percent of the dollars obligated on actions over $25,000, or over $157 billion, as shown in figure 1. Our sample of 93 contracts includes $51.6 billion, or almost one-third, of those obligated award- and incentive-fee contract dollars. 
These obligations include award- and incentive-fee payments as well as other contract costs. DOD utilized the contracts in our sample for a number of purposes. For example, research and development contracts accounted for 51 percent (or $26.4 billion) of the dollars obligated against contracts in our sample from fiscal years 1999 through 2003, while non-research-and-development services accounted for the highest number of contracts in our sample. Further, we estimate that most of the contracts and most of the dollars in our study population are related to the acquisition of weapon systems. DOD has the flexibility to mix and match characteristics from different contract types. The risks for both DOD and the contractor vary depending on the exact combination chosen, which, according to the Federal Acquisition Regulation, should reflect the uncertainties involved in contract performance. Based on the results from our sample, about half of the contracts in our study population were cost-plus-award-fee contracts. The theory behind these contracts is that although the government assumes most of the cost risk, it retains control over most or all of the contractor’s potential fee as leverage. On cost-plus-award-fee contracts, the award fee is often the only source of potential fee for the contractor. According to defense acquisition regulations, these contracts can include a base fee—a fixed fee for performance paid to the contractor—of anywhere from 0 to 3 percent of the value of the contract; however, based on our sample results, we estimate that about 60 percent of the cost-plus-award-fee contracts in our study population included zero base fee. There is no limit on the maximum percentage of the value of the contract that can be made available in award fee, although the 20 percent included in the Space-Based Infrared System High development contract we examined was outside the norm. 
The available award fees on all the award-fee contracts in our study population typically ranged from 7 to 15 percent of the estimated value of the contract. DOD’s use of award and incentive fees is symptomatic of an acquisition system in need of fundamental reform. DOD’s historical practice of routinely paying its contractors nearly all of the available award fee creates an environment in which programs pay and contractors expect to receive most of the available fee, regardless of acquisition outcomes. This is occurring at a time when DOD is giving contractors increased program management responsibilities to develop requirements, design products, and select major system and subsystem contractors. Based on our sample, we estimate that for DOD award-fee contracts, the median percentage of available award fee paid to date (adjusted for rollover) was 90 percent, representing an estimated $8 billion in award fees for contracts active between fiscal years 1999 and 2003. Estimates of total award fees earned are based on all evaluation periods held from the inception of our sample contracts through our data collection phase, not just those from fiscal years 1999 through 2003. Figure 2 shows the percentage of available fee earned for the 63 award-fee contracts in our sample. The pattern of consistently high award-fee payouts is also present in DOD’s fee decisions from evaluation period to evaluation period. This pattern is evidence of reluctance among DOD programs to deny contractors significant amounts of fee, even in the short term. We estimate that the median percentage of award fee earned for each evaluation period was 93 percent and that the contractor received 70 percent or less of the available fee in only 9 percent of the evaluation periods and none of the available fee in only 1 percent of the evaluation periods. In a policy memorandum issued March 29, 2006, DOD emphasized the need to link award fees to desired program outcomes. 
Award fees have generally not been effective at helping DOD achieve its desired acquisition outcomes, in large part, because award-fee criteria are not linked to desired acquisition outcomes, such as meeting cost and schedule goals and delivering desired capabilities. Instead, DOD programs structure award fees to focus on the broad aspects of contractor performance, such as technical and management performance and cost control, that they view as keys to a successful program. In addition, elements of the award-fee process, such as the frequency of evaluations and the composition of award-fee boards, may also limit DOD’s ability to effectively and impartially evaluate the contractor’s progress toward acquisition outcomes. Most award-fee evaluations are time-based, generally every six months, rather than event-based; and award-fee boards are made up primarily of individuals directly connected to the program. As a result of all these factors, DOD programs frequently paid most of the available award fee for what they described as improved contractor performance, regardless of whether acquisition outcomes fell short of, met, or exceeded DOD’s expectations. High award-fee payouts on programs that have fallen or are falling well short of meeting their stated goals are also indicative of DOD’s failure to implement award fees in a way that promotes positive performance and adequate accountability. Several major development programs— accounting for 52 percent of the available award-fee dollars in our sample and 46 percent of the award-fee dollars paid to date—are not achieving or have not achieved their desired acquisition outcomes, yet contractors received most of the available award fee. 
These programs—the Comanche helicopter, F/A-22 and Joint Strike Fighter aircraft, and the Space-Based Infrared System High satellite system—have experienced significant cost increases, technical problems, and development delays, but the prime systems contractors have received 85, 91, 100, and 74 percent of the award fee to date (adjusted for rollover), respectively, totaling $1.7 billion (see table 2). DOD can ensure that fee payments are more representative of program results by developing fee criteria that focus on its desired acquisition outcomes. For instance, DOD’s Missile Defense Agency attempted to hold contractors accountable for program outcomes on the Airborne Laser program. On this program, DOD revised the award-fee plan in June 2002 as part of a program and contract restructuring. The award-fee plan was changed to focus on achieving a successful system demonstration by December 2004. Prior to the restructuring, the contractor had received 95 percent of the available award fee, even though the program had experienced a series of cost increases and schedule delays. Importantly, the contractor did not receive any of the $73.6 million award fee available under the revised plan because it did not achieve the key program outcome—successful system demonstration. While DOD stated that award-fee arrangements should be structured to encourage the contractor to earn the preponderance of the fee by providing excellent performance, it maintains that paying a portion of the fee for satisfactory performance is appropriate to ensure that contractors receive an adequate fee on contracts. In its March 29, 2006 policy memo, DOD reiterated this position and emphasized that less than satisfactory performance is not entitled to any award fee. In the same memo, DOD also provided guidance on and placed several limitations on the use of rollover. 
DOD programs routinely engage in award-fee practices that are inconsistent with the intent of award fees, reduce the effectiveness of these fees as motivators of performance, compromise the integrity of the fee process, and waste billions in taxpayer money. Two practices, in particular, paying significant amounts of fee for “acceptable, average, expected, good, or satisfactory” performance and providing contractors multiple opportunities to earn fees that were not earned when first made available, undermine the effectiveness of fees as a motivational tool and marginalize their use in holding contractors accountable for acquisition outcomes. Although DOD guidance and federal acquisition regulations state that award fees should be used to motivate excellent contractor performance, most DOD award-fee contracts pay a significant portion of the available fee for what award-fee plans describe as “acceptable, average, expected, good, or satisfactory” performance. Although the definition of this level of performance varies by contract, these definitions are generally not related to outcomes. Some plans for contracts in our sample did not even require the contractor to meet all of the minimum standards or requirements of the contract to receive one of these ratings. Some plans also allowed for fee to be paid for marginal performance. Even fixed-price-award-fee contracts, which already include a normal level of profit in the price, paid out award fees for satisfactory performance. Figure 3 shows the maximum percentage of award fee paid for “acceptable, average, expected, good, or satisfactory” performance and the estimated percentage of DOD award-fee contracts active between fiscal years 1999 through 2003 that paid these percentages. The use of rollover is another indication that DOD’s management of award fees lacks the appropriate incentives, transparency, and accountability necessary for an effective pay-for-performance system. 
Rollover is the process of moving unearned available award fee from one evaluation period to a subsequent evaluation period, thereby providing the contractor an additional opportunity to earn that previously unearned award fee. We estimate that 52 percent of DOD award-fee contracts rolled over unearned fees into subsequent evaluation periods, and in 52 percent of these periods, at least 99 percent of the unearned fee was rolled over. Overall, for DOD award-fee contracts active between fiscal years 1999 through 2003, we estimate that the total dollars rolled over across all evaluation periods that had been conducted by the time of our review was $669 million. DOD plans to review new contracts to make sure award-fee criteria reflect desired acquisition outcomes and award-fee structures motivate excellent contractor performance by only providing fees for above-satisfactory performance. DOD also plans to conduct an analysis to determine what the appropriate approving official level should be for new contracts utilizing award fees and to issue additional guidance, if needed, by June 1, 2006. The inconsistent application of DOD’s existing policies on award fees and weapon system development reinforces the need for increased transparency and accountability in DOD’s management of award fees. Although DOD award-fee guidance and federal acquisition regulations state that award fees should be used to motivate excellent contractor performance, most DOD award-fee contracts still pay a significant portion of the available fee for what award-fee plans describe as “acceptable, average, expected, good, or satisfactory” performance. Air Force, Army, and Navy guidance states that rollover should rarely be used in order to avoid compromising the integrity of the award-fee evaluation process; however, about half of the contracts in our study population used rollover. 
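To make the effect of rollover concrete, the following sketch computes the share of available award fee a contractor ultimately earns with and without rollover. The fee amounts and period ratings are hypothetical, not figures from any contract in the sample:

```python
def fee_earned_with_rollover(periods, rollover=True):
    """periods: list of (available_fee, fraction_earned) per evaluation period.
    When rollover=True, unearned fee is added to the next period's pool,
    giving the contractor a second chance to earn it."""
    total_available = sum(available for available, _ in periods)
    carried = 0.0
    earned = 0.0
    for available, frac in periods:
        pool = available + (carried if rollover else 0.0)
        period_earned = pool * frac
        earned += period_earned
        carried = pool - period_earned
    return earned / total_available

# Hypothetical contract: $10M available each period, mixed performance ratings.
periods = [(10e6, 0.60), (10e6, 0.90), (10e6, 0.95)]
print(f"without rollover: {fee_earned_with_rollover(periods, rollover=False):.0%}")  # 82%
print(f"with rollover:    {fee_earned_with_rollover(periods):.0%}")                  # 98%
```

Because unearned fee is simply re-offered, a period of weak performance costs the contractor little in the end, which is why the services' guidance says rollover should rarely be used.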
DOD will conduct an analysis of existing award- and incentive-fee data within existing data systems, such as the Defense Acquisition Management Information Retrieval system, and determine which system, if any, is best suited to capture this type of data and at what cost. DOD expects to complete the study by June 1, 2006. DOD will also review and identify possible performance measures to evaluate the effectiveness of award and incentive fees as a tool for improving contractor performance and achieving desired program outcomes, and will determine the appropriate actions by June 1, 2006. In its March 29, 2006 policy memo, DOD tasked Defense Acquisition University to develop an online repository for award- and incentive-fee policy information, related training courses, and examples of good award-fee arrangements. Very little effort has gone into determining whether DOD’s current use of monetary incentives is effective. Over the past few years, officials including the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Assistant Secretary of the Air Force for Acquisition expressed concerns that contractors routinely earn high percentages of fee while programs have experienced performance problems, schedule slips, and cost growth. However, DOD has not compiled information, conducted evaluations, shared lessons learned, or used performance measures to judge how well award and incentive fees are improving or can improve contractor performance and acquisition outcomes. The lack of data is exemplified by the fact that DOD does not track such basic information as how much it pays in award and incentive fees. Such information collection across DOD is both necessary and appropriate. DOD’s use of award-fee contracts, especially for weapon system development, reflects the fundamental lack of knowledge and program instability that we have consistently cited as the main reasons for DOD’s poor acquisition outcomes. 
DOD uses these fees in an attempt to mitigate the risks that it creates through a flawed approach to major weapon system development. The DOD requirements, acquisition, budgeting, and investment processes are broken and need to be fixed. DOD’s requirements process generates much more demand for new programs than fiscal resources can reasonably support. The acquisition environment encourages launching product developments that promise the best capability but embody too many technical unknowns and too little knowledge about the performance and production risks they entail. However, a new program will not be approved unless its costs fall within forecasts of available funds and it therefore looks affordable. Further, because programs are funded annually and departmentwide cross-portfolio priorities have not been established, competition for funding continues over time, forcing programs to view success as the ability to secure the next funding increment rather than delivering capabilities when expected and as promised. The business cases to support weapon system programs that result from these processes are in many cases not executable because the incentives inherent in the current defense acquisition system are not conducive to establishing realistic cost, schedule, and technical goals. As a result, DOD has to date not been willing to hold its programs or its contractors accountable for achieving its specified acquisition outcomes. Instead, faced with a lack of knowledge and the lack of a sound business case, DOD programs use award-fee contracts, which by their very nature allow DOD to evaluate its contractors on a subjective basis. This results in billions of dollars in wasteful payments because these evaluations are based on contractors’ ability to guide programs through a broken acquisition system, not on achieving desired acquisition outcomes. 
Implementing our recommendations on award and incentive fees will not fix the broader problems DOD faces with its management of major weapons or service acquisitions. However, by implementing our recommendations, DOD can improve incentives, increase transparency, and enhance accountability for the fees it pays. In particular, moving toward more outcome-based award-fee criteria would give contractors an increased stake in helping DOD to develop more realistic targets upfront or risk receiving less fee when unrealistic cost, schedule, and performance targets are not met. To make this new approach to incentives function as intended, DOD would also need to address the more fundamental issues related to its management approach, such as the lack of a sound business case, lack of well-defined requirements, lack of product knowledge at key junctions in development, and program instability caused by changing requirements and across-the-board budget cuts. Working in concert, these steps can help DOD set the right conditions for more successful acquisition outcomes and make more efficient use of its resources in what is sure to be a more fiscally constrained environment as the nation approaches the retirement of the “baby boom” generation. Last week, DOD issued a policy memorandum on award-fee contracts that takes steps towards addressing several of the recommendations made in our report, and the department has indicated that further actions are planned to address the remaining recommendations. This guidance is a positive first step, but, like so many prior DOD concurrences, its effectiveness will ultimately be determined by how well it is implemented. Identifying who will be responsible for ensuring it is carried out and how progress will be monitored and measured are key ingredients that are missing in the new guidance. 
We continue to believe that DOD must designate appropriate approving officials to review new contracts to ensure that award-fee criteria are tied to desired acquisition outcomes; fees are used to promote excellent performance; and the use of rollover provisions in contracts is the exception not the rule. Changing DOD award-fee practices will also require a change in culture and attitude. The policy memorandum’s position that it is appropriate to pay a portion of the available award fee for satisfactory performance to ensure that contractors receive an “adequate fee on contracts” is indicative of DOD’s resistance to cultural change. Finally, we encourage the department to fully implement our remaining recommendations including developing a mechanism to capture award- and incentive-fee data and developing performance measures to evaluate the effectiveness of these fees. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. In this statement, we examine fixed-price and cost-reimbursable award- and incentive-fee contracts, as well as contracts that featured combinations of these contract types. These contracts were selected as part of a probability sample of 93 contracts from a study population of 597 DOD award-fee and incentive-fee contracts that were active between fiscal years 1999 and 2003 and had at least one contract action coded as cost-plus-award-fee, cost-plus-incentive-fee, fixed-price-award-fee, or fixed-price incentive valued at $10 million or more during that time. Unless otherwise noted, the estimates in this statement pertain to (1) this population of award- and incentive-fee contracts, (2) the subpopulation of award-fee contracts, or (3) the evaluation periods associated with contracts described in (1) or (2) that had been completed at the time of our review. 
In the sample, 52 contracts contained only award-fee provisions; 27 contracts contained only incentive-fee provisions; and 14 contracts included both. Estimates of total award fees earned and total award fees that contractors received at least two chances to earn are based on all evaluation periods held from the inception of our sample contracts through our data collection phase, not just those from fiscal years 1999 through 2003. Because the estimates in this report are derived from a probability sample, they are subject to sampling error. All percentage estimates from our review have margins of error not exceeding plus or minus 10 percentage points unless otherwise noted. All numerical estimates other than percentages (such as totals and ratios) have margins of error not exceeding plus or minus 25 percent of the value of those estimates. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
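The margin-of-error bounds described above can be approximated with a standard formula for a proportion estimated from a probability sample. The sketch below assumes a simple random sample with a finite-population correction; GAO's actual sample design and weighting may differ, so the computed margin is illustrative only.

```python
import math

def proportion_moe(p_hat, n, N, z=1.96):
    """Approximate 95% margin of error for a proportion p_hat estimated from
    a simple random sample of n units drawn from a population of N units,
    applying a finite-population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p_hat * (1 - p_hat) / n) * fpc

# Illustrative use of figures from the statement: an estimate of 52 percent,
# with 63 award-fee contracts sampled from a study population of 597 contracts.
moe = proportion_moe(0.52, 63, 597)
print(f"52% ± {moe:.1%}")
```

Under these simplifying assumptions the margin comes out somewhat above 10 percentage points, which illustrates why a stratified or weighted design, as GAO's methodology implies, is needed to achieve the tighter bounds stated above.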
With DOD spending over $200 billion annually to acquire products and services that include everything from spare parts to the development of major weapon systems, our numerous, large, and mounting fiscal challenges demand that DOD maximize its return on investment and provide the warfighter with needed capabilities at the best value for the taxpayer. In an effort to encourage defense contractors to perform in an innovative, efficient, and effective way, DOD gives its contractors the opportunity to collectively earn billions of dollars through monetary incentives known as award and incentive fees. Using these incentives properly, in concert with good acquisition practices, is a key to minimizing waste, maximizing value, and getting our military personnel what they need, when and where they need it. Congress asked GAO to testify on DOD's use of award and incentive fees and the role they play in the acquisition system. This statement highlights the risks of conducting business as usual and identifies the actions DOD needs to take to use these fees more effectively. DOD concurred or partially concurred with the seven recommendations GAO made in a previously issued report on award and incentive fees. GAO looks forward to seeing DOD turn these commitments into actual policy and practice. DOD's use of award and incentive fees is an issue at the nexus of two areas that GAO has designated "high risk" for DOD: contract management and weapon system acquisition. Contract management has been a long-standing business management challenge for DOD because it often cannot assure that it is using sound business practices to acquire the goods and services the warfighter needs. For weapon system acquisitions, the persistent and long-standing nature of acquisition problems has perhaps made a range of key decision makers complacent about cost growth, schedule delays, quantity reductions, and performance shortfalls. 
DOD's strategies for incentivizing its contractors, especially for weapon system development programs, reflect the challenges in these areas. DOD programs routinely engage in award-fee practices that do not hold contractors accountable for achieving desired outcomes and undermine efforts to motivate contractor performance, such as evaluating contractors on award-fee criteria that are not directly related to key acquisition outcomes (e.g., meeting cost and schedule goals and delivering desired capabilities to the warfighter); paying contractors a significant portion of the available fee for what award-fee plans describe as "acceptable, average, expected, good, or satisfactory" performance; and giving contractors at least a second opportunity to earn initially unearned or deferred fees. As a result, DOD has paid out an estimated $8 billion in award fees on contracts in GAO's study population, regardless of whether acquisition outcomes fell short of, met, or exceeded DOD's expectations. Despite paying billions of dollars, DOD has not compiled data or developed performance measures to evaluate the validity of its belief that award and incentive fees improve contractor performance and acquisition outcomes. These issues, along with those GAO has identified in DOD's acquisition and business management processes, present a compelling case for change. By implementing the recommendations GAO has made on award and incentive fees, DOD can improve incentives, increase transparency, and enhance accountability for the fees it pays. At the same time, by working more broadly to improve its acquisition practices, DOD can set the right conditions for getting better acquisition outcomes and making more efficient use of its resources in what is sure to be a more fiscally constrained environment.
The core mission of Diplomatic Security is to provide a safe and secure environment for the conduct of U.S. foreign policy. Diplomatic Security is one of several bureaus that report to the Undersecretary for Management within State and contains several directorates, including Diplomatic Security’s Training Directorate (see app. II). To implement U.S. statute, the Diplomatic Security Training Directorate trains or helps train Diplomatic Security’s 1,943 law enforcement agents and investigators, 340 technical security specialist engineers and technicians, 101 couriers, and a growing number of new Security Protective Specialists, as well as other U.S. government personnel, and runs several specialized programs designed to enhance Diplomatic Security’s capabilities. In fiscal year 2010, DSTC conducted 342 sessions of its 61 courses and trained 4,739 students. The training directorate is headed by a senior Foreign Service Officer and has three offices, the Offices of Training and Performance Standards, Mobile Security Deployment (MSD), and Antiterrorism Assistance, which do the following: The Office of Training and Performance Standards’ mission is to train and sustain a security workforce capable of effectively addressing law enforcement and security challenges to support U.S. foreign policy in the global threat environment—now and into the future. The office’s mission has grown along with the expanding mission of Diplomatic Security. The Office of Training and Performance Standards encompasses DSTC and is often referred to as DSTC. The office is the primary provider of Diplomatic Security’s training, and its entire mission falls within the scope of this report; its efforts are the focus of our review. 
The office also provides personal security training to Diplomatic Security and non-Diplomatic Security personnel posted to high-threat environments, including the 5-week High Threat Tactical (HTT) course, designed for Diplomatic Security special agents and Security Protective Specialists operating in high-threat or hazardous environments; the 3-week Security for Non-traditional Operating Environment (SNOE) course, designed for Civilian Response Corps and Provincial Reconstruction Team personnel operating in remote areas; and the 1-week Foreign Affairs Counter Threat (FACT) course, designed for all U.S. personnel under Chief of Mission authority at high-threat posts such as Afghanistan, Iraq, or Pakistan. The Office of Mobile Security Deployment’s mission is to provide security training and exercises for overseas posts, enhanced security for overseas posts, and counterassault capability for domestic and overseas protective security details. The first of these missions—to provide training to U.S. government personnel and dependents at posts abroad—falls within the scope of this report. The Office of Antiterrorism Assistance’s mission is to build the counterterrorism capacity of friendly governments, enhance bilateral relationships, and increase respect for human rights. Because of its exclusive training of non-U.S. government personnel, the Office of Antiterrorism Assistance falls outside the scope of this report. Diplomatic Security’s training budget grew steadily from fiscal years 2006 to 2010—increasing from approximately $24 million in fiscal year 2006 to nearly $70 million in fiscal year 2010 (see table 1). During this period, Diplomatic Security’s training budget increased from 1.5 percent to 3 percent of the bureau’s total budget. The Diplomatic Security Training Directorate is responsible for training Diplomatic Security’s over 3,000 direct hires to carry out various security functions (see table 2). 
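The budget figures above imply rapid year-over-year growth. As an illustrative aside, the implied compound annual growth rate can be sketched as follows; the dollar amounts are approximations taken from the report, and the growth-rate calculation is ours, not part of GAO's analysis.

```python
# Illustrative growth arithmetic for the cited training-budget figures:
# roughly $24 million in FY2006 rising to nearly $70 million in FY2010.
# The compound annual growth rate (CAGR) calculation is illustrative only.

FY2006_BUDGET = 24_000_000
FY2010_BUDGET = 70_000_000
YEARS = 4  # FY2006 through FY2010

cagr = (FY2010_BUDGET / FY2006_BUDGET) ** (1 / YEARS) - 1
print(f"Implied compound annual growth: {cagr:.1%}")
```

Under these approximations, the training budget grew at roughly 30 percent per year over the period.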
The size of Diplomatic Security’s direct-hire workforce has more than doubled since 1998. Recently, Diplomatic Security’s reliance on contractors has grown to fill critical needs in high-threat posts. According to DSTC officials, they also rely on contractors to support course development and serve as instructors in many of their courses. In addition to training Diplomatic Security personnel, the Training Directorate also provides training to non-State personnel supporting embassy security functions such as the Marine Security Guards and Navy Seabees, as well as to personnel from other federal agencies through its high-threat training and information security awareness courses. To ensure the quality and appropriateness of its training, Diplomatic Security primarily adheres to Federal Law Enforcement Training Accreditation (FLETA) standards, along with other statutory and State standards. In 2005, Diplomatic Security incorporated the FLETA standards into its standard operating procedures, using a course design framework tailored for the organization. To meet the combination of FLETA and other standards, DSTC integrates both formal and informal feedback from evaluations and other sources into its courses. However, DSTC does not have the systems in place to obtain feedback from its entire training population. Diplomatic Security’s training responsibilities are established by a number of statutory standards and State Department policies. The Omnibus Diplomatic Security and Antiterrorism Act of 1986, as codified at section 4802 of title 22 of the United States Code, provided the security authorities for the Secretary of State. The Secretary of State delegated these security responsibilities, including law enforcement training, to Diplomatic Security and granted it authority to establish its own training academy. 
Diplomatic Security also follows policy guidance and procedures found in State’s Foreign Affairs Manual (FAM) and its Foreign Affairs Handbooks, which also establish Diplomatic Security’s Training Directorate. Diplomatic Security is accredited by and relies primarily on the standards of the FLETA process. The FLETA Board was established in 2002 to create and maintain a body of standards to enhance the quality of law enforcement training and to administer an independent accreditation process for federal law enforcement agencies. The voluntary accreditation process provides assurance that every 3 years, the agency carries out a systematic self-assessment to ensure the standards established by the law enforcement community are met; each self-assessment must be verified by FLETA’s external peer reviewers, whose findings are then reviewed by a committee of the FLETA Board. FLETA standards are designed to describe what must be accomplished; however, it is up to each agency to determine how it will meet the standards. Agencies may submit applications to have their basic agent and instructor development courses accredited, and if they obtain accreditation for both courses, they can apply for academy accreditation. In 2010, FLETA revised its standards. (For more details on the FLETA process see app. III.) Beginning in 2005, DSTC established standard operating procedures in order to comply with FLETA and other standards. In 2005, Diplomatic Security began hiring training professionals and created the Instructional Systems Management division to formalize course development, instead of relying solely on the knowledge of experienced personnel and subject matter experts. According to DSTC officials, the formalized process resulted in greater consistency in how courses are developed and taught. Diplomatic Security was the first federal agency to ever receive accreditation through the FLETA process, in 2005, and was reaccredited in 2008. 
(For more details on DSTC’s accreditation results see app. IV.) DSTC is currently undergoing a new cycle of reaccreditation. DSTC officials expressed confidence that their courses and the academy would be reaccredited. To meet accreditation standards and its training needs, DSTC uses an industry-recognized training framework for course design and development. According to a senior FLETA official, 44 percent of FLETA standards are based on this training framework. The seven-phased DSTC framework is applied to new courses or course revisions (see fig. 1 and app. V for examples of the documents and reports created during the different phases of the framework and hyperlinked to the figure). Throughout the process and at each phase, DSTC involves division chiefs, branch chiefs, subject matter experts, and its instructional staff. At the end of each phase, a report is produced for a DSTC training advisor to approve before the process progresses to the next phase. The seven phases are as follows:

Proposal phase: DSTC staff analyzes the request for development or revision of a training course and makes recommendations to senior management on whether to proceed.

Analysis phase: DSTC staff examines the audience, identifies job tasks and job performance measures, selects the instructional setting, and validates cost estimates. A task list is developed to guide initial course development, which involves subject matter experts in verifying the job tasks.

Design phase: DSTC staff determines the training objectives, lists course prerequisites, identifies needed learning objectives, and establishes the appropriate performance tests.

Development phase: DSTC staff develops the appropriate instructional materials, reviews and selects existing course materials, and develops the necessary coursework.

Implementation phase: A pilot course is created and taught by an approved instructor to a targeted audience. The pilot course is tested and observed by both subject matter experts and instructional design staff.

Evaluation phase: DSTC staff and the students evaluate the effectiveness of the training. DSTC conducts three types of evaluations: (1) tier-1 evaluations of the training and the instructors by the students shortly after taking the course, (2) tier-2 evaluations to check the extent of knowledge and skills transfer to the students during the course, and (3) tier-3 evaluations of the students’ ability to apply the training on the job 6 to 12 months after training, depending on when the skills are used. According to DSTC officials, tier-1 and tier-3 evaluations are generally made up of survey questions with some short answers, while tier-2 evaluations involve testing students through either a practical or written exam, or both.

Revision phase: Courses go through the revision process at least every 5 years, prompted and guided in part by evaluations and feedback from students, supervisors, and other stakeholders.

DSTC applies its training framework to all courses, not just the courses for which it seeks accreditation through the FLETA process. We previously reported that agencies need to ensure that they have the flexibility and capability to quickly incorporate changes into training and development efforts when needed. According to DSTC, its training framework allows for flexibility and supports frequent evaluation, giving Diplomatic Security the ability to respond to changes in its mission and its customers’ requirements. Moreover, agency officials noted that because DSTC’s training framework model is well established for developing courses, Mobile Training Teams and Diplomatic Couriers—both of which provide training to meet their own organizational needs outside of DSTC—use the model as a foundation for tailoring their courses. DSTC uses a variety of methods to collect feedback from students, supervisors, and other stakeholders. 
FLETA standards and DSTC’s standard operating procedures require DSTC to collect feedback and use significant feedback to shape and revise courses. According to DSTC, feedback is valued because it demonstrates the extent to which the training is yielding the desired outcomes in performance and helps instructional staff identify what should be modified to achieve the outcomes more effectively. DSTC receives feedback from multiple sources, including tier-1, tier-2, and tier-3 evaluations, as well as focus groups, in-country visits, inspection reports, counterparts across the government, and directives from senior officials—such as ambassadors. For instance, following the 1998 embassy bombings, DSTC implemented the State-convened Accountability Review Board recommendation to enhance surveillance detection and crisis management training provided to the Regional Security Officers (RSOs). In addition, DSTC regularly meets with other State offices and bureaus to discuss how to maintain effective training or identify needed changes to course material. For example, DSTC meets quarterly with the Office of International Programs, which is responsible for managing the RSOs posted overseas, to ensure that the basic Regional Security Officer course materials remain relevant. HTT provides another example of course revision. HTT was initially 39 days long but was shortened to about 27 days in response to senior management’s need to get more people overseas faster, as well as feedback from agents indicating that they were not extensively using certain aspects of the course such as land navigation and helicopter training. (See the fig. 2 text box concerning revisions to the FACT course for more examples of how feedback is incorporated into course revisions.) On the basis of interviews with Diplomatic Security personnel at nine posts and training sites, we found that DSTC’s overall training was viewed as high-quality and appropriate. 
Diplomatic Security personnel we interviewed generally agreed that DSTC’s training was a significant improvement compared with the training they received prior to DSTC’s accreditation. Because of difficulties obtaining satisfactory response rates for some evaluations, identifying users of its distributed learning efforts, and contacting non-State students, DSTC officials acknowledged that their systems do not have the capability to obtain a comprehensive evaluation of all of their training as required by their training framework. However, DSTC officials said they are exploring ways to identify users of its distributed learning efforts and to contact non-State students. We previously reported that evaluating training is important and that agencies need to develop systematic evaluation processes to assess the benefits of training development efforts. According to DSTC officials, the tier-1 response rate for most courses averages about 80-90 percent, and the tier-3 evaluation response rate for its courses averages about 30 percent for 6-month feedback. DSTC officials acknowledged that they currently do not have a system in place to identify who has accessed distributed learning and certain other learning tools, and thus they have few effective options for soliciting student feedback on those tools. According to DSTC officials, distributed learning efforts are growing as part of DSTC efforts to save costs and reach people in the field. DSTC is exploring several different ways to deliver distributed learning efforts. For example, Diplomatic Security is expected to provide personnel recovery training to about 20,000 people—many of whom are non-State personnel. This training will be done primarily through online distributed learning as well as classroom instruction. 
In addition to its distributed learning efforts, DSTC sends out to posts its “Knowledge from the Field” DVDs, an information and professional development product that includes lessons learned from attacks and other incidents at consulates and embassies. DSTC is also developing new interactive computer-based training simulations. However, DSTC’s systems do not have the capability to track who is accessing its online materials or who is accessing the DVDs. Without knowing who to send evaluations to, DSTC cannot solicit feedback to see if these efforts are helpful or effective. According to DSTC officials, DSTC also has difficulty obtaining feedback from non-State personnel, which constitute a growing portion of its student body because of DSTC’s provision of training to multiple agencies. For example, DSTC provides information awareness and cybersecurity training to State, as well as the Department of Homeland Security and National Archives and Records Administration, among others. In addition, as noted in figure 2, the number of students taking FACT training, which is provided to non-State personnel, has increased significantly. While DSTC collects feedback after each lesson and course via tier-1 evaluations and makes efforts to collect tier-3 evaluations, according to DSTC officials, it is the responsibility of the students’ home agencies to send out evaluations to their personnel on the training that DSTC provides. According to DSTC officials, evaluations conducted by other agencies are not automatically shared with DSTC. Instead, to measure the effectiveness of its training for non-State personnel, DSTC relies on voluntary comments from the agencies or individual students from those agencies. DSTC officials noted that they are pursuing access to a more robust learning management system to address some of the difficulties with their existing systems. 
Learning management systems are software applications for the administration, documentation, tracking, and reporting of training programs, classroom and online events, e-learning programs, and training content. DSTC officials stated that their current suite of software, including Microsoft Office SharePoint and several State-specific systems, does not provide all the functionality they need to effectively evaluate all of their courses. DSTC has increased its reliance on using Microsoft Office SharePoint to store current learning materials for DSTC courses on its intranet, but the software does not have an evaluation mechanism in place. According to DSTC officials, they were interested in procuring a learning management system that would cost about $284,000, with additional maintenance and technical support costing about $28,500 a year. In 2009, DSTC officials conducted a cost-benefit analysis by examining the savings from converting two existing courses into courses delivered entirely online. The analysis indicated that State would save about $2 million a year in travel costs alone, and that the system would give DSTC a number of additional functionalities. According to DSTC, as of May 2011, its request to purchase the system is under review, and DSTC was advised to explore FSI’s learning management system. According to FSI and DSTC officials, DSTC began discussions with FSI about the use of FSI’s learning management system. FSI officials noted that FSI’s learning management system has or can be modified to have several of the capabilities DSTC is looking for, including the ability to limit access to specific groups (such as Diplomatic Security personnel or non-State personnel), to distribute and evaluate distributed learning, and to e-mail evaluations to non-State students. According to DSTC officials, DSTC and FSI are working to create a subdomain in FSI’s learning management system for DSTC content. 
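The cost-benefit figures cited above imply a payback period of well under a year. A minimal sketch of that arithmetic, using only the dollar amounts reported; the calculation itself is illustrative and is not part of DSTC's analysis:

```python
# Back-of-the-envelope payback calculation using the figures cited in
# DSTC's 2009 cost-benefit analysis. The dollar amounts come from the
# report; the break-even arithmetic is illustrative only.

PURCHASE_COST = 284_000            # one-time system purchase, dollars
ANNUAL_SUPPORT = 28_500            # yearly maintenance and technical support
ANNUAL_TRAVEL_SAVINGS = 2_000_000  # estimated yearly travel-cost savings


def net_savings(years: float) -> float:
    """Cumulative net savings after a given number of years."""
    return ANNUAL_TRAVEL_SAVINGS * years - (PURCHASE_COST + ANNUAL_SUPPORT * years)


# Break-even point: purchase cost divided by net annual benefit
break_even_years = PURCHASE_COST / (ANNUAL_TRAVEL_SAVINGS - ANNUAL_SUPPORT)
print(f"Break-even after about {break_even_years:.2f} years")
print(f"Net savings after 1 year: ${net_savings(1):,.0f}")
```

At roughly $2 million in annual savings against a one-time cost of $284,000 plus $28,500 per year in support, the system would pay for itself in about 2 months under these assumptions.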
They are also discussing the process for using the learning management system for classified material. As of May 2011, these matters are still under discussion. Diplomatic Security developed career training paths for its personnel that identify the training required for each major job position at different career levels. Using various systems, Diplomatic Security can track instructor-led training that its personnel take. However, DSTC’s systems do not have a way of accumulating the names of personnel who have not taken required courses. DSTC also faces difficulties tracking everyone who receives training through its distributed learning and its courses for non-State personnel. However, DSTC is working to address these difficulties. DSTC established career training paths that specify the required training for entry-level, midlevel, and in some cases senior-level personnel according to their career specialty (see fig. 3, and for a description of the specialty positions, see table 2 above). All Diplomatic Security Foreign Service career specialists attend required State orientation for Foreign Service personnel provided by FSI. (For additional information on training at FSI, see our recently issued report on State training.) As they progress from entry level to midlevel, and in some specialties to senior level, Diplomatic Security personnel follow their career training paths. For example, after orientation, the Security Engineering Officers (SEOs) take technical and fundamental training. As the SEOs move on to midlevel positions, they complete a variety of in-service training courses. All midlevel and most senior-level positions require leadership and management training provided by FSI. DSTC officials noted that all DSTC training fulfills either a career training path requirement or some other training requirement. 
For example, outside of standard training courses, DSTC provides specialized training to meet evolving threats, such as HTT, that is required for special agents at high-threat posts. See appendix VI for additional details on the training requirements for different career paths. DSTC uses various systems to track participation in its training. DSTC relies on State’s Career Development and Assignments Office and its registrar database to keep records, in addition to an internal tracker for participants in FACT training. State’s Career Development and Assignments office also provides career development guidance and is responsible for ensuring that State personnel attend training required for upcoming assignments. For example, when an agent is assigned to a high-threat post, the office checks to make sure the agent has taken the HTT course; an agent who has not taken the course is scheduled for training, and must complete the course prior to deploying. Both the DSTC registrar and the Career Development and Assignments office use the Student Training Management System to track the training completed by State personnel. This system is State’s registrar system for maintaining personnel training records; it records enrollments, no-shows, and completions. The Student Training Management System regularly provides updated data directly to the Government Employee Management System, State’s human resources management system, which populates training information into employee personnel records. The DSTC registrar office and State’s Career Development and Assignments office work together to confirm completion of training before personnel move on to their next assignment. 
However, if State employees need to demonstrate course completion, they can access the Student Training Management System online to retrieve a copy of their training record from their personal records and print out an unofficial transcript for their supervisor; alternatively, their supervisor can contact the DSTC registrar’s office to verify that the student has completed the course. The registrar database can verify whether an individual has taken high-threat training, but it cannot generate a consolidated list of all personnel who have done so. Because State is responsible for the safety and security of U.S. personnel under Chief of Mission authority and requires high-threat training for all personnel at high-threat posts, DSTC officials noted that they have instituted unofficial methods of tracking completion of the training for those going to these posts. DSTC designed and implemented the FACT tracker on its internal web site to log in all personnel who take the class. The FACT tracker provides a continuously updated unofficial document listing all personnel who have taken the FACT course—which includes non-State students. The RSOs in high-threat posts can access the FACT tracker to check on new arrivals to see if they have taken the course. Those who have not completed FACT must remain within the safety of the compound until they are sent home. DSTC officials acknowledged that in the past—before the FACT tracker—they used graduation photographs of FACT graduates to ensure that personnel completed the required training. This was a flawed verification process since students could opt out of having their photos taken. In addition to the FACT tracker, Diplomatic Security maintains a separate spreadsheet of over 700 agents who have taken HTT, which is always available for the director of Diplomatic Security to consult. This enables the director to quickly determine which agents are eligible for assignments to support temporary needs at high-threat posts. 
According to DSTC officials, DSTC faces challenges in ensuring that personnel complete all required training, particularly in tracking personnel who use distributed learning efforts. However, DSTC has initiatives in place to address some of these issues. The challenges stem from a combination of factors, including training schedules that are constrained by the lack of resources and staff. This creates an obstacle for personnel who cannot fit the training into their work schedules or whose jobs take priority. According to several Diplomatic Security personnel, staff often do not have the time to take in-service training when required, in part because of scheduling constraints. For example, staff could be on temporary duty or travel when in-service training is offered. In addition to the costs for travel to attend in-service training at other posts, several posts are understaffed. According to the Diplomatic Security personnel, they often do not have enough personnel to support the post when staff go to in-service training. Even though DSTC relies on the Student Training Management System, the system does not allow DSTC to effectively track who has or has not taken which courses and when, or to schedule a person for the next available course. According to DSTC officials, their system does not have the ability to automatically identify how many people required to take a given course have not yet taken it. Additionally, agents are required to pass a firearms requalification every 4 months when they are posted domestically and once a year if posted overseas. It is the agents’ and supervisors’ responsibility to keep track of when their next requalification is due. According to DSTC officials, when agents are posted overseas at certain posts where firearms training is restricted, they often fall behind on their requalification because this can be completed only at a limited number of facilities. 
As a result, according to Diplomatic Security officials, some personnel fail to maintain weapons qualification, especially if they have been overseas for a number of years. DSTC has increased its use of distributed learning to enhance training of its workforce, but it does not have a way to keep records of participation or performance of personnel who take training through distributed learning. For example, DSTC shares interactive online content on Microsoft Office SharePoint for personnel to use, but according to DSTC officials, SharePoint does not have a tracking mechanism to see who has accessed the content. In another example, DSTC provides distributed learning on OC Spray (also known as pepper spray) that is required every year. However, DSTC cannot say for certain if its personnel have accessed the training and does not have the systems in place to track distributed learning efforts. DSTC is working to develop the ability to ensure that personnel complete all required training and to keep track of who completes DSTC training through distributed learning. DSTC officials stated that their current suite of software systems does not include the capabilities needed to track all their training efficiently and effectively, in particular training delivered through distributed learning. As noted above, DSTC has begun discussions with FSI about the possibility of using FSI’s learning management system or procuring its own system to help DSTC improve its ability to track training. As of May 2011, it appears that some of DSTC’s tracking and evaluation needs may be met through FSI’s learning management system. DSTC is in the process of working with FSI to determine how to meet these needs. DSTC faces several challenges that affect its operations. In particular, DSTC is faced with training Diplomatic Security personnel to meet their new roles and responsibilities in Iraq as the U.S. military transfers to State many of its protective and security functions for the U.S. 
diplomatic presence. In addition to this expanded training mission, State has proposed a fivefold increase in the amount of training DSTC provides to non-Diplomatic Security personnel. At the same time, many of DSTC’s training facilities pose additional challenges. DSTC lacks a consolidated training facility of its own and therefore uses 16 different leased, rented, or borrowed facilities at which DSTC’s training needs are not the priority. Moreover, several of the facilities do not meet DSTC’s training needs, are in need of refurbishment, or both. According to Diplomatic Security officials, this situation has proven inefficient; it has lengthened training times and likely increased costs. To meet some of its current needs, in 2007 DSTC developed an Interim Training Facility, and in 2009 State allocated funds from the American Recovery and Reinvestment Act and other acts to begin the process of building a consolidated training facility. State is in the process of identifying a suitable location for the facility. With the planned withdrawal of U.S. military forces from Iraq in December 2011, Diplomatic Security is expected to assume full responsibility for ensuring the safety and security of the U.S. civilian presence. As part of its new responsibilities, Diplomatic Security plans to add critical support services that the U.S. military currently provides and in which Diplomatic Security has had little or no experience, including downed aircraft recovery, explosive ordnance disposal, route clearance, and rocket and mortar countermeasures, among others. Consequently, Diplomatic Security is leveraging Department of Defense expertise and equipment to build the capabilities and capacity necessary to undertake its new missions. For example, the Department of Defense is assisting Diplomatic Security with the operation of a sense-and-warn system to detect and warn of incoming artillery and mortar fire.
As a result of its increased security responsibilities, Diplomatic Security anticipates substantial use of contractors to provide many of these specialized services. Nevertheless, Diplomatic Security personnel will still need training in order to properly manage and oversee those contractors and to perform those services for which contractors are not being hired. DSTC noted that it is following events in Iraq, seeking feedback from Embassy Baghdad, and evaluating and updating its training programs to ensure they remain relevant to the needs of post personnel and conditions on the ground. To identify training needs, DSTC is collaborating with multiple offices on various contingency plans. DSTC is a member of the Diplomatic Security Iraq Transition Working Group. The purpose of this working group is to identify and analyze the structural, logistical, personnel, and training impacts of the transition on Diplomatic Security and the Regional Security Office in Baghdad as U.S. military forces draw down in Iraq. Additionally, DSTC is an active participant in the Contingency Operations Working Group, whose purpose is to improve collaboration within Diplomatic Security to support RSO operations in Iraq, Afghanistan, Pakistan, Yemen, and Sudan. DSTC also is a member of the Iraq Policy and Operations Group, chaired by State’s Bureau of Near Eastern Affairs, and the Iraq Training Course Advisory Group, chaired by FSI. DSTC is developing training plans to address various contingencies arising from anticipated Diplomatic Security personnel increases in Iraq and introduction of new equipment. Regarding personnel increases, DSTC is identifying resources and planning to train additional security personnel to meet Embassy Baghdad’s goal of filling 84 Security Protective Specialist positions and 25 new special agent positions in Iraq. 
High-threat courses are also being added to accommodate additional Diplomatic Security personnel being assigned to Iraq and other high-threat locations. For example, four more HTT courses were added to the DSTC training schedule, for a total of 13 course offerings in fiscal year 2011, compared with 9 in fiscal year 2010. According to DSTC, it is endeavoring to meet the need for new capabilities and equipment. DSTC, in coordination with the Diplomatic Security Mine-Resistant Ambush Protected (MRAP) armored vehicles working group, is developing ways to integrate MRAP training into Diplomatic Security courses, and as of March 2011 it had completed the design and development of a training course. This effort includes acquiring an MRAP egress trainer at the ITF in West Virginia and one at the U.S. embassy in Baghdad. To address expanding RSO air operations, DSTC acquired UH-1 and CH-46 nonflyable helicopter airframes from Cherry Point Marine Corps Air Station in order to improve air operations training. An additional helicopter airframe, a CH-53, is also being acquired from the same location. DSTC also obtained protective vests and helmets for FACT students to better accustom them to working conditions on the ground. Other HTT additions will include personnel recovery, tactical communications, and tactical operations command training. DSTC is working closely with the Iraq Training Course Advisory Group to develop a new Iraq predeployment immersion training course for civilian employees, as well as special agents, which will combine both security and operational exercises. According to Diplomatic Security officials, this training will likely increase the time needed to get trained Diplomatic Security personnel into the field.
Despite these efforts, Diplomatic Security noted that the locations, personnel numbers, and resources it will require in Iraq are still being finalized through the various transition working groups mentioned above, as well as by Embassy Baghdad and U.S. Forces-Iraq. However, according to State’s Inspector General, Diplomatic Security does not have the funding, personnel, experience, equipment, or training to replicate the U.S. military’s security mission in Iraq. Similar concerns were raised by the Commission on Wartime Contracting and in a majority report issued by the Senate Foreign Relations Committee. Diplomatic Security acknowledged that it is not designed to assume the military’s mission in Iraq and will have to rely on its own resources and the assistance of the host country to protect the U.S. mission in the absence of the funding, personnel, equipment, and protection formerly provided by the U.S. military. Furthermore, with clear deadlines in place for the U.S. military’s departure from Iraq, delays in finalizing State’s operations in Iraq could affect DSTC’s ability to develop and deliver any additional required training. In addition to the resource demands placed on DSTC by the pending drawdown of U.S. military forces in Iraq, DSTC has seen a significant increase in the number of U.S. personnel to whom it provides training, especially high-threat training such as FACT, SNOE, and HTT (see fig. 4). Most notable is the increase in the number of non-Diplomatic Security personnel to whom Diplomatic Security must provide training, since both FACT and SNOE are designed for nonagents. For example, 971 U.S. personnel took high-threat training in fiscal year 2006; that number more than doubled to 2,132 in fiscal year 2010. In addition to the significant increase in students, State has levied additional training requirements on DSTC that may further strain DSTC’s resources.
State’s 2010 Quadrennial Diplomacy and Development Review (QDDR) stated that all personnel at high-threat posts, as well as those at critical-threat posts, will now receive FACT training. According to Diplomatic Security officials, this change in policy would increase the number of posts for which FACT is required from 23 to 178, increasing the number of students taking FACT each year from 2,132 in fiscal year 2010 to over 10,000. DSTC officials noted that they lack the capacity to handle so many students and that current FACT classes are already filled to DSTC’s capacity. DSTC would need to locate or build additional driving tracks, firearms ranges, and explosives ranges, as well as obtain instructors and other staff to support such a dramatic increase in students. At a cost of almost $4,000 per student, and not including the cost of developing additional facilities, this requirement could cost government agencies over $30 million. The QDDR did not identify additional resources or facilities to support this decision. According to Diplomatic Security officials, State has not completed an action plan or established time frames to carry out the QDDR recommendation. Given these difficulties, Diplomatic Security officials noted that they did not see how the new requirement could be implemented. The Diplomatic Security Training Directorate’s three offices, including DSTC, use 16 facilities to accomplish their training missions (see app. VII), an arrangement DSTC officials believe is inefficient and more costly than a consolidated training facility would be. For example, DSTC maintains a fleet of vehicles to transport students from one training facility to another. In 2009, DSTC officials estimated that students spent 1 week of the then 8-week HTT course in transit.
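The cost and transit figures cited in this section can be checked with simple arithmetic. The sketch below uses only numbers stated in the report; the per-student cost is approximate:

```python
# Reproducing two figures cited in this section with simple arithmetic.
# All inputs come from the report's text; the per-student cost is approximate.

# Estimated added cost of the QDDR's expanded FACT requirement:
fact_students_fy2010 = 2_132      # students taking FACT in fiscal year 2010
fact_students_projected = 10_000  # "over 10,000" under the QDDR policy
cost_per_student = 4_000          # "almost $4,000 per student"

additional_students = fact_students_projected - fact_students_fy2010  # 7,868
added_cost = additional_students * cost_per_student
print(f"Added training cost: ${added_cost:,}")  # -> $31,472,000, i.e., "over $30 million"

# Share of the then 8-week HTT course spent in transit (2009 estimate):
print(f"HTT time in transit: {1 / 8:.1%}")  # -> 12.5%
```

The roughly 7,900 additional students implied here also matches the "additional 8,000 students" figure in the report's conclusions.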
According to DSTC officials, until recently the Training Directorate used four additional facilities, including three other military bases, but military officials at those bases decided that they could no longer accommodate DSTC and still meet their own training needs. This forced DSTC to find or make use of alternative training sites. Diplomatic Security leases, rents, or borrows all the facilities it uses, and the number of facilities in use at any given time and how they are used will vary based on training requirements and facility availability. For example, although Marine Corps Base Quantico is primarily used for firearms training, Diplomatic Security also uses its ranges for land navigation and its mock villages for scenario training with nonlethal ammunition. According to DSTC officials, because Diplomatic Security does not own the facilities it uses (or the land on which they are built, in the case of its ITF), its access to some facilities may be constrained by the facility owners. For example, Diplomatic Security uses the firearms ranges at Marine Corps Base Quantico to train with heavier weapons that none of its other facilities can accommodate (see fig. 5). However, according to Diplomatic Security officials, the Marines occasionally force Diplomatic Security to change its training schedule, sometimes with minimal notice, which increases costs and makes it difficult for DSTC staff to meet training objectives within the time available. DSTC noted, however, that the Marines work with them to minimize the disruptions to Diplomatic Security training at Marine Corps Base Quantico. Several of the leased facilities, notably the State Annex (SA) buildings, do not meet DSTC’s needs. For example, SA-7, in Springfield, Virginia, was originally leased commercially in the 1970s when, according to Diplomatic Security officials, Diplomatic Security had fewer than 500 special agents, less than one-third of the approximately 1,900 it has now. 
Both SA-7 and SA-31 are overcrowded and need various repairs, according to Diplomatic Security officials, in part because of disputes between Diplomatic Security and its lessor over which party is responsible for structural repairs such as fixing ceiling leaks, water pipes, and ventilation systems (see fig. 6 for pictures of SA-7). DSTC’s main firearms ranges are located in these buildings, but according to DSTC officials, the ranges are small and have some unusable firing lanes (see fig. 6). Because of the limitations of its facilities, Diplomatic Security has had to improvise with makeshift solutions to provide some types of training, for example, placing tape on the floors of its garage at SA-11 to simulate walls for room-entry training (see fig. 7). DSTC officials commented that this was not the most effective way to conduct training. To help meet the training demands of its growing mission, DSTC has identified alternate sites as backup training locations and used them in the past year when other facilities could not be used to meet training requirements. For example, the HTT course used a paintball park in 2010 when Marine Corps Base Quantico could not accommodate DSTC’s final practical exercise. As noted below, the increased capability at the ITF has allowed Diplomatic Security to consolidate some functions and reduce, but not eliminate, the need for other facilities. In April 2011, Diplomatic Security officials stated that DSTC had begun firearms training and requalifications at the Federal Law Enforcement Training Center’s Cheltenham, Maryland, facility. Diplomatic Security now has access to the firing ranges at Cheltenham to conduct agents’ firearms requalifications, as well as to supporting office, classroom, and storage space, allowing DSTC to use the small SA-7 firing range as a backup range.
Recognizing that its existing facilities were inadequate, in 2007, according to DSTC officials, Diplomatic Security signed a 5-year contract with one of its lessors, Bill Scott Raceway, to fence off 12.5 acres of land and build a modular Interim Training Facility in Summit Point, West Virginia, for approximately $10 million (see fig. 8). The facility includes a number of training features that Diplomatic Security needs, including a gymnasium with mat rooms, a two-story indoor tactical maze, an indoor firing range, a video-based firearms simulator, and a mock urban training area. As the ITF is located on Bill Scott Raceway land, it is colocated with the facilities Diplomatic Security leases to provide driver training, some small arms training, bunker training, and small explosives demonstrations (see fig. 9). Diplomatic Security acknowledged that the ITF is helping it meet several of its training needs, including most defensive tactics training and scenario training with nonlethal ammunition. Nevertheless, Diplomatic Security officials noted that the ITF is only a stopgap solution with inherent limitations and cannot meet a number of Diplomatic Security’s training needs, such as the firing of heavier weapons, the use of more powerful explosives to train agents in incident management, and the integrated tactical use of driving and firearms training in a mock urban environment. The ITF also lacks space for Diplomatic Security to train its personnel for many of the additional missions that they are expected to take over from the U.S. military in Iraq, such as land navigation and downed aircraft recovery. In addition, the ITF lacks many of the support services that a training academy might otherwise have, such as campus housing; adequate classroom, office, and dining areas; and storage areas for the explosives used in training.
After years of unsuccessful funding efforts, in 2009 State allocated $118.1 million in American Recovery and Reinvestment Act and Worldwide Security and Protection funds to acquire a site for, design, and build the Foreign Affairs Security Training Center (FASTC), a consolidated training facility (see table 3). State began the search for a dedicated training facility in 1993 and, prior to developing the Interim Training Facility, revisited the need in 1998 following the embassy bombings in Africa. In 2004, State received funding to develop the Center for Antiterrorism and Security Training (CAST). In 2006, when the plans for locating such a center at Aberdeen Proving Ground were not successful, the development of CAST was abandoned and Diplomatic Security sought guidance from State’s legal office. According to Diplomatic Security officials, State’s legal office determined that the legislative language did not indicate a specific site. Therefore, according to officials, given Diplomatic Security’s critical need for an antiterrorism training center, the funds could be spent on a temporary facility. Consequently, the remaining funds were used to expand Diplomatic Security’s use of the Bill Scott Raceway facilities and develop the ITF. State also informed us that congressional staff were briefed regarding the use of funds appropriated for CAST. In June 2009, the U.S. General Services Administration announced that it had initiated the search on behalf of State for an appropriate space to build the FASTC, thereby beginning development of the consolidated facility. According to State and General Services Administration officials, State obligated approximately $10.6 million of the American Recovery and Reinvestment Act funds for architectural planning and project management. State obligated the remaining Recovery Act funds by transferring them to the General Services Administration to continue the identification and development of the FASTC.
State also allocated about $48.1 million of fiscal year 2009 and fiscal year 2009 supplemental Worldwide Security and Protection funds and an additional $17.6 million of fiscal year 2010 Worldwide Security and Protection funds, all of which were obligated to the General Services Administration to build the FASTC. Subsequent phases of the project are expected to be wholly funded through Worldwide Security and Protection funds. Diplomatic Security expects future costs to be approximately $30 million annually. Diplomatic Security received no additional funds in the fiscal year 2011 budget, and the administration did not include additional funds in its fiscal year 2012 budget request; nevertheless, State and the General Services Administration continued development of the FASTC. After going through a formalized process of identifying a location and working with the General Services Administration, State identified a potential location for the FASTC in Queen Anne’s County, Maryland. State had planned to begin building by early 2011; however, on June 28, 2010, State and the General Services Administration determined that the site would no longer be considered, because of local concerns regarding environmental and other land use issues that could delay the project for several years. State subsequently expanded its FASTC criteria, most notably increasing the acceptable distance from Washington, D.C., and—because of a presidential memorandum issued in June 2010 requesting that agencies try to use existing federal land instead of purchasing new property—focusing the search on publicly owned properties. The General Services Administration evaluated approximately 40 sites against the revised site criteria, which include four steps to determine the viability of the site for placement of the FASTC project. Step 1 evaluates the site regarding public ownership, size, the ability to support 24/7 operations, climate conditions, and proximity to Diplomatic Security headquarters. 
A site that meets Step 1 criteria continues on to Step 2, which evaluates the site’s developable area, compatible surroundings, ease of acquisition, life support and community support, and suitable climate, and includes performing initial test fits of the site. A site that meets Step 2 criteria will move on to Step 3, in which a feasibility study is conducted on the site. The feasibility study takes into consideration the mission, program requirements, phasing, risk, cost, procurement, environmental assessment, and utilities. Step 4 of the criteria is to conduct an Environmental Impact Statement under the National Environmental Policy Act of 1969 for the preferred site. Two of the evaluated sites met the Step 1 criteria. One site also met Step 2 criteria, and moved on to Step 3 in which a feasibility study was completed. The second site under consideration is currently being evaluated under Step 2 criteria. If the site meets Step 2 criteria, the process will continue to Step 3 and a feasibility study will be conducted. Once both sites have been assessed, senior department officials will make a recommendation on which site will proceed to Step 4. Environmental studies will be conducted on the selected site, and the master plan and construction documents will be completed. Step 4 environmental studies are estimated to take 18 to 24 months to complete. Construction could begin, pending funding availability, after the studies and construction documents are complete. State officials noted that in an ideal situation they could begin building the FASTC by the end of 2013; however, they said it was difficult to know what environmental obstacles, if any, they might encounter and how those obstacles might affect the FASTC’s development. State expects the FASTC will include state-of-the-art classrooms, simulation and practical applications laboratories, administrative support offices, and a fitness center to meet soft skill training needs. 
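The four-step screening described above works as a sequential gate: a site advances only if it meets every criterion of its current step. The sketch below illustrates that gating; the criteria labels are paraphrased from the report, and the candidate sites and their pass/fail flags are invented for illustration, not drawn from the General Services Administration's actual evaluations:

```python
# Sketch of the sequential four-step site screening (criteria paraphrased
# from the report; candidate sites and their flags are hypothetical).
STEPS = {
    1: ["publicly_owned", "sufficient_size", "supports_24x7_ops",
        "suitable_climate", "proximity_to_headquarters"],
    2: ["developable_area", "compatible_surroundings", "ease_of_acquisition",
        "life_and_community_support", "initial_test_fit"],
    3: ["feasibility_study"],               # mission, phasing, risk, cost, utilities
    4: ["environmental_impact_statement"],  # under NEPA, for the preferred site
}

def highest_step_met(site):
    """Return the highest consecutive step whose criteria the site all meets."""
    passed = 0
    for step in sorted(STEPS):
        if all(site.get(c, False) for c in STEPS[step]):
            passed = step
        else:
            break  # failing any criterion stops the site's advancement
    return passed

# Hypothetical candidates: one clears Steps 1-3, one clears only Step 1.
site_x = dict.fromkeys(STEPS[1] + STEPS[2] + STEPS[3], True)
site_y = dict.fromkeys(STEPS[1], True)
print(highest_step_met(site_x), highest_step_met(site_y))  # -> 3 1
```

This mirrors the outcome described in the report, where roughly 40 sites were evaluated, two met Step 1, and only one had advanced through the Step 3 feasibility study.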
State plans to construct a series of indoor and outdoor weapons firing ranges, an explosives demonstration area, several mock urban environments designed to simulate a variety of urban scenarios, and driving tracks to meet its hard skill training needs. State also expects to provide various support elements, including dormitories, a dining facility, physical fitness facilities (including an athletic field and track, and bike and jogging trails), and on-site medical and fire emergency services. State expects to build, enhance, or rely on existing infrastructure, such as power, potable water, wastewater treatment, and telecommunications capabilities. U.S. diplomats and other personnel at overseas diplomatic posts face a growing number of threats, from global terrorism to cyberattacks, and, in some countries, constant dangers from the violence of war or civil unrest. To counter these growing threats, State has expanded the mission of its Bureau of Diplomatic Security, with a corresponding rapid increase in its staffing. As a result, DSTC has had to meet the challenge of training more personnel to perform additional duties while still getting Diplomatic Security’s agents, engineers, technicians, and other staff—as well as a growing number of personnel outside of its workforce—into the field, where they are needed. DSTC has largely met this challenge. Certain issues, however, constrain the effectiveness of DSTC’s systems. First, DSTC is shifting more of its training online to better serve its student population, but it does not have the systems needed to evaluate the training’s effectiveness, despite its own standards requiring such evaluation. Without this feedback, DSTC will be less able to ensure the effectiveness of and improve the training it provides. Second, DSTC’s systems do not accurately and adequately track the use of some of its training. For example, DSTC cannot identify who has not taken required training.
Consequently, DSTC cannot be assured that all personnel are adequately trained to counter threats to U.S. personnel, information, and property. DSTC also faces a number of challenges as a result of an increasing number of training missions, particularly in Iraq, and inadequate training facilities. These challenges should be addressed strategically; however, State’s recent effort to conduct a strategic review, the QDDR, added to DSTC’s training missions. Specifically, the QDDR levied a requirement on Diplomatic Security to quintuple its student body by providing FACT training to an additional 8,000 students without addressing the necessary resources. Without an action plan and associated time frames to meet the new requirement, it is unclear to what extent State can accomplish its training mission and ensure the security preparedness of civilian personnel assigned overseas. We recommend that the Secretary of State
1. develop or improve the process to obtain participant evaluations for all DSTC required training, including distributed learning efforts;
2. develop or improve the process to track individual DSTC training requirements and completion of DSTC training; and
3. develop an action plan and associated time frames needed to carry out the QDDR recommendation to increase the number of posts at which FACT is required.
We provided a draft of this report to the Department of State. State provided written comments, which are reproduced in appendix VIII. State agreed with all three recommendations and noted several steps it is taking or planning to take to address them. In particular, DSTC noted that it will seek an electronic survey tool to enhance its evaluation efforts and is exploring ways to modify existing State computer systems to enhance its ability to track training. In addition, Diplomatic Security is working with other State offices to set parameters for expanding FACT training to additional personnel.
State also noted that existing Diplomatic Security training facilities and instructor resources are at maximum capacity, and it emphasized DSTC’s need for a consolidated training facility to meet its expanded training mission. We also provided relevant portions of the report to FLETA and the General Services Administration for technical comments. We incorporated technical comments from both agencies and State throughout the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested Members of Congress, the Secretary of State, and relevant agency heads. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. We (1) evaluated how Diplomatic Security ensures the quality and appropriateness of its training, (2) examined Diplomatic Security’s training strategies for its personnel and other U.S. government personnel and how Diplomatic Security ensures that training requirements are being met, and (3) assessed the challenges that Diplomatic Security faces in meeting its training mission. To address these objectives and establish criteria, we reviewed past GAO reports on both Diplomatic Security and training, Office of Personnel Management guidance, State and other legislative and regulatory guidance and policy, and the education standards and processes of established educational organizations.
To understand the accreditation process to which Diplomatic Security was subject, we obtained information from a key official from Federal Law Enforcement Training Accreditation (FLETA). We also reviewed and analyzed data and documentation related to Diplomatic Security-provided training efforts, such as standard operating procedure, planning, performance, course development, course evaluation, accreditation, and career development documents; information and data on recent Diplomatic Security Training Center (DSTC)- and other Diplomatic Security-provided course offerings; and overall funding for training from 2006 to 2011. To assess the reliability of registrar data detailing the increase in students taking high-threat courses, Diplomatic Security training budget data, and Foreign Affairs Security Training Center (FASTC) funding data, we discussed with Diplomatic Security officials the quality of the data and how they were collected, and we corroborated the data by comparing them with data supplied by, or obtained in interviews with, other officials. We determined the data were sufficiently reliable for the purposes of this report. We interviewed officials and instructors at Diplomatic Security headquarters, several training facilities, and several overseas posts. Among others, we interviewed DSTC officials, including DSTC instructors and contractors at several training facilities and officials from all of DSTC’s divisions and branches (see app. II). We interviewed other Diplomatic Security Training Directorate officials, including officials from the Offices of Mobile Security Deployment and Antiterrorism Assistance. We also interviewed officials from the Diplomatic Courier Service.
We asked a standard set of questions through in-person and videoconference interviews with Diplomatic Security agents in Afghanistan, Iraq, Pakistan, and the Washington, D.C., field office, as well as engineers and technicians in Germany, South Africa, and Florida, to obtain feedback from supervisors on the quality of their staff’s training and any unmet training needs. These posts and offices represent a judgmental sample, selected for their regional coverage and relatively large numbers of personnel compared with other posts and offices. We observed a wide variety of both classroom-based and exercise-based training at six Diplomatic Security training facilities in Virginia and West Virginia and viewed examples of other types of DSTC-provided learning. In addition, we interviewed officials from State’s Foreign Service Institute (FSI) to discuss their course registration and learning management systems, as well as how they coordinate with DSTC, and officials from State’s Career Development and Assignments office about how it tracks training. We interviewed Diplomatic Security officials from a variety of offices concerning the transition in Iraq, the results of the Quadrennial Diplomacy and Development Review (QDDR), and how the purchase of new security technology is coordinated with DSTC. We also interviewed officials from State headquarters and the General Services Administration to discuss the status of the project to develop a consolidated training facility. We evaluated the information we received from both documentation and interviews against the identified criteria. Our review focused on the efforts of the Training Directorate’s Office of Training and Performance Standards and, to a lesser extent, the Training Directorate’s Office of Mobile Security Deployment and other offices within Diplomatic Security, such as the Diplomatic Courier Service, which has called upon the expertise of DSTC to help develop its own training.
Because the Training Directorate’s Office of Antiterrorism Assistance provides training to non-U.S. personnel, it fell outside the scope of our work. In addition, because we recently reviewed training provided by FSI, this report does not include an assessment of the training that Diplomatic Security personnel receive through FSI. We conducted this performance audit from June 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Office of Training and Performance Standards, also known as DSTC, is the primary provider of Diplomatic Security’s training. To carry out its mission, DSTC is organized into four divisions, each with three or four branches (see fig. 10). The Security and Law Enforcement Training Division consists of three branches: Domestic Training, Overseas Training, and Special Skills. The division is primarily responsible for training Diplomatic Security’s agents, investigators, and Security Protective Specialists. The division is also responsible for providing personal security training to Diplomatic Security and non-Diplomatic Security personnel posted to high-threat environments, including the High Threat Tactical (HTT), Security for Non-traditional Operating Environment (SNOE), and Foreign Affairs Counter Threat (FACT) courses. The Security Engineering and Computer Security Division consists of three branches: Security Engineering, Technical Training, and Information Assurance. The division is primarily responsible for training Diplomatic Security’s security engineers and technicians, as well as providing information technology security awareness training to a number of U.S.
departments and agencies such as the National Archives and Records Administration and the Department of Homeland Security. Instructional Systems Management ensures that the Diplomatic Security Training Center meets independent Federal Law Enforcement Training Accreditation (FLETA) standards by providing course needs analysis and course design and development for both the Security and Law Enforcement Training Division and Security Engineering and Computer Security Division, creating and posting learning tools, obtaining and analyzing student feedback, and providing instructor training. In addition, Instructional Systems Management assists other offices within Diplomatic Security, such as the Diplomatic Courier Service, with non-DSTC course development and learning tools, as needed. Administrative and Training Support Services manages the DSTC registrar and student records, coordinates with FSI, manages external training, and provides a variety of other support functions such as managing DSTC's budget and maintaining training facilities and equipment. To ensure the quality and appropriateness of its training, Diplomatic Security relies primarily on the standards of the Federal Law Enforcement Training Accreditation process. Generally, the process involves the five steps summarized below (see fig. 11). 1. Application: An agency can apply for accreditation of a program, an academy, or both. However, a separate application must be submitted for each program and academy. In most cases, agencies first submit applications for their basic agent training and instructor training. Once those have been accredited, the agency submits an application to have its academy accredited. 2. Agency preparation: The agency conducts a self-assessment and gap analysis to identify which of the FLETA standards it does not meet; identifies corrective steps, if necessary; and reports its results to FLETA's Office of Accreditation. 3. FLETA assessment: FLETA carries out its assessment.
The assessment teams visit training locations, review files documenting the agency’s compliance with standards, observe training, and interview administrators and trainers. If deficiencies are found during the assessment process, the agency must prepare a corrective action plan. The assessment team prepares the final report of the FLETA assessment, which is submitted to the FLETA Board Review Committee. 4. FLETA accreditation: A FLETA Board Review Committee reviews the findings before FLETA awards accreditation to the submitted course, academy, or both. Afterward, the agency provides annual updates to FLETA in order to maintain the accreditation. The updates include information that would modify the previous submissions to ensure continued compliance with current FLETA standards. 5. Reaccreditation: Reaccreditation is a fresh look at a course or academy to ensure continued compliance with the FLETA standards. Reaccreditation occurs every 3 years. The course or academy submits supporting evidence for each year since the previous accreditation. FLETA thoroughly assesses the agency’s program or academy using the FLETA guidelines and professional training standards for program and academy accreditation. For a program to receive accreditation, an agency must demonstrate that the program’s policies and procedures, facilities, and resources comply with applicable FLETA standards. In general, the academy meets the same FLETA standards as the programs, but the standards are applied to the organization as a whole. As of 2010, agencies applying for accreditation must provide evidence that at least five other programs, in addition to the basic agent training and instructor development training, comply with FLETA standards. FLETA standards are designed to describe what must be accomplished; however, it is up to each agency to determine how it will meet the standards. 
FLETA has one set of academy standards and four sections of program standards: (1) program administration, (2) training staff, (3) training development, and (4) training delivery. Each set or section of standards has 7 to 23 individual standards. For example, one academy standard requires that the academy establish a vision, mission, goals, and objectives, while one training staff standard requires that new instructors be monitored and mentored. A FLETA Assessment Team reviews all documented administrative controls and supporting evidence submitted, including academy policies, procedures, and operations, and the team also conducts interviews with key personnel. To further support documentation, site visits are conducted at the agency's training facilities. Live training scenarios are also observed. DSTC has gone through the accreditation process for its basic special agent and instructor development programs and for its academy. In 2005, DSTC opted to have the academy accredited first, an option no longer available under current FLETA standards. DSTC then sought accreditation for two programs, the basic special agent course and the instructor development course, which were accredited in 2006. In 2008, DSTC opted to have those programs and the academy go through the accreditation process simultaneously. (See table 4.) DSTC is currently undergoing reaccreditation for its programs and academy and expects that this process will be completed in 2011. Special agents are the lead operational employees of Diplomatic Security. In general, when special agents are overseas, they manage post security requirements; when they serve domestically, they conduct investigations and provide protective details. New special agents follow an entry-level career training path designed to equip them to fulfill the basic responsibilities of the job. For example, after the 3-week orientation provided by FSI, special agents go through the basic special agent course.
The course includes about 12 weeks at the Federal Law Enforcement Training Center, followed by about 12 weeks of additional DSTC training. Upon assignment to an overseas post, special agents must take the basic Regional Security Officer course, the basic field firearms officer course, and the security overseas seminar. If special agents are posted to a designated high-threat post, they must also take the high-threat tactical training course. In addition, at all career levels, depending on the post, special agents may have to take language training. Once special agents are in a supervisory role, both midlevel and senior-level agents have additional required training. For example, they are required to attend FSI-provided leadership and management training. If agents are posted to a designated high-threat post at this level, they must take the HTT course if they have not taken HTT within the previous 5 years. Special agents are also required to take Regional Security Officer in-service training every 3 years to keep up to date on current policies and procedures. In addition to following the standard special agent career path, special agents have the option of specializing in different areas, for example, in providing security protection and training or in focusing on investigations into visa and passport fraud, human trafficking, smuggling, and internal malfeasance. Each specialty has its own required training. Those opting to specialize in security protection and training can apply to join the Mobile Security Deployment Division (MSD) for a 3-year tour. When they become MSD agents, special agents receive 6 months of additional training. Similarly, those who opt to focus on investigations, becoming Assistant Regional Security Officers-Investigators, must also take additional training. Security engineers are responsible for the technical and informational security programs at diplomatic and consular posts overseas.
While both security engineering officers (SEOs) and Security Technical Specialists (STS) share similar tasks at posts, SEOs are expected to be more engineering and design oriented, while STS are expected to be hands-on technicians. To become SEOs, personnel must have specific types of engineering or technical degrees. SEO training was recently restructured. Following the 3-week FSI-provided orientation, SEOs go through technical training and SEO fundamentals courses for about 107 days while assigned to a domestic office for 12 to 24 months. SEOs also go through technical surveillance countermeasures training, in addition to administrative training. If assigned to a technical security overseas position, the SEO then takes the Overseas SEO training course, which takes 25 days. During training, SEOs (if budget and resources are available) can complete a 3- or 4-week temporary duty training program at an Engineer Service Center or Engineer Service Office to get practical on-the-job experience. In addition, at all career levels, depending on the post, SEOs may have to take language training. Once SEOs achieve a supervisory role (both midlevel and senior-level positions), they are required to take additional FSI-provided leadership and management courses. SEOs at the midlevel are also required to take additional in-service training, which may include a focus on computer network and operating systems, access control systems, investigation skills, and video surveillance systems, among others. SEOs are required to take in-service training every 2 to 3 years, depending on the needs of the post and available resources. Security Technical Specialists are assigned throughout the world to develop, implement, and maintain technical security programs at posts overseas. As noted above, despite the different career paths, in practice their work is often similar to that of the SEOs. STS generally have a technical background.
Following the 3-week FSI-provided orientation, STS are required to take technical training and STS fundamentals at DSTC. During training, STS (if budget and resources are available) can complete a 3- or 4-week temporary duty training program at an Engineer Service Center or Engineer Service Office to get practical on-the-job experience. STS also have to take FSI-provided administrative training. In addition, at all career levels, depending on the post, STS may have to take language training. Once STS achieve midlevel positions, they have additional required training. STS are required to take FSI-provided leadership and management training. STS are also required to take various in-service training that includes video surveillance, access control systems, and explosives detection, among others. This is similar to the in-service training that SEOs take. The STS career path, however, does not have senior-level positions, so STS do not take senior-level administrative, leadership, and management training. Couriers ensure the secure movement of classified U.S. government materials across international borders. The Diplomatic Courier Service is a small organization within Diplomatic Security whose members travel constantly; Diplomatic Courier Service officials noted that they had unique training challenges, particularly with regard to the travel logistics of attending training, and have taken responsibility for training their own personnel. Couriers first go through a 3-week orientation to the State Department that is identical to the FSI-provided orientation but is provided by the Diplomatic Courier Service; the new hires then undergo 3 weeks of functional introductory courier training. This is the only required course for couriers. However, the couriers also have a midlevel courier manager training course that prepares couriers for the manager position, focusing on supervisory and managerial issues.
In addition, the Diplomatic Courier Service is developing its own in-service training and hub training courses. The in-service course will act as a refresher to the initial training, and the hub training will be a 1-day module on how overseas courier hubs function. No additional training is required for senior-level couriers. The Diplomatic Security Training Directorate's three offices, including DSTC, currently use 16 facilities to accomplish their training missions (see table 5). In addition to the contact named above, Anthony Moran, Assistant Director; Thomas Costa; Anh Nguyen; David Dayton; and Daniel Elbert provided significant contributions to the work. Martin de Alteriis, Miriam Carroll Fenton, Cheron Green, Lisa Helmer, Grace Lui, and Jamilah Moon provided technical assistance and other support.
The Department of State's (State) Bureau of Diplomatic Security (Diplomatic Security) protects people, information, and property at over 400 locations worldwide and has experienced a large growth in its budget and personnel over the last decade. Diplomatic Security trains its workforce and others to address a variety of threats, including crime, espionage, visa and passport fraud, technological intrusions, political violence, and terrorism. To meet its training needs, Diplomatic Security relies primarily on its Diplomatic Security Training Center (DSTC). GAO was asked to examine (1) how Diplomatic Security ensures the quality and appropriateness of its training, (2) the extent to which Diplomatic Security ensures that training requirements are being met, and (3) any challenges that Diplomatic Security faces in carrying out its training mission. GAO examined compliance with accreditation processes; analyzed data and documentation related to the agency's training efforts; and interviewed officials in Washington, D.C., and five overseas posts. To ensure the quality and appropriateness of its training, Diplomatic Security primarily adheres to Federal Law Enforcement Training Accreditation (FLETA) standards, along with other standards. Diplomatic Security incorporated FLETA standards into its standard operating procedures, using a course design framework tailored for DSTC. To meet standards, DSTC also integrates both formal and informal feedback from evaluations and other sources to improve its courses. However, GAO found DSTC's systems do not have the capability to obtain feedback for some required training, including distributed learning efforts (interactive online course content). Without feedback, DSTC is less able to ensure the effectiveness of these efforts. Diplomatic Security developed career training paths for its personnel that identify the training required for selected job positions at different career levels. 
It uses various systems to track participation in its training, but DSTC's systems do not have the capability to track whether personnel have completed all required training. DSTC systems also are not designed to track training delivered through distributed learning. Diplomatic Security faces significant challenges to carrying out its training mission. DSTC must train Diplomatic Security personnel to perform new missions in Iraq as they take on many of the protective and security functions previously provided by the U.S. military. DSTC also faces dramatic increases in high-threat training provided to State and non-State personnel, but State does not have an action plan and time frames to manage proposed increases. These expanded training missions constrain DSTC's ability to meet training needs. In addition, many of DSTC's training facilities do not meet its training needs, a situation that hampers efficient and effective operations. To meet some of its needs, in 2007, DSTC developed an Interim Training Facility. In 2009, State allocated funds from the American Recovery and Reinvestment Act and other acts to develop a consolidated training facility; State is in the process of identifying a suitable location. GAO recommends that State enhance DSTC's course evaluation and tracking capabilities. GAO also recommends that State develop an action plan and time frames to address proposed increases in high-threat training. State reviewed a draft of this report and agreed with all of the recommendations.
M. tuberculosis, the bacterium that causes TB, is spread from person to person, usually through coughing, sneezing, or speaking. TB disease occurs when the bacteria actively multiply in the lungs or other sites in the body. If left untreated, a person with TB disease can spread the bacteria to an average of 10 to 15 people each year. Also, without proper treatment, TB can be fatal. Because the bacteria that cause TB are naturally slow-growing, final confirmed diagnosis of TB disease, including a determination of drug resistance, can take from 6 to 16 weeks, according to CDC. This lengthy process, along with other factors, makes diagnosis of TB difficult. (Fig. 1 provides information about the characteristics of TB.) TB disease is treated with a combination of TB medications that must be taken regularly. Individuals who have TB bacteria that are not resistant to drugs can be treated with 6 to 9 months of the most effective medications. Those with TB bacteria that are resistant to at least two of the most effective medications (multidrug-resistant TB) require treatment for 18 to 24 months with other TB medications that are much less effective, usually have more negative side effects, and are more expensive. Nonadherence to the drug regimen can lead to the development of drug-resistant TB, which can be transmitted from a person with active disease to an uninfected person in the same way that non-drug-resistant TB is transmitted. If a person infected with a drug-resistant strain of TB develops TB disease, his or her strain will be drug resistant as well. Because adherence to treatment regimens is essential to prevent TB bacteria from becoming resistant to available medications, individuals diagnosed with TB disease in the United States are typically treated via directly observed therapy. In such therapy, patients take their medications in the presence of a health care provider, from several times a week to every day.
Individuals enrolled in directly observed therapy are more likely to complete their treatment regimens. State and local health departments and federal agencies are to work together to prevent the spread of TB in the United States. In addition to day-to-day care and treatment for patients with TB disease, state and local health departments have the primary responsibility for TB control efforts. Each state health department has a state TB controller who oversees TB prevention and control programs in the local health departments, where in most cases their workers provide care and treatment for TB patients, including directly observed therapy. State and local health departments are to work closely with staff at CDC to alert them to problems as they arise and, if necessary, request CDC assistance with nonadherent individuals with TB. Individuals with or exposed to certain diseases, including TB disease, are also subject to state and federal isolation and quarantine authorities. State and local jurisdictions have the primary legal authority to issue isolation and quarantine orders, and consequently do not regularly involve the federal government when attempting to locate individuals who are or may become nonadherent to their drug regimens. Isolation and quarantine laws vary across states; officials in some states must obtain a court order or establish that a patient is not adhering to medical advice or treatment prior to issuance of an isolation order. Furthermore, states may vary in their enforcement of such orders. However, according to state and federal health officials, the majority of TB patients adhere to treatment recommendations, including remaining in isolation units in hospitals or in isolation at home until they are no longer infectious. 
HHS has largely delegated to CDC the task of preventing the introduction, transmission, and spread of communicable diseases, such as infectious TB, from foreign countries into the United States, including the ability to apprehend, detain, isolate, or conditionally release a person entering the United States believed to be infected with certain communicable diseases. CDC’s overall mission is to protect the health of all Americans through health promotion, disease prevention, and preparedness. CDC’s centers, divisions, and offices also develop and disseminate guidance to state and local health departments on federal recommendations and procedures for disease control and prevention. CDC also provides resources and funding and collaborates with U.S. and Mexican health agencies for TB care and treatment for U.S. or Mexican citizens who cross the U.S.-Mexico border frequently. Within CDC, the Division of Tuberculosis Elimination is responsible for directing TB prevention and control programs in the United States, formulating national TB policies and guidelines, and helping to control TB worldwide. The Division of Tuberculosis Elimination also provides programmatic consultation, technical assistance, outbreak response assistance, and laboratory support to state and local health departments, and provides technical assistance to TB programs in other countries by collaborating with international partners. CDC’s Division of Global Migration and Quarantine (DGMQ) is responsible for working to reduce illness and death from infectious diseases, such as TB, among immigrants, refugees, international travelers, and other mobile populations that cross international borders, as well as for preventing the introduction of infectious diseases into the United States and promoting the health of people living along the U.S. borders. To facilitate this work, DGMQ operates CDC’s 20 quarantine stations at U.S. ports of entry. 
Quarantine station officials are responsible for assessing whether ill persons can enter the country and determining what measures should be taken to prevent the spread of infectious diseases into the United States. Most of the quarantine stations are located in airports and work closely with state and local health departments and CBP officers at nearby or collocated ports of entry. DGMQ trains CBP officers on how to identify and respond to travelers, animals, and cargo that may pose an infectious disease threat. CDC’s Coordinating Office for Terrorism Preparedness and Emergency Response works under the Assistant Secretary for Preparedness and Response in HHS and is responsible for directing and coordinating CDC’s response to public health threats. This office operates the Director’s Emergency Operations Center (DEOC), which collects information about potential public health threats 24 hours a day, 7 days a week, and is the central location for CDC’s public health response activities for specific incidents. The DEOC is responsible for sharing information with, and if necessary, requesting additional resources from HHS through its Secretary’s Operations Center (SOC) during a response to a public health incident. The SOC, managed by HHS’s Office of the Assistant Secretary for Preparedness and Response, is the focal point for synthesis of critical public health and medical information on behalf of the U.S. government. Both the SOC and the DEOC are intended to provide a formal, central point of management and oversight at their respective agencies to enable senior agency officials and subject-matter experts to take advantage of agency resources and capabilities in responding to an incident. DHS is responsible for coordinating with federal, state, local, and private entities to secure the nation, prevent terrorist attacks within the United States, and provide emergency management and planning, among other activities. 
According to statute, DHS is to aid HHS in the enforcement of federal quarantine rules and regulations. The Office of Health Affairs (OHA), which began operations in April 2007, serves as DHS’s principal agent for medical and health matters. It is responsible for managing DHS’s biodefense programs, ensuring the nation’s health preparedness in the event of terrorism or natural disasters, and protecting the health of DHS’s workforce. Also, TSA, CBP, and the Office of Operations Coordination operate within DHS. TSA is responsible for ensuring the security of the national transportation network while ensuring the free movement of people and commerce. TSA has responsibility for safeguarding all modes of transportation, including strengthening the security of airport perimeters and restricted airport areas; screening passengers against terrorist watch lists, such as the No Fly list; and inspecting passengers, baggage, and cargo at over 400 commercial airports nationwide. TSA is tasked with preventing a public health threat on commercial air carriers through its broad authority to protect the transportation system against any threat that could endanger individuals during travel. TSA’s Freedom Center is the primary coordination point for the federal, state, and local agencies dealing with transportation security on a daily basis. A key part of CBP’s mission is to prevent the entry of terrorists into the United States. CBP screens people, conveyances, and goods entering the United States, while facilitating the flow of legitimate trade and travel into and out of the United States. CBP’s mission also includes carrying out traditional border-related responsibilities, including narcotics interdiction, enforcing immigration and customs laws, protecting the nation’s food supply and agriculture industry from pests and diseases, and enforcing trade laws. All travelers requesting to enter the United States, including U.S. citizens, are subject to examination. 
Individuals may be referred for enhanced inspection for a variety of reasons, such as criminal records, inclusion on a national registry for sex offenders, or prior immigration or customs violations, or may be randomly selected. As appropriate, CBP also conducts searches of people, merchandise, and conveyances entering or exiting the United States, to ensure that merchandise may be lawfully imported or exported and duties collected. CBP officers are responsible for conducting inspections to permit admissible individuals to enter the country. In general, U.S. citizens who demonstrate their citizenship are to be admitted, although those citizens believed to be infected with or exposed to TB or other communicable diseases specified by Executive Order may be subject to isolation or quarantine immediately upon admission. Noncitizens seeking entry must establish that they are admissible under U.S. immigration law; those determined to have a communicable disease of public health significance are inadmissible, unless granted a waiver. During the inspection process, CBP officers are to use TECS—CBP’s computerized border screening and inspection system—in addition to other databases to assess admissibility and purpose for entering the country and to corroborate information. Individuals may be admitted or denied entry and returned to the country of origin. In addition, individuals may be detained temporarily pending an admissibility determination, detained for purposes of prosecuting a violation of U.S. law, or turned over to another law enforcement entity. (App. I provides more detailed information about the CBP inspection process.) In addition to electronic alerts available in databases, CBP officers also rely on be-on-the-lookout notices—which are similar to wanted posters, disseminated by CBP’s Office of Field Operations and hung at ports of entry—to identify individuals who pose potential threats attempting to enter the United States. 
The Commissioner’s Situation Room—CBP’s 24-hour, 7-day-a-week center for facilitating communication between CBP headquarters and the field offices—serves as the entry point for reporting of incidents from field offices. CBP also assists CDC quarantine station officials with the distribution of health risk information for the traveling public, such as notices that alert travelers to possible exposure to communicable diseases abroad and offer guidance on how to protect themselves. The DHS Office of Operations Coordination is responsible for monitoring the nation’s security on a daily basis and coordinating activities within DHS and with external entities, such as governors’ offices and law enforcement partners. Within the Office of Operations Coordination, the National Operations Center (NOC) serves as the focal point for these coordination efforts by collecting information about potential homeland security threats 24 hours a day, 7 days a week. The NOC serves as the primary hub for federal emergency and public health preparedness and response by combining and sharing information, communications, and operations coordination pertaining to the prevention of terrorist attacks and domestic emergency management with other federal, state, local, tribal, and nongovernmental emergency operations centers, including TSA’s Freedom Center and CBP’s Commissioner’s Situation Room. In October 2005, HHS and DHS signed a memorandum of understanding that was intended to provide a basis for federal cooperation to enhance the nation’s preparedness to prevent the introduction, transmission, and spread of quarantinable and serious communicable diseases, such as TB, from foreign countries into the United States. According to CBP officials, the memorandum was developed following the 2003 outbreak of severe acute respiratory syndrome (SARS) in order to prepare the departments for circumstances that would need a coordinated response. 
CDC is the designated agency with responsibility for HHS activities supported by the memorandum. CBP, Coast Guard, and Immigration and Customs Enforcement are the designated DHS agencies with responsibility for assisting CDC in the enforcement of isolation and quarantine authorities. Two TB incidents occurred in spring 2007. One involved a U.S. citizen who traveled by commercial airline internationally and subsequently reentered the United States at the Canadian border at the Champlain, New York, land port of entry. The other involved a Mexican citizen who crossed the U.S.-Mexico border multiple times at the El Paso, Texas, land port of entry. In both incidents, according to HHS, the individuals with TB did not follow the medical advice of federal, state, and local public health officials and instead continued to travel. In the incident involving the U.S. citizen, state and local health officials reported that once they determined that the U.S. citizen posed a public health threat, they orally recommended to him that he not travel and reviewed options to restrict his international travel. State and local health officials reported that from May 11 to May 13, they attempted to hand deliver a letter to the individual that emphasized the seriousness of drug-resistant TB and the potential threat he posed to others, and included a recommendation that he postpone his travel. However, according to CDC officials, state and local health officials reported that they were unable to deliver the letter because, unbeknownst to them, the individual had left the United States 2 days earlier than he had previously planned, despite advice not to travel. When federal public health officials became involved in the response, they contacted the individual overseas and made efforts to advise him about seeking treatment and how to return to the United States.
Once CDC notified CBP of the incident, CBP entered an alert in TECS that provided instructions to detain the individual if he was encountered at any port of entry. However, HHS reported that the individual continued with his travel plans against medical advice. For example, when a CDC quarantine officer located the individual abroad and attempted to direct him to treatment in Europe, the individual changed his travel plans again, left his hotel, and did not contact CDC until he returned to the United States. Upon his return, according to HHS, CDC was able to contact him via cell phone and he agreed to undergo treatment for drug-resistant TB. (Fig. 2 provides more details about the incident involving the U.S. citizen and officials' actions.) In the incident involving the Mexican citizen, officials were unable to locate complete information about him in their databases. Despite multiple searches by CBP, he was checked at the border approximately 20 times during April and May 2007 and was able to cross into the United States. According to officials from both agencies, the Mexican citizen did not turn over his visa when his physician initially requested it, which would have allowed CDC and CBP officials to locate information about him. On May 31, approximately a month after state and local health officials first notified federal officials of the incident, the Mexican citizen gave his visa to his physician. (Fig. 3 provides more details about the incident involving the Mexican citizen and officials' actions.) Various factors, including a lack of comprehensive procedures for information sharing and coordination as well as border inspection shortfalls, hindered the federal response to the two TB incidents. HHS and DHS lacked formal procedures for sharing information with each other. They had established a memorandum of understanding in October 2005 creating a broad agreement to communicate and coordinate during public health emergencies.
However, the departments were unable to carry out the intent of the memorandum because they had not developed specific operational procedures to share information and coordinate their efforts to respond to events such as the two TB incidents. In addition, HHS had general procedures for sharing information about incidents of infectious diseases among senior managers at HHS and DHS through the agencies’ operations centers. However, HHS and CDC did not have procedures that outlined what assistance was available to them from DHS, particularly from CBP and TSA, and how to request it. The two departments also lacked internal procedures outlining how to share information and coordinate with senior officials within each department about the TB incidents to involve them in decision making, which resulted in senior officials not being able to ensure that resources were available to take appropriate action. In addition, CDC had not developed procedures to inform state and local health officials about the process for coordinating with CDC to determine whether federal isolation and quarantine authorities should be used to deter the travel of an individual with TB, causing the initial delay in the federal response. Furthermore, CBP had deficiencies in its traveler inspection process, which led to further delays in locating the individuals and deterring their travel. Despite the memorandum of understanding between HHS and DHS in place at the time of the incidents, the departments lacked comprehensive procedures needed to share information with each other and coordinate resources to deter cross-border travel of nonadherent individuals with infectious disease, such as TB. Our previous work has identified practices to enhance and sustain agency collaboration, including frequent communication among the agencies and the establishment of compatible policies, procedures, and other means of operating across agency boundaries. 
Additionally, Standards for Internal Control in the Federal Government calls for (1) management to ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals and (2) effective communication flowing down, across, and up the organization to enable managers to carry out their internal control responsibilities. Finally, our work on emergency management outlines three basic elements that constitute effective preparedness and response to hazardous situations, including the spread of infectious diseases. The three basic elements are (1) leadership, where clear roles and responsibilities are effectively communicated and understood in order to facilitate rapid and effective decision making; (2) capabilities, for which plans are integrated and key players define what needs to be done, where, by whom, and how well; and (3) accountability, where officials work to ensure that resources are used appropriately for valid purposes, including developing operational plans that are tested and taking corrective action as needed. Although the memorandum of understanding outlined a broad agreement to promote information sharing in the event of a public health incident, it did not provide specific operational procedures for the departments and their component agencies to share information with each other to respond to events such as the two TB incidents. In addition, HHS had general procedures for senior managers to share information about infectious diseases with senior DHS officials through their operations centers. However, we learned through discussions with DHS officials and from the HHS and CDC after-action reports that during the incident involving the U.S. citizen, HHS and CDC did not have procedures outlining what assistance was available from DHS, particularly from CBP and TSA, and how to request it. 
Some of the DHS capabilities that were unclear to HHS and CDC decision makers included CBP’s search capabilities for locating individuals and their travel itineraries, their travel histories, or both in order to stop cross-border travel; the availability of TECS and be-on-the-lookout notices through CBP, which could have assisted officers in identifying the individuals so that they could locate them at any U.S. port of entry; and TSA’s ability to prevent the individuals from flying into and out of the United States. Because CDC was unsure whether or how DHS could offer assistance for public health purposes, CDC did not request assistance from CBP until 4 days after state health department officials notified CDC of the incident. HHS and DHS also lacked procedures for sharing individual health information between the departments for public health incident response, including how broadly to share it, which delayed the federal response to the incidents. CDC and DHS officials we interviewed said that CDC was initially slow to provide this identifying information to TSA officials while the agencies were determining a course of action and whether TSA’s No Fly list could be used to prevent the U.S. citizen’s air travel, thus hindering their ability to locate and deter the individual from traveling. Public health and law enforcement authorities generally have different approaches to sharing such information, as reflected in their missions and responsibilities. According to CDC officials, in an effort to limit disclosure of individuals’ private medical information, agency staff generally refrain from sharing identifying information with each other, even when discussing a potential incident, preferring to refer to people and places as “the patient” or “hospital A.” On the other hand, CBP and TSA, as a law enforcement and security agency, respectively, need accurate and complete identifying information to locate and detain individuals. In the incident involving the U.S. 
citizen, CDC officials took several hours to provide the person’s name and health information after initially contacting DHS for assistance because they were unsure how the information was going to be used and protected. CDC’s hesitancy delayed CBP’s dissemination of a be-on-the-lookout notice and placement of an alert in TECS. CDC officials indicated that generalized concerns over the applicability of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and Privacy Act restrictions on the sharing of individual information contributed to a delay in their sharing this information with DHS. However, as CDC has concluded, in this instance both laws appear to permit the disclosure to DHS, without patient authorization, of individually identifiable health information acquired for the purpose of controlling the spread of disease. According to CDC, there was a concern that the lack of procedures for sharing identifying and health information between agencies resulted in this information being disseminated over law enforcement channels more broadly than may have been necessary under the circumstances. In addition, concerns were raised that password protection for the information disseminated may have been insufficient. Along with a lack of comprehensive procedures for information sharing with each other, HHS and DHS lacked specific procedures for communicating across their respective component agencies about public health incidents, which contributed to uncertainty about whether and when CDC, TSA, or CBP should notify senior officials at HHS or DHS about potential incidents. According to Standards for Internal Control in the Federal Government, effective communication should occur in a broad sense with information flowing down, across, and up organizations. 
Lacking specific procedures, HHS and CDC officials used a “standard of reasonableness” that involves professional discretion as a basis for determining whether the individual posed a potential public health threat and when to notify senior officials. CDC officials told us that using this standard involves some subjective judgment. According to CDC, its quarantine station officials initially believed that the two TB incidents could be resolved locally without notifying senior officials, which led to delays in the federal response in both incidents. For example, in the U.S. citizen incident, senior officials at HHS and CDC were not notified by CDC quarantine station officials at the field office level about the incident early enough to ensure timely use of federal isolation and quarantine authorities to deter his travel. In addition, CBP and TSA lacked written procedures for internal communication regarding how to handle public health incidents and when to notify DHS senior officials about the efforts of officials in the field to respond to requests from CDC quarantine station officials. During this incident, CBP officials at the air port of entry became involved on May 22, but they did not notify DHS senior officials until May 24. In the incident involving the Mexican citizen, CBP officials at the land port of entry did not notify DHS senior officials until 14 days (April 16 to April 30) after CDC requested CBP assistance. CDC had not developed procedures to inform state and local health officials about the process for coordinating with CDC to determine whether federal isolation and quarantine authorities should be used to deter the travel of an individual with TB, causing the initial delay in the federal response. Although some information on federal isolation and quarantine authorities was available on CDC’s Web site, guidance on the process by which state and local health officials were to obtain federal assistance had not been developed. 
As a result, state and local health officials responding to the incident involving the U.S. citizen were uncertain how to request federal assistance and, prior to doing so, attempted but failed to contact the individual to deter him from traveling, ultimately contributing to the delay in the federal response. Eight days (May 10 to May 18) elapsed between the time a state health department official discussed options for restricting the U.S. citizen's international travel with a CDC quarantine station official, without confirming that a specific individual intended to travel, and the time the state requested formal assistance from CDC. Officials from an association representing state and local health officials and CDC officials stated that many state and local health officials are not aware of federal isolation and quarantine authorities and how they are to be implemented and enforced. CDC is preparing further guidance to clarify the implementation and enforcement of these authorities. Deficiencies in CBP's traveler inspection operations further contributed to the delay in federal efforts to locate the two individuals with TB and direct them to treatment. When responding to HHS's request for assistance to deter the U.S. citizen from traveling, CBP issued a TECS alert to determine when the U.S. citizen planned to return to the United States. When the individual crossed the border at a land port of entry after having flown into Canada, a CBP officer queried the individual's travel documents in TECS to check against law enforcement systems for outstanding warrants, or criminal or administrative violations, and to assist with determining admissibility into the United States. However, the officer ignored the electronic alert and instructions to refer the individual for further inspection, in violation of CBP procedures. Instead, the CBP officer cleared the TECS alert and allowed the individual to enter the country without the required further inspection.
When responding to the incident involving the Mexican citizen, CDC and CBP officials did not know that they had received incomplete or inaccurate biographic information, or both. At the time of the incidents, a TECS database search would not produce a "match" if incomplete or inaccurate biographic information was used for a query. According to CBP officials, incomplete and inaccurate information delayed the identification of the individual by over 1 month and allowed him to travel into the United States approximately 20 times after CDC first notified CBP to look for and deter him. According to CBP officials, they realized within a day of initiating the TECS searches that the identifying information was incomplete because the searches did not produce a travel history, which typically shows an individual's travel in and out of the United States. Also, the searches of visa databases, which could have provided more information about his identity, did not produce any information on the individual, who was said to be a frequent traveler. Once CBP officers realized that the Mexican citizen's identifying information was incomplete, they contacted CDC the next day to confirm the identifying information and told CDC officials that they suspected that the information was incomplete. According to agency officials, 4 days after CDC first notified CBP about the Mexican citizen, CDC notified CBP that some of the biographic information from the Mexican citizen's medical records was inaccurate. Using corrected information, CBP immediately revised the TECS alert and the local be-on-the-lookout notice; however, when a new TECS search still did not produce information, CBP contacted CDC. Although CDC made further attempts, it did not obtain accurate and complete biographic information. On May 31, about 6 weeks after CDC first contacted CBP officials, the Mexican citizen gave his border-crossing card, a type of visa, to his physician.
CDC was then able to provide CBP with the complete and accurate biographic information, and DHS took possession of his card, thus preventing further crossings. With the accurate information from the Mexican citizen's documents, DHS officials located his travel history in TECS on May 31, determined that he had crossed the southern border 21 times from April 16 through May 31, and entered an accurate alert in TECS. HHS and DHS have implemented various procedures and tools intended to address deficiencies identified by the 2007 TB incidents. However, CBP has not implemented TECS modifications that might further help officers identify, at ports of entry, individuals who have been diagnosed with TB. In addition, CDC has not yet completed efforts to inform state and local health officials about the existence of the new procedures and tools or how to successfully use them in order to facilitate requesting federal assistance and ensure that new procedures and tools are used appropriately. Finally, HHS and DHS have identified additional actions that need to be taken to further strengthen the departments' ability to respond to incidents involving individuals with TB who intend to travel. However, as of September 2008, HHS and DHS had not finalized plans for completing these actions. HHS and DHS officials—including officials from CDC, CBP, and TSA—met in June 2007 to develop new procedures and tools to determine how DHS might be able to help HHS respond to public health incidents, develop a framework for coordinating with each other during responses to public health incidents, and ensure the appropriate level of agency involvement and use of agency resources. To help promote enhanced information sharing across and within both departments, HHS and DHS developed new procedures for HHS to request assistance from DHS.
These new procedures are consistent with practices identified in our past work for enhancing and sustaining agency collaboration and for establishing leadership, capabilities, and accountability for preparedness and response. They are also consistent with Standards for Internal Control in the Federal Government, which calls for management to ensure that there are adequate means of communicating internally and with external stakeholders. Under the new procedures, HHS officials at field offices, such as quarantine stations and ports of entry, are to notify headquarters officials when a TB or other public health incident develops, whereupon these officials are to make requests to DHS headquarters to task TSA and CBP officials at ports of entry with taking action to interdict individuals with TB and other infectious diseases at the borders. HHS prepares written requests for assistance that include the information DHS needs to respond, such as the individual’s name, date of birth, and action to be taken if the individual is encountered. DHS and HHS have also included safeguards designed to ensure the privacy of the individual in the request for assistance process. The request for assistance form is received only by appropriate HHS and DHS officials responsible for responding to and completing requests, and officials from both departments send the written requests via e-mail, as password-protected documents. CDC and DHS officials said that the new procedures for information sharing are also intended to allow the agencies to take advantage of existing procedures, resources, and capabilities while maintaining the close professional relationships between CDC and CBP officers at ports of entry. DHS, particularly TSA and CBP, has also worked with HHS, particularly CDC, to implement new tools intended to deter the cross-border travel of individuals with infectious TB. 
Specifically, TSA modified an existing tool—the No Fly list—to create a Do Not Board list for infectious air travelers who are nonadherent with treatment and intend to travel. The Do Not Board list is a roster of individuals whom CDC requests be denied boarding onto a commercial airline flight into, out of, or within the United States because they pose a potential public health threat to passengers, air carriers, or the transportation system. CDC's criteria for placement of an individual on the Do Not Board list include public health officials' belief that (1) the individual has a communicable disease that would constitute a public health threat if he or she were allowed to travel by airplane; (2) the individual is unaware of, or will become nonadherent to, public health recommendations regarding treatment or other instructions; and (3) the individual intends to travel by airplane. According to CDC officials, the agency requests removal of an individual from the list when state or local health officials confirm that the individual has undergone sufficient treatment to be determined noninfectious. HHS officials said that the list is reviewed at least monthly. TSA maintains the Do Not Board list, which is separate from other watch lists for air carriers, such as the No Fly list used to prevent known terrorists from boarding airplanes, but functions in a similar manner. TSA sends the Do Not Board list to domestic and foreign air carriers on a daily basis as an addendum to the No Fly list. U.S. air carriers are to screen all passengers against the Do Not Board list (regardless of the flight's origination or destination). International carriers are to screen passengers who are arriving in or departing from the United States but not passengers traveling outside the United States.
HHS and DHS officials said they believe that the request for assistance process and the Do Not Board list could be used to address travelers with other infectious diseases, though CDC officials said the most likely use would be for travelers with infectious TB. Although the Do Not Board list was created in response to the incident involving the U.S. citizen, officials said that individuals with infectious diseases other than TB, such as measles, SARS, or a strain of influenza with pandemic potential, could be placed on the Do Not Board list if they met the criteria. Generally, CDC expects that it could use the new procedures and tools in instances where health officials have identified infectious individuals who pose a substantial risk of exposing others and there is a strong belief by health officials that an infected individual intends to travel. However, according to CDC officials, the use of the Do Not Board list to prevent travel by individuals with other infectious diseases would be less likely because they would become ill more quickly and feel too unwell to travel, be more visibly ill, and recover more quickly than individuals with TB. In addition, CDC officials said that the Do Not Board list requires careful review of individual cases. In the event of a large disease outbreak, CDC's ability to look at individual cases to place them on the Do Not Board list would be limited, officials said. CBP also created and implemented a new TECS public health alert (1 week after the U.S. citizen reentered the country) to help ensure that DHS is able to assist CDC in locating individuals with infectious diseases, including TB, who are attempting to enter the United States. According to CBP officials, prior to the TB incidents, TECS public health alerts were indistinguishable from other types of alerts, and information on how to manage an individual with an infectious disease, including TB, was not prominently displayed in the alert.
Now, when CDC requests CBP assistance for individuals who intend to travel against medical advice, if the individual's license, passport, visa, or other identifying document or biographical information is scanned or manually entered into TECS, the new TECS public health alert is displayed prominently on the CBP officer's computer screen, with specific instructions for the officer to isolate the individual and contact CDC immediately. As with the Do Not Board list, federal officials must know that an individual has an infectious disease, such as TB, to place a public health alert in TECS. Furthermore, according to CBP officials, if the identifying information provided to physicians or recorded in health records does not match the information entered in visa databases, visas and other travel documents generated from these databases will not produce a match when queried and CBP officers will not know to detain the individual, as in the case involving the Mexican citizen. Similarly, if an individual's information (passport or visa) is not scanned or manually entered into TECS when he or she enters the United States, officers will not discover the public health alert and will not detain the individual. CBP also took other actions to strengthen TECS computer screening mechanisms and search capabilities for public health alerts. These changes were intended to ensure that CBP officers at ports of entry adhere to agency protocols and instructions for all TECS alerts, whether public health or otherwise. At the time of the incident involving the U.S. citizen, the CBP officer who admitted the individual into the country was able to bypass the requirement to refer individuals for further inspection because there was no supervisory review.
According to CBP officials, to prevent this, CBP upgraded TECS computer programming so that all TECS public health alert matches are automatically sent to terminals where referrals receive supervisory review intended to ensure that individuals receive the required additional inspection and referral to CDC. With this change, officers are no longer able to override the public health alert in TECS without first diverting the individual for further screening. The public health alert can only be overridden in TECS once the individual has cleared the more detailed inspection (called secondary inspection). In addition, CBP enhanced computer search capabilities for TECS public health alerts. According to CBP officials, in the incident involving the Mexican citizen, the officer who entered the TECS alert did not use varying combinations of the biographic information during his search because he believed that the information CDC provided was accurate. According to CBP officials, as of May 2008, when a public health alert is entered into TECS, the system is now programmed to create multiple public health alerts on variations of specific types of the biographic information entered. However, CBP officials told us that the TECS programming changes do not create variations on other combinations of an individual’s available biographic information. A CBP official told us that CBP could further modify TECS to create public health alerts using different combinations of other available biographic information, but CBP had not explored the feasibility of making this change and had not examined whether the benefits of conducting these additional searches on other types of biographic information offset the cost of a possible increase in the time needed to process individuals through busy ports of entry. According to CBP, a slight increase in the time needed to conduct inspections, especially at land ports of entry, can result in substantial traveler delays and traffic congestion. 
Nonetheless, without exploring whether the costs of conducting searches on these other combinations of biographic information exceed the benefits, DHS may be missing an opportunity to enhance its ability to detect persons with known cases of infectious disease and deter them from entering the United States. These changes to TECS notwithstanding, CBP's ability to identify individuals who are the subject of public health alerts—and ultimately deter their cross-border travel—largely depends on CBP officers' compliance with prescribed inspection procedures. In November 2007, we reported on weaknesses in inspection procedures at U.S. ports of entry. We said that CBP had taken action to address weaknesses in inspection procedures identified in 2006, such as officers not verifying the citizenship and admissibility of each traveler, that contributed to failed inspections. However, our follow-up work conducted months after CBP's actions showed that weaknesses still existed. In July 2007, CBP issued detailed procedures for conducting inspections, including requiring field office managers to assess compliance with these procedures. However, CBP had not established an internal control to ensure that field office managers share their assessments with CBP headquarters to help ensure that the new procedures are consistently implemented across all ports of entry and reduce the risk of failed traveler inspections. We recommended that CBP implement internal controls to help ensure that field office directors communicate to agency management the results of their monitoring and assessment efforts so that agencywide results can be analyzed and necessary actions taken to ensure that new traveler inspection procedures are carried out in a consistent way across all ports of entry. CBP agreed with our recommendation and stated that it has begun to take action to address it. A CBP official told us that CBP intends to finalize the results of field office assessments in October 2008.
Figure 4 shows the flow of requests for assistance from HHS to DHS, together with the steps each agency takes to prepare, submit, and complete these requests. Step-by-step procedures for each agency are explained in table 1. The departments and their component agencies have been able to test how the new procedures work in practice: information provided by HHS for the period May 2007 to February 2008 showed that HHS coordinated with DHS on 72 requests for assistance to place individuals on, or remove them from, the Do Not Board list, or to place or remove public health alerts in TECS. Of these 72 requests, 21 were to add an individual to the Do Not Board list. Table 2 shows the number of requests for assistance CDC prepared for HHS to submit to DHS by type of request in this period. All requests were for individuals with TB disease who fit the criteria jointly established by CDC and DHS. In reviewing these requests for assistance, we found that actions were typically completed within 24 hours of the time CDC initiated the request. According to DHS officials, all requests were considered high priority and were addressed. We also determined that CDC's requests for assistance complied with its criteria and included CDC contact information and detailed instructions, such as how CBP officers should protect themselves and others if they encounter the individual. Although CDC has made some efforts to educate health officials, according to CDC officials the agency has not yet completed all actions to provide information to health officials who work with individuals with TB about the new procedures and tools, or about the criteria for adding individuals to or removing them from the Do Not Board list or TECS. For example, CDC has presented information on the Do Not Board list at various conferences and association meetings, such as the June 2008 meeting of the state epidemiologists association and the November 2007 meeting of its advisory council for TB elimination.
Additionally, CDC has used the Morbidity and Mortality Weekly Report—a publication CDC makes available on its Web site at no charge—to provide state and local officials with information about the criteria for placement on or removal from the Do Not Board list or TECS. The article describing the criteria was published in a September 2008 issue. However, other CDC actions to inform state and local officials have yet to be completed. CDC plans to publish a companion product to the Morbidity and Mortality Weekly Report article, which would consist of a letter notifying officials of the publication and a guidance document describing the new tools and procedures that would be sent via e-mail to state and local health officials. According to CDC officials, the companion product will also be posted on CDC’s Web site, and CDC will host Web-based seminars for state and local TB programs. According to health officials, HHS requests for DHS assistance to deter individuals with TB from traveling originate primarily with state and local health officials, such as TB controllers, state and local health department staff, and public and private physicians, who typically have primary contact with individuals with TB and are more likely to be aware that an individual might be planning to travel. Knowledge of the new procedures and tools among these officials could prevent delays in accessing federal assistance, as occurred with the U.S. citizen. According to CDC officials, some health officials should already be familiar with the new procedures because a number of them helped CDC develop the criteria to determine whether an individual with TB should be removed from the Do Not Board list or TECS. Furthermore, CDC officials said they believe that state and local health department officials should be aware of the changes because of CDC’s close relationships with their professional associations. 
These associations have a role in promoting national policy and serving as liaisons among local, state, territorial, and federal health departments. However, an official with one such association said that staff independently discovered the new procedures and tools, while staff from another association told us that they were not aware of them. Additionally, information about the new procedures and tools may be especially important for states with relatively low numbers of TB cases, which may have less experience in accessing federal assistance. Moreover, providing information about the criteria for the new procedures and tools can help ensure that state and local health officials use them appropriately. For example, in one case, an individual with TB who had been added to the Do Not Board list presented a letter from county health officials to airline staff stating that he no longer posed a health risk to other travelers. Because county health officials did not follow the correct procedure to notify CDC and request the individual's removal from the Do Not Board list, he was not allowed to board his flight. As of September 2008, the two departments had not finalized plans for completing additional actions they identified that are intended to further strengthen their ability to respond to incidents involving individuals with TB who intend to travel. HHS and DHS officials told us that this was because their proposals for the additional work were undergoing internal department review, required implementation over time, or required further coordination with other departments and their component agencies. It is unclear how much additional work is needed because the departments did not have detailed plans and time frames for completing these actions.
Without these plans and time frames, HHS and DHS will not have fulfilled the actions they identified as necessary to strengthen their ability to respond to and prevent the cross-border travel of individuals with infectious TB. HHS and DHS officials said that they planned to meet in the fall of 2008 to further address the additional actions that need to be taken. Examples of some incomplete actions that require cross-agency coordination include the following: HHS, in conjunction with CDC and DHS, plans to develop a training module for its personnel to increase awareness of existing agency capabilities, available resources, procedures for requesting assistance, and communication protocols, according to the department’s after-action report on the U.S. citizen incident. HHS officials said that while the agency may have specific procedures in place, they may be applied inconsistently if officials in field offices are unaware of them. However, these officials did not specify how they would coordinate with CDC and DHS to finalize plans to develop or conduct the training. CDC recommended that DGMQ, which operates the quarantine stations at ports of entry, provide training and materials on infection control for communicable diseases to CBP officers stationed at the ports of entry. Specifically, DGMQ planned to give CBP officers small cards with information on the use of personal protective equipment and procedures for isolating individuals with suspected or confirmed infectious diseases at ports of entry, to accompany officers’ personnel badges. However, according to DGMQ officials, CDC’s progress on this recommendation was delayed because of several factors, including the need to negotiate with the CBP officers’ union, which DGMQ did not foresee. DGMQ officials told us that they had coordinated with the CBP officers’ union, but they did not have a specific date for when they planned to issue the cards, which are still under agency review. 
CDC is collaborating with the Department of State and other agencies that are developing policies and procedures for using federal resources to assist in transporting citizens and legal residents involved in a public health incident abroad back to the United States. In the incident involving the U.S. citizen, CDC did not use its plane to fly the individual from Europe to the United States because the agency did not want to expose the crew and any other passengers to TB. According to CDC, while the agency worked to develop alternate suggestions for travel or medical care for the U.S. citizen overseas, he once again traveled against medical advice. CDC officials we spoke with said that the agency was in the process of equipping the CDC plane with appropriate medical equipment to transport individuals with infectious respiratory diseases. However, officials said that activities related to the transport of U.S. citizens back into the country require continued coordination with the Department of State, which has primary responsibility for assisting U.S. citizens abroad, and the Department of Defense, which has appropriate medical equipment available. According to DHS officials, HHS and DHS need to further examine issues related to ensuring that the distribution of personal and medical information of individuals with communicable diseases who pose potential public health threats is limited to protect privacy, while at the same time conducting the necessary public health and law enforcement activities to deter their travel and direct them to treatment. Officials from both departments told us that they are concerned that a perceived lack of procedures for safeguarding personal information could provide a disincentive for an individual both to disclose his or her illness and to seek treatment.
DHS has recommended convening subject-matter experts in patients’ rights and the rights of the public to be protected from potential exposure to infectious diseases to determine appropriate procedures for law enforcement officers who assist HHS in locating nonadherent individuals. DHS officials said that the chief privacy officers for HHS and DHS have begun to work together to address this issue. According to CDC officials, both departments have activities under way to assess the effectiveness of the new procedures and tools. Specifically, they plan to conduct performance monitoring of the new request for assistance procedures and tools, discuss how information sharing and coordination could be further improved, and develop an annual report based on after-action reports that analyzes trends and identifies potential improvements in agency response. In addition, both departments are evaluating the new procedures and tools based on TB incidents as they arise. According to CDC officials, the agency is conducting some performance monitoring of the new procedures and tools, such as tracking the number of individuals who are being placed on and removed from the Do Not Board list and the time lapse between when HHS submits a request for assistance to DHS and when DHS completes the request. CDC officials review this information during monthly staff meetings to identify areas for improvement. In addition, CDC officials said that the request for assistance procedures would be included as part of a measure that will be monitored by its Division of Emergency Operations. This division regularly monitors about 60 protocols for operations at any one time to find ways to improve the performance of the protocols.
CDC officials also stated that they plan to implement CDC’s secure data network to transmit written requests for assistance between the departments, as opposed to the current method of e-mailing requests as password-protected documents, to improve security and decrease processing time. According to HHS and DHS officials, they communicate on a monthly and weekly basis to discuss changes made to procedures and tools as a result of the 2007 TB incidents and their continued applicability to responding to TB cases, as well as issues related to information sharing for responding to such cases. For example, these officials reported that in addition to the initial June 2007 meeting, they hold in-person monthly meetings to help officials refine the new procedures and tools as necessary to better address potential limitations in future incident response. During these meetings, for instance, officials discuss what information DHS needs to complete an HHS request for assistance to ensure that the appropriate action is taken. Officials said that they also use these meetings as an opportunity to discuss the differences in the approaches CDC, TSA, and CBP officials have toward public health incidents, such as the agencies’ practices for sharing identifying information. Officials from HHS, CDC, and DHS’s OHA also reported that they communicate by phone and e-mail several times a week to discuss the status of current requests for assistance and other public health issues that may require DHS assistance. According to CDC and DHS officials, this informal and frequent contact encourages information sharing across the departments and their component agencies, allowing them to better understand and effectively address issues. CDC officials said that they plan to develop an annual compilation report analyzing all after-action reports, including those for TB, that were completed in the previous year.
This analysis, which will generally include summaries of the events and observations for improvement, will allow CDC officials to identify trends, review progress over time, and determine recommendations for broad agency improvements to future public health responses. CDC plans to issue the first annual compilation report for those after-action reports completed in 2008, but has not set a target date for issuance. As of September 2008, CDC officials told us that the first compilation report would not include the incident involving the U.S. citizen, and would only include those incidents occurring after August 2008. According to HHS and DHS officials, they are using the departments’ responses to subsequent TB cases as opportunities to revise the new procedures and tools and develop skills to help enhance their response to future TB incidents. Internal control standards for the federal government call for agencies to assess the quality of performance over time so that deficiencies can be identified and addressed. CDC and DHS officials said that they view each use of the request for assistance procedures and tools as a “natural exercise” that provides an opportunity to identify areas for improvement and refine the procedures and tools as necessary. For example, according to DHS officials, CDC officials responded to DHS feedback by increasing the level of detail about the medical condition of the individual included on requests submitted to DHS while simultaneously increasing the privacy protections of the identifying information provided on the forms. Also, after subsequent incidents, CDC officials determined that it was necessary to specify which agency officials should participate in the conference calls that include CDC, state, and local officials to determine whether the case of an individual with an infectious disease, such as TB, who intends to travel justifies a request for assistance from DHS.
According to HHS officials, the agency’s coordination with DHS for more than 70 requests for assistance since the 2007 TB incidents also has helped agency officials become familiar with their roles in the information-sharing process that is outlined in the new procedures. The new procedures and tools that HHS and DHS established in the wake of the spring 2007 incidents involving the two individuals with drug-resistant TB have improved federal interagency information sharing and coordination for responding to TB incidents and could lay the foundation for continuing improvement in responding to future TB incidents. In addition, as a result of the collaboration between HHS and DHS in making these changes, each department now has a clearer view of how the other’s mission and approach to public health incidents differs from its own, which could further enhance their ability to collaborate in responding not only to similar TB incidents but also to other future public health threats. Despite DHS’s progress in enhancing TECS so that CBP officials can better identify individuals via electronic public health alerts, this enhancement applies to some types of biographic information but not others. Not exploring the costs and benefits of further modifying TECS to create public health alerts based on variations of additional types of biographic information may result in missed opportunities to locate persons subject to public health alerts and deter them from entering the United States. Additionally, HHS and DHS have more opportunities to improve their information-sharing efforts in responding to future TB incidents. For example, unless state and local health officials are informed and educated about the new tools and procedures, delays in accessing federal assistance, like those encountered during the two TB incidents, could persist.
Specifically, without wide dissemination of the procedures for placing individuals with TB on, or removing them from, the Do Not Board list, or for placing or removing a public health alert in TECS, state and local health officials may not be aware of the federal assistance at their disposal for use in locating individuals with TB who are nonadherent with treatment and may intend to travel against medical advice. Additionally, state and local health officials who have limited knowledge of these changes and no previous experience in working with federal officials at the field office level may encounter difficulties in using the new procedures and tools. Furthermore, HHS and DHS have identified additional actions that they need to take to further strengthen their ability to respond to incidents involving individuals with TB who intend to travel, including some actions that require cross-agency coordination for completion. However, the departments have not developed an action plan for ensuring that these multiagency efforts are accomplished. Absent a clear plan with associated time frames for completing cross-agency actions, the departments may not be accountable for taking the corrective actions and ensuring that all identified deficiencies are mitigated. To ensure continuing improvements in HHS’s and DHS’s new procedures and tools developed in response to the 2007 TB incidents and to improve awareness of these changes, we are making the following three recommendations. We recommend that the Secretary of DHS direct CBP to determine whether the benefits exceed the costs of enhancing TECS capabilities when creating public health alerts to include variations on other types of biographic information that could further enhance its ability to locate individuals who are subject to public health alerts and, if so, to implement this enhancement. 
We also recommend that the Secretary of HHS and the Secretary of DHS work together to continue to inform and educate state and local health officials about the new procedures and tools and develop plans with time frames for completing additional actions that require cross-agency coordination to respond to future TB incidents. We requested comments on a draft of this report from HHS and DHS. Both departments provided written comments in letters dated September 24, 2008, and September 30, 2008, respectively, which are summarized below and reprinted in appendixes II and III. HHS and DHS generally agreed with our recommendations. With regard to our first recommendation on enhancing TECS capabilities to include variations on other types of biographic information, DHS said that CBP has completed a cost-benefit analysis and determined that this enhancement would increase to an unmanageable level the number of possible alerts requiring further research by CBP officers and increase delays at ports of entry. However, in response to our recommendation, CBP is drafting a policy and new procedures that when implemented will require that officers (1) review an individual’s biographic information when entering public health alerts to determine whether variations on this information could produce an accurate public health alert and, if so, (2) create a new public health alert based on the variation of this biographic information. CBP believes that this approach will enhance capabilities without causing delays, although we believe that it will be important to monitor implementation to ensure that the approach provides the intended results. With regard to our second recommendation, HHS and DHS stated that they were working together on efforts that, once completed, will help to ensure that state and local health officials are better informed about the new procedures and tools. 
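CBP's planned variation-based alerts, as described above, rest on generating plausible alternate renderings of a traveler's biographic information. As a purely illustrative sketch (the function names, variation rules, and data formats below are our assumptions, not CBP's actual TECS logic), such variations might be generated as follows:

```python
from datetime import date


def name_variations(full_name: str) -> set[str]:
    """Generate simple variations of a traveler's name.

    Hypothetical illustration only: swapped name order and
    hyphen/space differences, two common sources of missed matches.
    """
    parts = full_name.split()
    variants = {full_name}
    if len(parts) == 2:
        variants.add(f"{parts[1]} {parts[0]}")   # swapped given/family order
    variants.add(full_name.replace("-", " "))    # hyphen rendered as space
    variants.add(full_name.replace(" ", "-"))    # space rendered as hyphen
    return variants


def dob_variations(dob: date) -> set[str]:
    """Render a date of birth with day/month ambiguity for days <= 12."""
    variants = {dob.strftime("%m/%d/%Y")}
    if dob.day <= 12:
        variants.add(f"{dob.day:02d}/{dob.month:02d}/{dob.year}")
    return variants
```

For instance, a record entered as "John Doe" would also match "Doe John" or "John-Doe", and a July 3 date of birth would also be flagged under its day/month-swapped rendering. Even this toy example hints at why, as CBP's cost-benefit analysis found, applying variations wholesale could multiply the number of possible alerts requiring officer review.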
Finally, HHS and DHS stated that they were working to address our third recommendation to develop plans with time frames for completing the remaining actions that require cross-agency coordination, but they did not address whether they were developing such plans for the other remaining actions. We believe that absent these plans, there is no guarantee the departments will complete these actions, which are important for ensuring full cross-agency coordination in response to future TB and other public health incidents. In commenting on a draft of this report, HHS stated that it disagreed with our assessment of “the lack of agency coordination.” However, we found that following the incidents HHS and DHS had identified coordination deficiencies in their responses, which they deemed serious enough to require the development of new procedures and tools. DHS also raised two issues regarding our findings related to CBP. First, DHS noted that CBP field locations often receive and handle requests from CDC regarding individuals with communicable diseases and that CBP officials at the time handled the incident involving the Mexican citizen at the local level according to existing protocols. Second, CBP wished to clarify that although procedures have been “fine-tuned” since the incident occurred, CBP believes that the procedures in place at the time of the incidents were comprehensive. We maintain that the fact that CBP created new standard operating procedures for communicating with HHS and for restricting international travel of persons with such public health concerns is evidence that the protocols and procedures in place at the time were not comprehensive or effective. HHS and DHS also provided technical comments. We have amended our report to incorporate these clarifications where appropriate.
As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Secretary of Health and Human Services and the Secretary of Homeland Security. Additional copies will be sent to other interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Cynthia A. Bascetta at (202) 512-7114 or [email protected], or Eileen R. Larence at (202) 512-6510 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. U.S. Customs and Border Protection (CBP), a component agency of the Department of Homeland Security (DHS), is the agency in charge of inspecting individuals seeking to enter the United States at air, land, and sea ports of entry. Each day, over 1 million individuals, both non-U.S. citizens and U.S. citizens, seek entry into the United States. In addition to determining whether these individuals are eligible to enter the country, CBP officers perform a wide range of law enforcement duties, such as screening cargo for weapons or illegal goods, preventing narcotics and agricultural pests from entering the country, and identifying and arresting persons with criminal warrants. Nearly 75 percent of all border crossings are at land ports of entry, and nearly 95 percent are at air or land ports. (See fig. 5.) According to CBP officials, the inspection of individuals arriving at air and land ports of entry is described as a layered process designed to ensure management, control, and security of U.S. borders while facilitating the flow of millions of legitimate individuals and goods into the United States. 
Officers are trained in customs and immigration law, law enforcement techniques, and agricultural requirements and must be able to carefully observe individuals, while using available tools, equipment, and support, in order to make sound decisions on whether to admit, detain, or deny entry to a traveler. CBP policies and procedures for inspecting individuals at all ports of entry require officers to determine the nationality of individuals and their admissibility, that is, whether they are eligible to enter the country. Because most individuals attempting to enter the country through ports of entry have a legal basis for doing so, a streamlined screening procedure referred to as primary inspection is used to process those individuals who can readily be identified as admissible. Persons whose admissibility cannot be readily determined may be subjected to a more detailed review called secondary inspection. This involves a closer inspection of travel documents and possessions, additional questioning by CBP officers, and cross-references through multiple law enforcement databases, including the Treasury Enforcement Communications System (TECS), to verify the traveler’s identity, background, and purpose for entering the country, and to detect any violations or risks to the public. In secondary inspection, an officer makes the final determination to admit the traveler, deny admission, or take other actions (such as releasing the traveler to another law enforcement entity for prosecution) based upon the results of the inspection. When possible, CBP officers also rely on canine and antiterrorism task force teams to conduct discretionary inspections of travelers throughout the inspection process. Although the procedures for inspecting individuals are generally the same at air and land ports of entry, there are differences that are due to variations in the ports’ operational environments. 
The procedures for inspecting individuals at air ports of entry differ from those at land ports of entry because commercial airlines are required to electronically transmit passenger manifest information to CBP through the Advanced Passenger Information System prior to the departure of international flights either from the United States or from other countries that are bound for the United States. This advance manifest information allows CBP time to conduct prescreening by querying a variety of law enforcement databases, including TECS and other types of alerts, to detect lookout records and warnings for various violations before individuals enter the country. Upon arrival in the United States at an air port of entry, however, individuals undergo the same general process in primary and secondary inspection as they do at land ports of entry. During primary inspection, individuals arriving by air must present documentation of citizenship and admissibility, such as a U.S. passport, permanent resident card, or foreign passport containing a visa issued by the Department of State. CBP officers must take physical possession of identification and match the photo with the individual, request declaration of residence, obtain an oral declaration concerning length of stay, ascertain purpose or intent of travel, and obtain a binding written customs declaration. However, unlike procedures at land ports of entry, CBP officers perform TECS queries during primary inspection on all individuals to identify potential matches to lookouts and warnings that were detected through the prescreening process. When an officer determines through primary inspection that additional questioning or inspection is required, individuals are referred to secondary inspection along with individuals who are matched to a TECS alert or warning as detected through the prescreening process. 
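The referral decision described above (prescreening hits and TECS matches go to secondary inspection, together with anyone an officer flags during questioning) can be reduced to a simple sketch. The function name and data structures here are hypothetical illustrations, not CBP's actual systems:

```python
def primary_inspection_disposition(traveler: str,
                                   tecs_matches: set[str],
                                   prescreen_hits: set[str],
                                   officer_flagged: bool = False) -> str:
    """Hypothetical sketch of the primary-inspection referral logic.

    A traveler matched to a TECS lookout, detected during airline
    manifest prescreening, or flagged by the inspecting officer is
    referred to secondary inspection; all others are admitted.
    """
    if traveler in tecs_matches or traveler in prescreen_hits or officer_flagged:
        return "refer to secondary inspection"
    return "admit at primary inspection"
```

The sketch makes plain why advance manifest data matters at air ports: the prescreen set can be populated before the flight arrives, whereas at land ports the TECS query often happens only at the booth.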
CBP officers face a greater challenge to identify and screen individuals at land ports of entry, in part because of the lack of advance traveler information and the high volume of travelers who can arrive by vehicle or on foot at virtually any time. Given these challenges, CBP officers rely heavily on observation and interview skills to be able to quickly detect suspicious activity or potential violations that may render a person inadmissible. During primary inspection, CBP officers are directed to conduct inspections on all travelers. As part of that inspection process, CBP officers are to perform TECS queries on as many travelers as feasible. All vehicles are queried in TECS using license plate readers installed in primary inspection vehicle lanes. For pedestrian lanes, the traveler’s name can be machine read from the travel document or manually keyed into TECS by the CBP officer. For vehicles, CBP officers frequently inspect multiple travelers entering in a single vehicle, and TECS queries are generally conducted on the individuals and the vehicle data. In addition, CBP officers visually examine the vehicle and inspect car passengers, verify license plate information, and monitor for the presence of radioactive material, among other tasks. If necessary, CBP officers are to refer the travelers and their vehicle for secondary inspection.
In addition to screening millions of travelers during primary and secondary inspection, CBP officers are responsible for observing all travelers for obvious signs and symptoms of quarantinable and communicable diseases, such as (1) fever, which could be detected by a flushed complexion, shivering, or profuse sweating; (2) jaundice (unusual yellowing of skin and eyes); (3) respiratory problems, such as severe cough or difficulty breathing; (4) bleeding from the eyes, nose, gums, or ears or from wounds; and (5) unexplained weakness or paralysis. However, CBP officials emphasized that CBP officers are not medically trained or qualified to physically examine or diagnose illness among arriving travelers. There are three general scenarios in which CBP officers encounter ill persons who are in need of medical attention or who may pose a public health threat: In the most common scenario, CBP officers encounter an individual who discloses that he or she needs medical attention for various health reasons. In the second scenario, CBP officers suspect that an individual may need medical attention or may pose a public health risk to others (e.g., the individual exhibits obvious signs and symptoms of illness, such as fever, weakness, or both, as observed by officers). In the third scenario, CBP officers encounter an individual who is an exact match to a public health alert in TECS and may pose a public health risk to others. In all three scenarios, CBP protocols require officials, at a minimum, to isolate the person while notifying officials at CDC and, depending on the circumstance, to contact the designated local public health authorities (e.g., hospitals and emergency medical personnel). Each port of entry is supplied with personal protective equipment, including masks and gloves, and inspecting officers must use this equipment in dealing with travelers suspected of having communicable or quarantinable illnesses, as well as while handling the individuals’ documents and belongings.
CBP officers are responsible for coordinating with CDC to provide assistance in identifying arriving individuals from areas with known communicable disease outbreaks. In addition to the contacts named above, Karen Doran, Assistant Director; John Mortin, Assistant Director; George Bogart; Frances Cook; Katherine Davis; Shana Deitch; Jennifer DeYoung; Raymond Griffith; Catherine Kim; Maren McAvoy; Carolina Morgan; Roseanne Price; Janay Sam; Jessica Smith; and Ellen Wolfe made significant contributions to this report. Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-329T. Washington, D.C.: January 3, 2008. Global Health: U.S. Agencies Support Programs to Build Overseas Capacity for Infectious Disease Surveillance. GAO-07-1186. Washington, D.C.: September 28, 2007. Border Security: Security Vulnerabilities at Unmanned and Unmonitored U.S. Border Locations. GAO-07-884T. Washington, D.C.: September 27, 2007. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Border Security: Continued Weaknesses in Screening Entrants into the United States. GAO-06-976T. Washington, D.C.: August 2, 2006. Emergency Preparedness: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004. 
Global Health: Challenges in Improving Infectious Disease Surveillance Systems. GAO-01-722. Washington, D.C.: August 31, 2001. Public Health: Trends in Tuberculosis in the United States. GAO-01-82. Washington, D.C.: October 31, 2000. Managing for Results: Barriers to Interagency Coordination. GAO/GGD-00-106. Washington, D.C.: March 29, 2000. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
In spring 2007, the Department of Health and Human Services (HHS), the Department of Homeland Security (DHS), and state and local health officials worked together to interdict two individuals with drug-resistant infectious tuberculosis (TB) from crossing U.S. borders and direct them to treatment. Concerns arose that HHS's and DHS's responses to the incidents were delayed and ineffective. GAO was asked to examine (1) the factors that affected HHS's and DHS's responses to the incidents, (2) the extent to which HHS and DHS made changes to response procedures as a result of the incidents, and (3) HHS's and DHS's efforts to assess the effectiveness of changes made as a result of the incidents. GAO reviewed agency documents and interviewed officials about the procedures in place at the time of the incidents and changes made since. Various factors—a lack of comprehensive procedures for information sharing and coordination and border inspection shortfalls—hindered the federal response to the two TB incidents. GAO's past work and federal internal control standards call for collaborative communication and coordination across agencies; communication flowing down, across, and up agencies to help managers carry out their internal control responsibilities; and effective leadership, capabilities, and accountability to ensure effective preparedness and response to hazardous situations. HHS and DHS finalized a memorandum of understanding in October 2005 intended to promote communication and coordination in response to public health incidents, but they had not fully developed operational procedures to share information and coordinate their efforts. Thus, HHS and DHS lost time locating or identifying the individuals to interdict them at the U.S. border.
Also, HHS lacked procedures to coordinate with state and local health officials to determine when to use federal isolation and quarantine authorities, which further contributed to the delay in the federal response to one of the incidents. Finally, DHS had deficiencies in its process for inspecting individuals at the border, which caused delays in locating the individuals with TB. HHS and DHS have subsequently implemented procedures and tools intended to address deficiencies identified by the incidents, consistent with GAO's past work and internal control standards, but the departments could take additional steps to enhance their ability to respond to future TB incidents. Since the 2007 incidents, HHS and DHS have developed formal procedures for HHS to request DHS's assistance, and DHS has (1) developed a watch list for airlines to identify individuals with TB and other infectious diseases who are to be stopped from traveling and (2) revised its border inspection process to include a requirement that individuals with TB identified by HHS be subject to further inspection. DHS has also enhanced its process for creating public health alerts based on some variations of biographic information (e.g., name, date of birth, or travel document information), but has not explored the benefits of creating these alerts based on other variations, which impeded DHS's ability to interdict one of the individuals at the border. In addition, HHS has not yet completed efforts to provide information on changes in procedures to state and local health officials, who typically originate requests for assistance, to help mitigate delays in accessing federal assistance. HHS and DHS identified additional actions that need to be taken to further strengthen their response, but have not developed plans for completing them. 
HHS and DHS have activities under way to assess the effectiveness of the new procedures and tools, including performance monitoring and cross-agency meetings to discuss and revise the new procedures and tools based on actual experiences. HHS and DHS have coordinated on more than 70 requests for assistance since the 2007 incidents through February 2008; officials said they view each incident as a test of the efficacy of their responses.
|
State receives correspondence from members of Congress that includes requests for specific information, requests for documents, and requests that the Secretary of State direct his or her attention to various matters. More than half of the correspondence that State received and responded to from April 2011 through June 2013 involved what State refers to as constituent concerns. According to State officials, this correspondence covered a variety of topics of interest to members’ constituents, ranging from the status of visa applications to employment possibilities with State. We hereafter refer to such correspondence as constituent-related correspondence. The remaining correspondence involved what State refers to as substantive concerns. State officials said this correspondence sought information concerning foreign policy matters, such as human rights in Vietnam; the attack on the U.S. Consulate in Benghazi, Libya; arms sales to foreign nations; State management issues; and bilateral relations with other governments. We hereafter refer to such correspondence as substantive correspondence. State uses a multistage process to respond to both constituent-related and substantive correspondence. Congressional Correspondence Unit officials said that they began using the congressional correspondence database in April 2011 to help manage the process of drafting, reviewing, and mailing State’s responses to congressional correspondence. Specifically, they said that they use the database to track the status of State’s response letters as they move through the following stages of State’s process: Congressional Correspondence Unit initiates case in database: When State receives a piece of congressional correspondence, the Congressional Correspondence Unit scans a copy of the correspondence and records information about it—such as the date received, the member’s name, and subject—into the database. 
The Congressional Correspondence Unit then prepares a tasking slip, tasks the State bureau or office with the appropriate subject matter expertise to draft the response letter, and establishes an interim deadline for that bureau or office to prepare the draft response letter (2 days for substantive correspondence, 7 days for constituent-related correspondence). It also inputs the tasking information into the database. When it has done so, the Congressional Correspondence Unit transmits the tasking slip and congressional correspondence to a point of contact in the tasked bureau or office. Tasked bureau or office drafts response letter: The tasked bureau or office drafts the response letter and obtains appropriate clearances within the department and other agencies as applicable by the designated interim deadline. The tasked bureau or office may request an extension from the Congressional Correspondence Unit if the interim deadline cannot be met. In addition, if the tasked bureau or office foresees a prolonged delay, it may provide the member with an interim acknowledgment notifying him or her of the reason for the delay. Based on the member’s request, in cases involving constituent- related correspondence, the Congressional Correspondence Unit may delegate to the tasked bureau or office the responsibility of drafting, signing, and mailing the response letter directly to the constituent and the member. State policy requires bureaus and offices to notify the Bureau of Legislative Affairs before they mail such response letters. In such cases, the process ends at this stage and the Congressional Correspondence Unit closes the case in the database. For all other cases, the process includes the following stages. Bureau of Legislative Affairs reviews draft response letter: The tasked bureau or office transmits its draft response letter to the Congressional Correspondence Unit, which conducts an initial review of the draft response letter. 
During this stage, other officials in the Bureau of Legislative Affairs conduct their own review of the draft response letter and may edit it as needed. In some cases, draft response letters to constituent-related correspondence may not require any further review and the Congressional Correspondence Unit may move the draft response letter to the final stage of the process, where the unit, among other things, mails the letter and closes the case in the database. Bureau of Legislative Affairs senior officials review draft response letter: Senior officials in the Bureau of Legislative Affairs may review, edit, and add additional information to the draft response letter. These officials then transmit the draft response letter to the Assistant Secretary for Legislative Affairs. Assistant Secretary reviews and signs draft response letter: The Assistant Secretary for Legislative Affairs conducts a final review of the draft response letter, makes edits as needed, and signs and transmits it to the Congressional Correspondence Unit. According to Congressional Correspondence Unit officials, almost all responses to substantive correspondence are reviewed and signed by the Assistant Secretary. Congressional Correspondence Unit closes case in database: The Congressional Correspondence Unit reviews the signed response letter, scans it into the database, indicates that the case is closed in the database, prepares the response letter for mailing, and notifies the tasked bureau or office that it has mailed the response letter to the member. 
Congressional Correspondence Unit officials told us they use information from the database to generate (1) a weekly status report sent to each tasked bureau or office that identifies overdue response letters, and (2) a separate weekly status report that identifies all overdue response letters across the Department of State for Bureau of Legislative Affairs officials— including the Bureau’s Executive Director, Principal Deputy Assistant Secretary, and Assistant Secretary. Bureau of Legislative Affairs officials said that they use their report to identify actions that they can take to ensure that overdue responses are completed. In addition, the database system sends automated e-mails to bureaus and offices when their draft responses are overdue. We found that State did not track key information about the timeliness of nearly half of its responses to congressional correspondence. Under State policy, if State cannot provide a response to congressional correspondence within 21 business days of receiving such correspondence, State must provide an interim acknowledgment informing the member of the reason for the delay. We reviewed data concerning 4,804 pieces of correspondence and identified 2,524 (53 percent) cases in which State tracked the time it took to respond and also met its timeliness goal of responding to congressional correspondence within 21 days. However, we found that the Bureau of Legislative Affairs did not track the time State took to respond to 1,544 (32 percent) of the 4,804 cases that we reviewed because the Bureau of Consular Affairs— which was tasked with drafting and mailing these responses directly to constituents and members—did not notify the Bureau of Legislative Affairs when it did so, as required by State policy. We also found that the Bureau of Legislative Affairs did not systematically track if and when State sent interim acknowledgments to members in cases that took more than 21 days. 
Therefore, we could not determine whether State had actually sent such acknowledgments in the 736 cases where the response time exceeded 21 days, which constituted 15 percent of the 4,804 cases we reviewed. State tracked and met its timeliness goal for more than half of its responses to congressional correspondence. According to State policy, within 21 business days of receiving congressional correspondence, State must provide the member with either (1) a response letter or (2) an interim acknowledgment informing the member that State’s response will take more than 21 days and explaining why. We reviewed data concerning the 4,804 pieces of correspondence that State’s database indicated had been received and responded to between April 2011 and June 2013. We identified 2,524 (53 percent) in which State met its timeliness goal of responding to congressional correspondence within 21 days. Specifically, we found that State took 10 days or less to respond in 1,336 cases and between 11 and 21 days in 1,188 of those cases. For these cases, the Bureau of Legislative Affairs tracked the response letters through the multistage process and told us that its staff closed the case in the database when the response letter was ready to be mailed to the member. We found that State did not track if and when the Bureau of Consular Affairs replied directly to constituents and sent copies of the replies to members of Congress because the Bureau of Consular Affairs did not notify the Bureau of Legislative Affairs when sending those response letters, as required by State policy. From April 2011 through June 2013, the Bureau of Consular Affairs was the bureau that the Congressional Correspondence Unit tasked with drafting the highest number of State’s response letters. 
During that period, the Congressional Correspondence Unit tasked the Bureau of Consular Affairs with replying directly to constituents and sending copies to members in 1,544 cases, which constituted 32 percent of the 4,804 pieces of congressional correspondence we reviewed. For those cases, therefore, the Bureau of Consular Affairs was responsible for drafting, reviewing, signing, and mailing the response letters directly to constituents and members, and was also required by State policy to notify the Bureau of Legislative Affairs before doing so. However, the Bureau of Consular Affairs did not notify the Bureau of Legislative Affairs if and when it mailed the response letters. In addition, Congressional Correspondence Unit officials told us that they did not follow up with the Bureau of Consular Affairs to confirm if and when the Bureau of Consular Affairs replied directly to constituents and members. As a result, the Congressional Correspondence Unit’s database contains incomplete data regarding those response letters. We also found that the Bureau of Consular Affairs itself does not have a centralized process by which it tracks if and when it responds to such correspondence. Bureau of Consular Affairs officials stated that each of the bureau’s individual directorates—which are tasked with drafting, reviewing, signing, and mailing the bureau’s response letters directly to constituents and members—has its own mechanism to track the status of its response letters. Further, these directorates did not notify the Bureau of Legislative Affairs when they mailed the response letters. In addition, Congressional Correspondence Unit officials told us that they did not follow up with the directorates to confirm if and when they replied directly to constituents and members. 
Congressional Correspondence Unit officials told us that the Bureau of Consular Affairs does not notify the Bureau of Legislative Affairs before sending such response letters because of a longstanding memorandum of understanding between the bureaus. State officials said that this memorandum effectively exempts the Bureau of Consular Affairs from State’s policy of notifying the Bureau of Legislative Affairs prior to sending response letters. We were unable to confirm the existence of the memorandum because officials from the Bureau of Legislative Affairs, the Bureau of Consular Affairs, and the Office of the Secretary told us that they were unable to locate it. State officials explained that the congressional correspondence for which the Bureau of Consular Affairs is tasked to respond directly to constituents and members covers routine matters, such as inquiries about the status of passport or visa applications, and that the response letters do not require further review by the Congressional Correspondence Unit or Bureau of Legislative Affairs. Lacking complete data from the Bureau of Consular Affairs regarding if and when it replied directly to constituents and members, Congressional Correspondence Unit officials said that they instead closed those cases by recording the date that they tasked the Bureau of Consular Affairs with drafting the response as the date that the signed response letter was sent. As a result, the database does not contain accurate data regarding if and when the Bureau of Consular Affairs actually responded to constituents and members. Congressional Correspondence Unit officials said that, on two occasions, they prepared reports on response times for senior Bureau of Legislative Affairs officials that incorporated these inaccurate data and therefore inaccurately reported State’s response times for congressional correspondence. 
State did not systematically track if and when it provided interim acknowledgments in cases for which it took more than 21 business days to prepare a response letter. Of the 4,804 cases we reviewed, we identified 736 cases (15 percent) in which State took more than 21 business days to mail a response letter to the member. Specifically, we found that State took between 30 and 59 days to respond in 347 of those cases and over 60 days to respond in 122 of those cases. We then attempted to determine whether State sent members interim acknowledgments for the cases that took more than 21 business days. We found that State did not systematically track if and when it sent members interim acknowledgments in these cases. Congressional Correspondence Unit officials told us they did not routinely gather information regarding interim acknowledgments and did not include a specific field in the database for such information. According to GAO’s Internal Control Management and Evaluation Tool, agencies should ensure accuracy and completeness of information. Because State’s database lacks accurate data for almost a third of the cases we reviewed and does not have complete data on interim acknowledgments for the 15 percent of cases where the response time exceeded 21 business days, State cannot readily determine the extent to which it is meeting its timeliness goal for these cases (see fig. 1). Furthermore, without accurate and complete data, State is not in a position to identify elements of the process that may be most prone to delays and develop strategies to improve the timeliness of its response letters. See GAO, Internal Control Standards: Internal Control Management and Evaluation Tool, GAO-01-1008G (Washington, D.C.: August 2001). Congressional correspondence sent to State is an important means by which members of Congress may obtain information and exercise oversight over the department. 
In its policies, State has acknowledged the importance of providing timely responses to congressional correspondence. In 2011, State took an important step toward ensuring that it is doing so by employing a database to help track and manage the process of drafting and mailing its response letters. However, State has undermined its ability to track the timeliness of its responses by (1) not tracking if and when the Bureau of Consular Affairs directly replied to constituents and members and (2) not tracking if and when State officials provided interim acknowledgments in response to congressional correspondence. Without accurate and complete data, State is not in a position to identify elements of the process that may be most prone to delays and develop strategies to improve the timeliness of its response letters. To improve State’s ability to provide timely responses to congressional correspondence, we recommend that the Secretary of State take the following two actions: Take appropriate steps to ensure that State tracks all response letters, including those tasked to the Bureau of Consular Affairs to reply directly to constituents and members. Ensure that State tracks if and when it provides interim acknowledgments to members of Congress. We provided a draft of this report to State for comment. In its written comments, reproduced in appendix II, State agreed with our recommendations and said it would begin to implement them immediately. We are sending copies of this report to the Department of State and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
For this report, we examined (1) State’s process for responding to congressional correspondence and (2) the extent to which State tracks the timeliness of its responses to congressional correspondence. To address our first objective, we examined documents describing State’s policies and procedures, including State’s Foreign Affairs Manual, Foreign Affairs Handbook, and memoranda circulated in the department. We also interviewed cognizant State officials in Washington, D.C., including officials in the Congressional Correspondence Unit, as well as officials in bureaus that draft responses to congressional correspondence. To address our second objective, we reviewed GAO’s Internal Control Management and Evaluation Tool and an extract of the database that the Congressional Correspondence Unit uses to track and manage State’s responses to congressional correspondence. The scope of our engagement included records of congressional correspondence that the database indicated had been received and responded to by State from April 2011—when, according to State officials, they began fully using the database—through June 2013. In August 2013 we requested an extract of the entire database that contained specific information from several fields in the database. In March 2014, State provided us with an extract containing information on 6,942 pieces of correspondence, including the following fields for each piece of correspondence: Control Number, Document Type (Substantive or Constituent), Classification, Current Stage, Type of Reply, Date Received, Date on Letter, Date Processed, Date Due, Multi-Signer Letter (Yes or No), Bureau Assigned, Member, Comments field, entry and exit date for each stage the correspondence passed through, and the date the task was closed. We assessed the reliability of the data by interviewing cognizant officials in Washington, D.C., observing a demonstration on using the database, reviewing the data we were given, and performing logic tests on it. 
We determined that the data were sufficiently reliable for our purposes. We analyzed the database extract containing information on 6,942 pieces of correspondence. Because the scope of our engagement included information on all congressional correspondence that the database indicated had been received and responded to by State from April 2011 through June 2013, we deleted records on correspondence that met the following criteria: were received outside of the stated time frame (April 1, 2011, to June 30, 2013); were marked as “For Your Information Only” in the database, because these pieces of correspondence did not require a response from State; and did not have a “task closed” date as of the end of June 2013. As a result of these deletions, we examined information on 4,804 pieces of correspondence for our analysis. While reviewing the database records, we found a portion of the constituent-related correspondence was not marked in the database as passing through the Bureau of Legislative Affairs prior to the task being closed and we were told by State that such correspondence is tasked to bureaus to respond directly to the constituent with a copy of the response letter sent directly to the member. State officials said that the “task closed” dates in the database were inaccurate for constituent-related correspondence tasked to the Bureau of Consular Affairs for direct reply to the constituents and members. Specifically for those cases, State officials said the “task closed” date reflected the date that the Congressional Correspondence Unit tasked the Bureau of Consular Affairs with responding to the correspondence, rather than the date the response letter was mailed and the case was closed. We found that there were 1,544 such cases in the database. To assess the timeliness of State’s responses, we used State’s Foreign Affairs Handbook’s timeliness goal of 21 business days and analyzed data on 4,804 cases. 
We defined business days as every official working day of the week for the U.S. federal government (we excluded weekend days and federal holidays). For constituent-related correspondence, we could not calculate timeliness for the 1,544 cases discussed above because the “task closed” dates in the database were inaccurate. For the remaining constituent-related correspondence, we used “date processed” as the start date and the “task closed” date as the end date because Congressional Correspondence Unit officials told us that the Bureau of Archives processes and archives these letters prior to the Congressional Correspondence Unit uploading and designating letters to bureaus to draft responses. For substantive correspondence, we used the “date received” as the start date and the “task closed” date as the end date. In addition to the contact named above, Pierre Toureille (Assistant Director), Ashley Alley, Debbie Chung, Martin De Alteriis, Leah DeWolf, Tim DiNapoli, Etana Finkler, Rhonda Horried, Jeff Isaacs, Mark Needham, Jerry Sandau, Sushmita Srikanth, and Michelle Wong made key contributions to this report.
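The record-filtering and business-day timeliness computation described in this appendix can be sketched in Python. This is a minimal illustration, not the actual analysis: the field names (`date_received`, `task_closed`) and the holiday set are illustrative assumptions rather than State's database schema or the full federal holiday calendar.

```python
# Hedged sketch of the timeliness analysis: count business days
# (weekdays excluding federal holidays) between receipt and closure,
# then test against the 21-business-day goal from State's
# Foreign Affairs Handbook. Holiday set is an illustrative subset.
from datetime import date, timedelta

FEDERAL_HOLIDAYS = {date(2013, 1, 1), date(2013, 5, 27)}  # partial, for illustration

def business_days(start: date, end: date) -> int:
    """Count official working days elapsed after `start` through `end`."""
    days = 0
    d = start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in FEDERAL_HOLIDAYS:
            days += 1
    return days

def met_goal(date_received: date, task_closed: date, goal: int = 21) -> bool:
    """True if the response closed within the 21-business-day goal."""
    return business_days(date_received, task_closed) <= goal

# Example: received Friday May 24, 2013; closed Tuesday May 28, 2013.
# The weekend and Memorial Day (May 27) are excluded, so only one
# business day elapsed and the goal is met.
print(business_days(date(2013, 5, 24), date(2013, 5, 28)))  # 1
print(met_goal(date(2013, 5, 24), date(2013, 5, 28)))       # True
```

In the same spirit, records outside April 1, 2011–June 30, 2013, records marked "For Your Information Only," and records without a "task closed" date would simply be filtered out of the extract before this computation is applied.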
|
State receives about 2,200 pieces of correspondence each year from members of Congress seeking information. GAO was asked to review State's procedures for responding to requests for information. GAO examined (1) State's process for responding to congressional correspondence and (2) the extent to which State tracks the timeliness of its responses to congressional correspondence. To do so, GAO reviewed information on 4,804 pieces of correspondence that State indicated it had received and responded to between April 2011—when State said it began using a database to track its response letters—and June 2013. GAO also interviewed cognizant State officials. The Department of State (State) uses a multistage process to respond to congressional correspondence. In April 2011, the Bureau of Legislative Affairs (the Bureau), which is responsible for tracking State's response letters, began using a database to track State's responses as they move through the stages of the process. The process includes the Bureau entering key information into a database, tasking other State bureaus or offices with subject matter expertise to draft response letters, and conducting reviews of draft response letters prior to mailing them. In some cases, the Bureau tasks other bureaus with drafting, reviewing, and mailing the letters themselves. State did not track key information on the timeliness of nearly half of its responses to congressional correspondence. State's timeliness goal is to provide the member, within 21 business days of receiving his or her correspondence, with either a response letter or an interim acknowledgment informing the member of the delay. State tracked the time it took to respond and also met its timeliness goal in 2,524 (53 percent) of the 4,804 cases that GAO reviewed. 
However, State did not track the timeliness of its responses in 1,544 (32 percent) of the cases GAO reviewed because the bureau tasked with mailing the response directly to constituents and members did not notify the Bureau when it did so, as required by State policy. In those cases, the Bureau recorded the date it tasked the other bureau as the date State sent its response letter, although it had no information as to if or when this actually occurred. In addition, because the Bureau did not systematically track State's interim acknowledgments in cases that took more than 21 days, GAO could not determine whether State actually sent such acknowledgments in 736 (15 percent) of the cases GAO reviewed where the response time exceeded 21 days (see figure). Because its database lacks accurate and complete data, State is not in a position to identify elements of the process that may be most prone to delays and therefore cannot develop strategies to improve the timeliness of its response letters. GAO recommends that State (1) take steps to ensure that all response letters, including those tasked to bureaus to reply directly to constituents and members, are tracked; and (2) ensure that if and when interim acknowledgments to members of Congress are provided, they are tracked. State agreed with GAO's recommendations and said it would begin to implement them immediately.
|
During the first 2 years of its reform efforts, DCPS implemented several classroom-based initiatives to improve students’ basic skills in core subjects. For example, to improve students’ basic skills and standardized test scores in reading and math, DCPS introduced targeted interventions for students struggling in these subjects and provided additional instruction and practice to improve students’ responses to open-ended questions, including test questions. Table 1 provides a list of DCPS’s major initiatives to improve student outcomes, as well as descriptions and the status of these initiatives. DCPS is modifying its approach to implementing many of these initiatives as it moves forward. For example, the Chancellor recently acknowledged that DCPS, in its effort to remedy the range of issues that plagued the District’s public schools, may have launched too many initiatives at once and some schools may not have had the capacity to implement so many programs effectively. In particular, some schools were undergoing significant organizational changes that may have affected their ability to implement these new academic initiatives. To support such schools, DCPS is considering offering a choice of programs for schools and allowing the principals to determine which programs best suit their schools’ needs and capacity. DCPS does not yet know how successful these initiatives have been in improving student achievement. Our report notes that DCPS elementary and secondary students increased their reading and math scores between 8 and 11 percentage points on the 2008 state-wide test, but it is unclear whether these gains could be attributed to the current reform efforts or to prior efforts. Preliminary scores for the 2009 reading and math tests were announced on July 13, 2009. 
Elementary students made modest gains in reading (49 percent were proficient in reading, up from 46 percent in 2008) and more substantial gains in math (49 percent proficient in math, up from 40 percent in 2008). Preliminary scores for secondary students show that 41 percent are proficient in reading, up from 39 percent in 2008, and 40 percent are proficient in math, up from 36 percent in 2008. While DCPS officials told us that it is generally difficult to isolate and quantify the impact of any single program on student achievement, they plan in late summer 2009 to analyze student outcomes, including state-wide test scores, to assess the effectiveness of various initiatives. DCPS officials also noted that there were varying levels of teacher quality and knowledge of effective teaching practices, and that it was difficult to ensure the extent to which teachers implemented the programs effectively. While DCPS had not previously defined “effective” teaching, DCPS officials told us they will focus on practicing effective teaching, as opposed to implementing various disparate programs. By the beginning of the 2009-2010 school year, DCPS plans to implement a framework that is intended to help teachers understand what students are expected to learn for each subject, how to prepare lessons, and what effective teaching methods are to be used. DCPS also changed the way it allocated teachers across its schools for the 2008-2009 school year. This new staffing model was intended to provide all schools with a core of teachers including art, music, and physical education, as well as social workers. It was also intended to provide all schools with reading coaches who work with teachers to improve reading instruction. Prior to this change, DCPS allocated funding to schools using a weighted student formula, which distributed funds to schools on a per pupil basis, so that the greater the enrollment of a school, the greater the amount allocated to that school. 
The new staffing model was intended to ensure core staff at all schools regardless of enrollment. While DCPS allowed principals to request changes to the staffing model based on their school’s needs, it did not establish or communicate clear guidance or criteria on how such requests would be treated. Therefore, it is unclear whether similar requests were treated in a consistent manner. A more transparent process, one that publicly shared DCPS’s rationale for such decisions, would have helped assure stakeholders, including the D.C. Council, that changes to staffing allocations were made consistently and fairly. The D.C. Council and several community groups have criticized the process for its lack of transparency and questioned the fairness of the decisions made. For example, one independent analysis concluded that under the staffing model some schools received less per pupil funding than others with similar student populations. DCPS revamped its approach for the staffing model for the 2009-2010 school year to address some of these challenges. For example, it established guidance about what changes it will allow principals to make to the staffing model and disseminated this guidance to school leaders at the beginning of the budgeting process. According to DCPS, the new guidance is expected to reduce the number of changes that principals request later in the process. In addition, as required by the No Child Left Behind Act (NCLBA), DCPS restructured 22 schools before the fall of 2008, after the schools failed to meet academic targets for 6 consecutive years. NCLBA specifies five options for restructuring a school, including replacing selected staff or contracting with another organization or company to run the school. DCPS revamped its process for determining the most appropriate restructuring option for the 13 schools that will be restructured in the 2009-2010 school year. 
Prior to implementing the first round of restructuring (for the 2008-2009 school year), DCPS officials told us there were insufficient school visits and inadequate training and guidance for teams assigned to evaluate which restructuring option was best suited for a given school. DCPS has addressed these issues by requiring two visits to each school, offering more training, and revising the form used to evaluate each school’s condition for the next round of restructuring. Restructuring underperforming schools will likely be an ongoing initiative for DCPS, as 89 of its 118 schools were in some form of school improvement status as of June 2009. Finally, DCPS and the state superintendent’s office are planning and developing new ways to use data to monitor student achievement and school performance. DCPS reported it has ongoing and planned initiatives to expand data access to principals and teachers, in part to monitor student and school performance. In particular, DCPS reported making improvements to its primary student data system so central office users can better monitor school performance. DCPS also plans to use monthly reports to enable school leaders to better monitor student progress, but DCPS officials told us they have delayed some of these efforts while they attempt to improve coordination among the various departments that were developing and disseminating information to school leaders. The state superintendent’s office also is developing a longitudinal database, called the Statewide Longitudinal Education Data Warehouse (SLED), intended to allow DCPS and other stakeholders to access a broad array of information, including standardized test scores of students and information on teachers. According to officials in the state superintendent’s office, they revised the project schedule to allow more time to assist the charter schools with updating their data systems. 
In February 2009, the initial release of student data provided a student identification number and information on student eligibility for free or reduced-price lunches and other student demographics for all students attending DCPS’s schools and the public charter schools. The state superintendent’s office plans for SLED to enable DCPS to link student and teacher data by February 2010. DCPS focused on a workforce replacement strategy to strengthen teacher and principal quality. After the 2007-2008 school year, about one-fifth of the teachers and one-third of the principals resigned, retired, or were terminated from DCPS. DCPS terminated about 350 teachers and an additional 400 teachers accepted financial incentives offered by DCPS to resign or retire in the spring of 2008. In addition, DCPS did not renew the contracts of 42 principals. To replace the teachers and principals who left the system, DCPS launched a nationwide recruitment effort and hired 566 teachers and 46 principals for the 2008-2009 school year. DCPS did not have a new teacher contract in place due to ongoing negotiations with the Washington Teachers’ Union, and DCPS officials told us the lack of a contract may have hindered their efforts to attract top-quality teachers. Under the proposed contract, which has been in negotiation with the Washington Teachers’ Union since November 2007, the Chancellor has stated that she wants to recruit and retain quality teachers by offering merit pay, which would reward teachers with higher salaries based, in part, on their students’ scores on standardized state tests. In addition, DCPS officials told us that the 2007-2008 and 2008-2009 teacher evaluation process did not allow them to assess whether the teacher workforce improved between these 2 school years. According to DCPS officials, this system does not measure teachers’ impact on student achievement—a key factor cited by DCPS officials in evaluating teacher effectiveness. 
DCPS plans to revise its teacher evaluation process to more directly link teacher performance to student achievement. To supplement school administrators’ observations of teachers, DCPS is also seeking to add classroom observations by 36 third-party observers, called master teachers, who would be knowledgeable about teaching the relevant subject matter and grade level. In addition, DCPS introduced professional development initiatives for teachers and principals, but late decisions about the program for teachers led to inconsistent implementation. For the 2008-2009 school year, DCPS hired about 150 teacher coaches to improve teachers’ skills in delivering reading and math instruction and boost student test scores. According to DCPS, teacher coaches assisted teachers with interpreting student test scores, planning lessons, and using their classroom time constructively. DCPS is planning for teacher coaches to work with teachers in all grades and subjects for the 2009-2010 school year. DCPS intended to staff about 170 teacher coaching positions; however, as DCPS began the 2008-2009 school year, about 20 percent of the coaching positions remained open (19 reading coach vacancies and 16 math coach vacancies) because of late hiring of teacher coaches. DCPS officials told us they made the decision to hire teacher coaches after their review of school restructuring plans in June 2008. The ratio of teachers to coaches was higher than it would have been had the positions been filled. In addition, according to DCPS officials and Washington Teachers’ Union officials we interviewed, teacher coaches were often uncertain about their responsibilities and how to work with teachers, and received some conflicting guidance from principals. The state superintendent’s office and DCPS each developed their 5-year strategic plans and involved stakeholders in the process. Stakeholder involvement in formulating strategic plans allows relevant stakeholders to share their views and concerns. 
The state superintendent’s office and the State Board of Education collaboratively developed the District’s state-level, 5-year strategic plan, and released it in October 2008. This state-level plan spans early childhood and kindergarten through grade 12 education (including public charter schools). Officials from the state superintendent’s office told us they involved District officials and stakeholders representing early childhood education, business, and higher education communities, as well as other stakeholders, while drafting the plan. In September 2008, the state superintendent’s office held a public forum to solicit stakeholder input and accepted comments on the draft on its Web site. The office released a revised version of the plan within a month of the public forum. DCPS released the draft of its 5-year strategic plan in late October 2008. In contrast to the state-level plan, which includes the public charter schools, the DCPS plan is specific to prekindergarten through grade 12 education in its 128 schools. DCPS officials told us they based the draft on the Master Education Plan, which the prior DCPS administration developed with stakeholder involvement, and that they sought additional stakeholder input through a series of town hall meetings. After releasing the draft, DCPS held three public forums in the following 3 weeks where attendees provided DCPS officials with feedback on the draft strategic plan. In May 2009, DCPS released the revised draft, which incorporated stakeholder feedback. Officials from the office of the D.C. Deputy Mayor for Education told us that, as part of its coordinating role, the office ensured that the DCPS and state-level strategic plans were aligned. However, the office had no documentation showing its efforts to coordinate these plans, such as an alignment study. We found that the two plans were aligned in terms of long-term goals. For example, DCPS’s goals could support the state-level goal of having all schools ready. 
However, we could not evaluate whether more detailed, objective measures and performance targets were aligned because the DCPS strategic plan did not always include specific objective measures and performance targets. DCPS recently increased its efforts to involve stakeholders in various initiatives; however, it has not always involved stakeholders in key decisions and initiatives. DCPS officials told us they have a variety of approaches to involve stakeholders, including parents, students, and community groups, as well as institutional stakeholders such as the D.C. Council. For example, DCPS officials told us they reach out to parents, students, and the public through monthly community forums, meeting with a group of high school student leaders and a parent advisory group, responding to e-mail, and conducting annual parent and student surveys to gauge the school system’s performance. DCPS also involved other stakeholders, such as parent organizations and the Washington Teachers’ Union in its process of changing the discipline policy. However, according to two DCPS officials, DCPS did not have a planning process in place to ensure systematic stakeholder involvement, and we found that DCPS implemented some key initiatives with limited stakeholder involvement. For example, key stakeholders, including D.C. Council members and parent groups, told us they were not given the opportunity to provide input on DCPS’s initial proposals regarding school closures and consolidations, the establishment of schools that spanned prekindergarten to grade 8, or the planning and early implementation of the new staffing model that placed art, music, and physical education teachers at schools and which fundamentally changed the way funding is allocated across DCPS. Lack of stakeholder involvement in such key decisions led stakeholders, including the D.C. 
Council and parent groups, to voice concerns that DCPS was not operating in a transparent manner or obtaining input from stakeholders with experience relevant to the District’s education system. Further, these stakeholders have questioned whether the impact of reform efforts will be compromised because of restricted stakeholder involvement. Stakeholders in the other urban school districts we visited told us a lack of stakeholder involvement leads to less transparency as key decisions are made without public knowledge or discourse. In addition, the lack of stakeholder involvement can result in an erosion of support for ongoing reform efforts and poor decisions. For example, officials in Chicago and Boston said public stakeholder involvement was critical to community support for various initiatives, such as decisions on which schools to close. Officials and stakeholders in New York cited a lack of stakeholder involvement in decisions that were eventually reversed or revised. DCPS has taken steps to improve accountability and performance of its central office. To improve accountability for central office departments, DCPS developed departmental scorecards to identify and assess performance expectations for each department. According to a DCPS official, these scorecards are discussed at weekly accountability meetings with the Chancellor to hold senior-level managers accountable for meeting performance expectations. In addition, in January 2008, DCPS implemented a new performance management system for employees. Performance management systems for employees are generally used to set individual expectations, assess and reward individual performance, and plan work. In addition, as we previously reported in our March 2008 testimony, DCPS developed individual performance evaluations as a part of its performance management system in order to assess central office employees’ performance. Previously, performance evaluations were not conducted for most DCPS staff. 
Individual performance evaluations are now used to assess central office employees on several core competencies twice a year. Prior to our March 2008 testimony, DCPS officials told us that they intended to align the performance management system with organizational goals by January 2009, and DCPS has taken some steps to improve alignment. For example, DCPS officials told us they had better aligned their departmental scorecards to their 2009 annual performance plan. However, DCPS has not yet explicitly linked employee performance evaluations to the agency’s overall goals. DCPS officials told us they plan to do so in the summer of 2009. The state superintendent’s office also implemented a new performance management system, effective October 2008, to hold its employees accountable and improve the office’s performance. The office is converting to a single electronic management system to track and evaluate employee performance by December 2009. According to an official from the state superintendent’s office, this system links individual employee evaluations to overall performance goals and the office’s strategic plan. Under this new evaluation system, each employee is given a position description, which includes responsibilities and duties linked to the overall goals, mission, and vision of the state superintendent’s office. Individual and agency expectations are defined in an annual performance meeting with the employee. The office is currently training supervisory employees on how to use the system before its full implementation in December 2009. In addition to implementing a performance management system, the State Superintendent has begun to address long-term deficiencies identified by Education related to federal grant management. Education designated the District as a high-risk grantee because of its poor management of federal grants. 
If the District continues to be designated as a high-risk grantee, Education could respond by taking several actions, such as discontinuing one or more federal grants made to the District or having a third party take control over the administration of federal grants. As noted in a recent GAO report, the state superintendent’s office uses findings from an annual audit as part of its risk assessment and monitoring of subrecipients. The findings are used to design monitoring programs and determine risk levels for each school district, and the risk levels are used to develop monitoring strategies and work plans. The state superintendent’s office developed a corrective action plan, which it reports to Education and intends to use the plan to strengthen the monitoring of the school districts. The District’s Mayor and his education team have taken bold steps to improve the learning environment of the District’s students. As more initiatives are developed, the need to balance the expediency of the reform efforts with measures to increase sustainability, such as stakeholder involvement, is critical. DCPS currently lacks certain planning processes, such as communicating information to stakeholders in a timely manner and incorporating stakeholder feedback at key junctures, which would allow for a more transparent process. Stakeholder consultation in planning and implementation efforts can help create a basic understanding of the competing demands that confront most agencies and the limited resources available to them. Continuing to operate without a more formal mechanism for stakeholder involvement could diminish support for the reform efforts, undermine their sustainability, and ultimately compromise the potential gains in student achievement. 
In addition, since the Reform Act, the District has taken several steps to improve central office operations, such as providing more accountability at the departmental level and implementing a new individual performance management system. However, DCPS has not yet aligned its performance management system, including its individual performance evaluations, to its organizational goals, which could result in a disparity between employees’ daily activities and services needed to support schools. By ensuring that employees are familiar with the organizational goals and that their daily activities reflect these goals, DCPS could improve central office accountability and support to schools. In our report that we publicly released today, we make two recommendations that could improve the implementation and sustainability of key initiatives in the District’s transformation of its public school system. We recommend that the Mayor direct DCPS to (1) establish planning processes that include mechanisms to evaluate its internal capacity and communicate information to stakeholders and, when appropriate, incorporate their views, and (2) link individual performance evaluations to the agency’s overall goals. In written comments on the report, all three District education offices—DCPS, the state superintendent’s office, and the Deputy Mayor for Education—concurred with our recommendations. However, they expressed concern with the way in which we evaluated their reform efforts and the overall tone of the draft report. A summary of the District’s response to our findings and recommendations, as well as our evaluation of the response, is contained on pages 41 and 42 of the report. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. For further information regarding this testimony, please contact Cornelia Ashby at (202) 512-7215 or [email protected]. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Elizabeth Morrison, Assistant Director, Sheranda Campbell, and Nagla’a El-Hodiri. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony presents information on the District of Columbia's (D.C. or the District) progress in reforming its public school system. The District's school system has had long-standing problems with student academic performance, the condition of school facilities, and its overall management. The District's public schools have fallen well behind the District's own targets for demonstrating adequate yearly progress toward meeting the congressionally mandated goal of having 100 percent of students proficient in math, reading, and science by 2014, as outlined in the Elementary and Secondary Education Act of 1965, as amended by the No Child Left Behind Act (NCLBA). In addition, the U.S. Department of Education (Education) designated the District as a high-risk grantee in April 2006 because of its poor management of federal grants. Of the nearly $762 million the District spends on D.C. public schools (DCPS), 16 percent comes from federal sources. In an effort to address the school system's long-standing problems, the Council of the District of Columbia (D.C. Council) approved the Public Education Reform Amendment Act of 2007 (Reform Act), which made major changes to the operations and governance of the school district. The Reform Act gave the Mayor broad authority over the District's public school system, including curricula, operations, budget, personnel, and school facilities. In doing so, the District joined a growing number of cities to adopt mayoral governance of public school systems in an effort to expedite major reforms. The Reform Act transferred the day-to-day management of the public schools from the Board of Education to the Mayor and placed DCPS under the Mayor's office as a cabinet-level agency. It also moved the state functions into a new state superintendent's office, established a separate facilities office, and created the D.C. Department of Education headed by the Deputy Mayor for Education. 
Because of the broad changes in governance, Congress asked GAO to evaluate the District's reform efforts. In our report, we addressed the following questions: (1) What steps has the District taken to address student academic achievement? (2) What actions has the District taken to strengthen the quality of teachers and principals? (3) To what extent have the District's education offices developed and implemented long-term plans and how has DCPS used stakeholder input in key initiatives? (4) What steps have DCPS and the state superintendent's office taken to improve their accountability and performance? DCPS's early efforts to improve student achievement focused on implementing initiatives to improve student performance, including implementing a new staffing model; restructuring underperforming schools; and creating and enhancing data systems. During the first 2 years of its reform efforts, DCPS implemented several classroom-based initiatives to improve students' basic skills in core subjects. For example, to improve students' basic skills and standardized test scores in reading and math, DCPS introduced targeted interventions for students struggling in these subjects and provided additional instruction and practice to improve students' responses to open-ended questions, including test questions. DCPS is also attempting to improve the quality of its teacher and principal workforce by hiring new teachers and principals and by providing professional development, but it has encountered challenges in effectively implementing these changes. DCPS focused on a workforce replacement strategy to strengthen teacher and principal quality. After the 2007-2008 school year, about one-fifth of the teachers and one-third of the principals resigned, retired, or were terminated from DCPS. DCPS terminated about 350 teachers and an additional 400 teachers accepted financial incentives offered by DCPS to resign or retire in the spring of 2008. 
In addition, DCPS did not renew the contracts of 42 principals. To replace the teachers and principals who left the system, DCPS launched a nationwide recruitment effort for the 2008-2009 school year and hired 566 teachers and 46 principals for the 2008-2009 school year. The state superintendent's office and DCPS each developed 5-year strategic plans and involved stakeholders in developing these plans. DCPS released the draft of its 5-year strategic plan in late October 2008. In contrast to the state-level plan which includes the public charter schools, the DCPS plan is specific to prekindergarten through grade 12 education in its 128 schools. DCPS recently increased its efforts to involve stakeholders in various initiatives; however, it has not always involved stakeholders in key decisions and initiatives. DCPS and the state superintendent's office also have taken steps to improve accountability and performance of their offices. While DCPS has taken steps to improve accountability and link its individual performance management system to organizational goals, it has not yet linked its employee expectations and performance evaluations to organizational goals. DCPS has taken steps to improve accountability and performance of its central office. To improve accountability for central office departments, DCPS developed departmental scorecards to identify and assess performance expectations for each department. The state superintendent's office also implemented a new performance management system, effective October 2008, to hold its employees accountable and improve the office's performance. The office is converting to a single electronic management system to track and evaluate employee performance by December 2009.
DOD defines its training infrastructure to include billeting, mess facilities, classrooms, equipment, software packages, and instructors used to provide, facilitate, or support training of the military forces. There are essentially three types of training: unit training, civilian personnel training, and formal training and education for military personnel. Unit training consists of military mission-type training performed at the unit level under the control of the unit commander. Civilian personnel training consists of various training courses offered to civilian personnel to enhance their job functions. This type of training does not have a formal training structure and, therefore, does not have a definable training infrastructure. The third type of training—formal education and training of military personnel—has a definable training infrastructure and is managed by the services’ training commands. Our review focused on the third type of training. DOD has the following six categories of formal training and education programs for military personnel.

Recruit training: includes introductory physical conditioning and basic military indoctrination and training.

One-station unit training: an Army program that combines recruit and specialized skill training into a single course.

Officer acquisition training: includes all types of education and training leading to a commission in one of the services.

Specialized skill training: provides officer and enlisted personnel with initial job qualification skills or new or higher levels of skill in their current military specialty or functional area.

Flight training: provides the flying skills needed by pilots, navigators, and naval flight officers. It does not include formal advanced flight training, which is provided by the services’ advanced flight training organizations. 
Professional development education: includes educational courses conducted at the higher-level service schools or at civilian institutions to broaden the outlook and knowledge of senior military personnel or to impart knowledge in advanced academic disciplines.

Analysis of DOD’s end strengths, training workloads, and overall training budgets between fiscal years 1987 and 1995 showed that end strengths and training workloads have decreased at much greater rates than the training budget. Between fiscal years 1987 and 1995, the number of Army, Navy, Marine Corps, and Air Force active duty personnel decreased from about 2.2 million to about 1.5 million—a reduction of about 30 percent. During the same period, the training workloads for formal training and education programs decreased from about 248,000 to about 178,000—a reduction of about 28 percent. However, military personnel funding, which is used to pay military students, instructors, and training support and management personnel, decreased by only about 15 percent, and operation and maintenance (O&M) funding, which is used to pay DOD civilian and contractor instructors and to operate, maintain, and support training facilities and equipment, increased about 30 percent. Figure 1 shows trends in military end strengths, training workloads, and funding between fiscal years 1987 and 1995. Training workload and funding information is broken out by the six formal training and education categories in appendix II. As shown above, the decreases in military end strengths and training workloads are fairly consistent over the period. However, the funding trends—especially the increase in O&M funds—are at variance with the downward trends for military end strengths and the training workloads. On a per student training year basis, the cost per student was $53,194 in fiscal year 1987 and $72,546 in fiscal year 1995. 
When the fiscal year 1987 rate is inflated to fiscal year 1995 dollars, the fiscal year 1987 per student cost is $68,354, or about $4,192 less than the actual cost in fiscal year 1995. This cost differential, when multiplied by the fiscal year 1995 training workload, shows that since fiscal year 1987, training costs have increased about $745 million more than normal inflation even though the training workload has decreased. Officials told us that the increase in O&M training funding was due primarily to the increased use of contractor personnel to teach the courses that were previously taught by the military services and paid for with military personnel appropriations funds. Other reasons included (1) increased use of private-sector facilities, (2) civilian personnel pay increases, (3) increased costs of operating training bases and facilities, and (4) temporary-duty allowances or permanent change of station costs for students and training personnel. Officials attributed the smaller reduction in military personnel funding mainly to increases in military pay and allowances for students and military personnel supporting formal training and education activities. Cost data were not available that would allow us to determine the extent to which each of these reasons affected costs. Without this type of information, it was not possible to determine whether decisions affecting the current or planned method of providing training were correct or whether some alternative means of providing the same training would be more cost-effective. 
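The cost comparison above is simple arithmetic and can be checked directly. The short sketch below uses only the figures reported in this section (the inflation adjustment is not computed independently; it is implied by the restated $68,354 figure) to reproduce the roughly $745 million in cost growth beyond normal inflation:

```python
# Sketch verifying the per-student training cost comparison.
# All figures are taken from the text, not derived independently.

cost_1987 = 53_194           # per-student cost, FY 1987 (then-year dollars)
cost_1987_in_1995 = 68_354   # FY 1987 cost restated in FY 1995 dollars
cost_1995 = 72_546           # per-student cost, FY 1995
workload_1995 = 178_000      # FY 1995 training workload (student years)

# Real (above-inflation) growth in the per-student cost
differential = cost_1995 - cost_1987_in_1995   # $4,192

# Total cost growth beyond normal inflation across the FY 1995 workload
excess = differential * workload_1995

print(f"per-student differential: ${differential:,}")
print(f"excess over inflation: about ${excess / 1e6:,.0f} million")
```

The product works out to about $746 million, which the text rounds to about $745 million.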
Actions already implemented or planned for implementation by the services, DOD, and the BRAC over the next several years are expected to further reduce and streamline the training infrastructure for military personnel by reducing the number of locations at which a service teaches a particular course; increasing interservice training for similar curricula; increasing the number of private sector instructors, courses, and training facilities; and closing or realigning bases at which formal training is now provided. According to DOD officials, many of the actions to reduce and streamline the training infrastructure are still ongoing, and the effect of these actions will not be known until after fiscal year 1996. Consequently, we could not quantify either the expected reduced infrastructure or the savings. Adding to the difficulties of evaluating DOD’s planned and ongoing actions is the lack of a plan to guide and measure progress in terms of how much reduction is needed, how the reductions will be achieved, what they will cost, and when they will be accomplished. The number of locations at which training is provided decreased from 265 to 172 from fiscal years 1987 to 1995, as shown in table 1. As shown in the table, the number of formal training locations has decreased rather significantly, with professional education being the area where the largest decreases occurred. In certain cases, the reductions were achieved by redefining the courses and consolidating the training locations. For example, the Marine Corps decreased the number of its professional education courses from 17 to 6 by redefining and renaming the courses and reducing the number of training locations. Since 1972, the services have participated in a voluntary process conducted by the Interservice Training Review Organization (ITRO) to identify opportunities to consolidate and/or collocate existing initial skills training. 
Between 1972 and 1992, ITRO focused primarily on individual courses rather than all courses in a functional training area—families of similar types of tasks and training courses. DOD estimated that ITRO’s recommended consolidations and collocations of training courses have resulted in approximately $300 million in savings. In 1993, in response to a Commission on Roles and Missions recommendation, the Chairman of the Joint Chiefs of Staff directed ITRO to conduct a thorough review of all initial and follow-on technical training to identify additional areas for consolidation and/or collocation. ITRO’s Military Training Structure Review, which was completed in 1995, identified opportunities to reduce the number of training locations for 10 functional areas from 35 to 18, involving 101 courses as shown in table 2. Based on DOD projections, most of the recommended course consolidations and collocations will not be implemented until fiscal year 1996 or later. DOD estimates that full implementation of the recommendations for the functional areas would result in a one-time savings of about $2.4 million and annual recurring savings of about $680,000. According to Marine Corps officials, when all the training consolidations are completed, about 77 percent of all Marine Corps formal school training will be conducted at other service locations. In addition to these reductions in training locations, ITRO projects additional savings will be achieved based on its recommendations for the communications functional area. Although the number of training locations will remain the same, ITRO projects that its proposed location changes will achieve a one-time savings of approximately $2 million and annual recurring savings of about $6.6 million. Data were not available, however, to enable us to confirm those projections. 
To date, DOD officials noted that the Navy has been the most active user of private-sector instructors, replacing about 700 of its military instructors with contractor personnel and exploring opportunities to further privatize additional courses and instructor positions. The Navy’s goal is to replace an additional 2,000 military instructors with private-sector instructors. DOD and service officials told us that the services, on a very limited basis, contract with community colleges and universities to provide training to their personnel. However, DOD officials said that they could not quantify the extent to which the services use private-sector instructors and facilities. Additionally, DOD and service officials have expressed concerns about contractor-provided training in a civilian environment, particularly for newly enlisted personnel. The service officials believe they need to maintain a military environment for new personnel. The officials said that the services are more receptive to contractor-provided training for follow-on training and professional development education because by the time the military personnel are ready for these advanced courses, they have been acclimated to the military environment. The officials also expressed concerns about the lack of flexibility in using contractor personnel, noting that factors such as deployments and changes to training requirements frequently require changes to training schedules. If contractor personnel are providing the training, changes of this type result in contract adjustments, which often translate into more money. Service officials pointed out, however, that contractor-provided training is advantageous when the required training equipment is expensive, the training course is offered infrequently, and the number of attendees is relatively small. DOD, as part of a recommendation by the 1995 Commission on Roles and Missions, is looking for additional opportunities to privatize training functions. 
To provide technical assistance in this process, DOD contracted with the Logistics Management Institute. At the time we completed our review in January 1996, the effort had not been completed. Consequently, we could not quantify the additional opportunities for privatization or the savings that such actions would produce. Since 1987, BRAC has recommended base closures and mission realignments that, when fully implemented, will reduce the number of locations where the services provide formal training for military personnel. As shown in table 3, the Commission has recommended 25 mission realignments and 17 installation closures that affect where the services provide formal military training. Despite the BRAC actions, DOD senior officials recognized that excess infrastructure would remain even after completion of the 1995 BRAC round. The Chairman of the Joint Chiefs of Staff, on March 1, 1995, testified before the BRAC Commission that excess capacity would remain after the 1995 BRAC. He cited the need for future base closure authority and said that opportunities remain regarding cross-servicing, particularly in the area of joint-use bases and training facilities. Our examination of the 1995 BRAC recommendations identified several Army training-related installations with relatively low military value that were not proposed for closure because of the up-front closure costs, despite projected savings in the long term. The Navy's analysis indicated that its primary pilot and advanced helicopter training requirements were 19 to 42 percent below peak historic levels. However, BRAC 1995 did little to change this situation because only one Navy air training facility was slated for realignment, none for closure. Further, the services could not agree on an alternative for consolidating rotary-wing training at one central location. As a result, they were left with capacity for rotary-wing training that was more than twice the ramp space needed.
According to service training officials, if downsizing continues, it will be more difficult to eliminate any excess training capacity that is identified now that the BRAC process is over. We recommend that the Secretary of Defense direct the DOD Comptroller, as part of the Department's efforts to improve its finance and accounting systems, to provide for the centralized accumulation and tracking of information on institutional training costs. At a minimum, such information should capture and report the costs in each category in terms of military and civilian instructors, student stipends, facilities, contractor-provided services, and base O&M for the training facilities. This information would allow decisionmakers to evaluate the cost of each alternative when deciding the best method for providing training in each category. We also recommend that the Secretary of Defense develop a long-range plan to guide and measure the services' efforts to reduce the training infrastructure. The plan should identify (1) how much the training infrastructure should be reduced, (2) how the reductions will be achieved, (3) what it will cost to achieve the reductions, and (4) when the reductions will be accomplished. We further recommend that the Secretary of Defense develop a plan that identifies how DOD will deal with excess installations and facilities that are being funded by the training account after the BRAC process is completed. DOD did not agree with our recommendations. It said that the recommendation to improve its finance and accounting system to accumulate and track cost data on institutional training would incur additional unnecessary costs, be incompatible with existing financial data systems, and require rule-of-thumb allocations of facilities and training resources.
We agree that the accumulation of such cost data may be incompatible with DOD’s existing systems; however, as it goes forward with its efforts to improve the existing systems, DOD should make adjustments to accumulate training cost data. Without such data, DOD cannot determine whether the current method of providing training is the most cost effective or whether an alternative method would be more cost effective. DOD also did not agree with our recommendation for developing a long-range plan that would set out how much the training infrastructure should be reduced, how the reductions will be achieved, what it will cost to achieve the reductions, and when the reductions will be accomplished. DOD officials said that they already assess the services’ plans for accomplishing their training requirements as part of the annual budget process and Future Years Defense Program. They said that the report assumes that further infrastructure reductions can be made and that the report does not adequately consider the reduction initiatives already accomplished or in process. The officials said that they were not convinced that further reductions are possible and were unsure how to go about setting long-term reduction objectives. The officials also said that the report does not recognize factors that could increase the need for training resources even though there has been a reduction in military end strength and accessions. DOD is correct that we believe further training infrastructure reductions are possible. As our report notes, DOD continues to seek opportunities for reductions and DOD officials have testified that further reductions are possible. With regard to a possible need for additional training resources, even with a reduction in end strength and accessions, our analysis of the Future Years Defense Program shows that training costs remain fairly constant with a slight decrease during the program period. 
We do not agree with DOD's position regarding establishing long-term infrastructure reduction objectives. In our opinion, unless DOD establishes objectives that set forth how much the infrastructure should be reduced, how the reductions will be accomplished, what it will cost, and when it will be accomplished, it will not know when it has reached the optimal infrastructure size. Reviewing and assessing training requirements on an annual basis as part of the budget process will not accomplish these objectives. Regarding our recommendation that a plan be developed that shows how DOD will deal with excess training installations after the BRAC process is completed, DOD said that the report provides little data and no examples to support this recommendation. DOD is correct that our report does not identify excess installations or facilities. It was not our intent to single out specific facilities as being excess to the training needs. The intent of the recommendation was to develop a process that DOD could use when it identifies excess training installations and facilities. Throughout our review, a common concern expressed by training officials responsible for managing and providing the training was that after the BRAC process is completed, there would still be excess training facilities and installations. The officials said that it will become extremely difficult to dispose of the unneeded facilities in the absence of a BRAC-like process. The complete text of DOD's comments is in appendix III. Because DOD has indicated that it will not take action to correct the problems we have identified, and the problems are significant, Congress may wish to ensure that DOD addresses the identified problems.
We are sending copies of this report to the Secretary of Defense and the Secretaries of the Army, the Navy, and the Air Force; the Director of the Office of Management and Budget; the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, Senate Committee on Armed Services, and House Committee on National Security; and other interested congressional committees. Copies will also be made available to others upon request. Please contact me at (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix IV. To determine the size of the Department of Defense (DOD) training infrastructure in fiscal year 1995 and what changes have occurred to it since fiscal year 1987, we interviewed and obtained documentation from personnel in the Office of the Under Secretary of Defense and the training commands of the four services. In addition, we obtained and analyzed information from the Defense Manpower Data Center (DMDC) on military end strengths, student entrants into the six formal training and education categories, and funding through the operation and maintenance (O&M) and the military personnel appropriations. To identify specific changes in the number of locations where formal training and education were provided, we compared the breakouts of the training facilities shown in DMDC's Military Manpower Training Reports for fiscal years 1987 and 1995. To identify actions taken since fiscal year 1987 to reduce the training infrastructure, we interviewed DOD and the services' training command officials and analyzed information on course offerings, locations, and attendance for fiscal years 1987 and 1995. We also obtained and analyzed internal studies performed to identify opportunities to consolidate and collocate training facilities and courses.
Additionally, we held discussions with responsible officials to determine what future plans and initiatives DOD has to further the privatization of military training. Along these same lines, we assessed the impact of the Base Realignment and Closure (BRAC) Commission's recommendations on the DOD training infrastructure by comparing the Commission's recommended closures and realignments to the list of installations where formal training and education were being provided in fiscal year 1987. We also held discussions with service officials to identify the specific actions and training reorganizations taken by the services to comply with BRAC recommendations. We performed our review at the Office of the Joint Chiefs of Staff, Joint Exercise and Training Division; the Office of the Under Secretary of Defense, Personnel and Readiness; Headquarters, Air Education and Training Command, Randolph Air Force Base, Texas; Headquarters, U.S. Army Training and Doctrine Command, Fort Monroe, Virginia; the Office of the Chief of Naval Education and Training, Naval Air Station, Pensacola, Florida; and the Marine Corps Combat Development Command, Training and Education Division, Marine Corps Base, Quantico, Virginia. We performed our review from July 1995 to February 1996 in accordance with generally accepted government auditing standards. [Table data on costs by training category, including costs not directly allocated to individual training categories, omitted as unrecoverable.] For specialized skill and professional development training, the student workload figures are somewhat understated in 1987 because they do not include all Air Force programs now reported in 1995. In addition, some reported data have been realigned to different reporting categories since 1987; that is, the Air Training Command Noncommissioned Officer Academy student production was reported as specialized skill training in 1987 but is now reported under professional development. Major contributors to this report (appendix IV): Sharon A. Cekala, Robert J. Lane, Robert L. Self, W. Bennett Quade, and Irene A. Robertson.
GAO reviewed the Department of Defense's (DOD) efforts to reduce its formal training infrastructure, focusing on: (1) the size of the active forces' formal training infrastructure; and (2) planned, completed, or ongoing plans to reduce this infrastructure. GAO found that: (1) the formal military training and education cost per student increased about $4,200 between fiscal years (FY) 1987 and 1995; (2) despite a decrease in the training workload, training costs have increased about $745 million more than normal inflation between FY 1987 and 1995; (3) DOD officials reported that the main reason for the increase is the use of private-sector and civilian instructors; (4) planned or ongoing actions to reduce training infrastructure include reducing the number of locations at which a service teaches a particular course, increasing interservice training for similar curricula, increasing the number of private-sector instructors, courses, and training facilities, and closing or realigning bases that provide formal training; (5) DOD lacks an overall plan to guide and measure training infrastructure reduction; (6) the number of locations that provide training decreased from 265 to 172 between FY 1987 and 1995; (7) DOD estimated that increases in interservice training have resulted in about $300 million in savings and that future course consolidations and collocations would result in one-time savings of about $4.4 million and annual recurring savings of about $7.28 million; and (8) despite expected base closures and mission realignments, DOD expects that excess training infrastructure will continue to exist.
Internal control is not one event, but a series of actions and activities that occur throughout an entity’s operations and on an ongoing basis. Internal control should be recognized as an integral part of each system that management uses to regulate and guide its operations rather than as a separate system within an agency. In this sense, internal control is management control that is built into the entity as a part of its infrastructure to help managers run the entity and achieve their goals on an ongoing basis. Section 3512 (c), (d) of Title 31, U.S. Code (commonly known as the Federal Managers’ Financial Integrity Act of 1982 (FMFIA)), requires agencies to establish and maintain internal control. The agency head must annually evaluate and report on the control and financial systems that protect the integrity of federal programs. The requirements of FMFIA serve as an umbrella under which other reviews, evaluations, and audits should be coordinated and considered to support management’s assertion about the effectiveness of internal control over operations, financial reporting, and compliance with laws and regulations. Office of Management and Budget (OMB) Circular No. A-123, “Management’s Responsibility for Internal Control” (revised Dec. 21, 2004), provides the implementing guidance for FMFIA, and sets out the specific requirements for assessing and reporting on internal controls consistent with the internal control standards issued by the Comptroller General of the United States. The circular, which was revised in 2004 with the revisions effective for fiscal year 2006, defines management’s responsibilities related to internal control and the process for assessing internal control effectiveness, and provides specific requirements for conducting management’s assessment of the effectiveness of internal control over financial reporting. 
The circular requires management to annually provide assurances on internal control in its Performance and Accountability Report, and for the 24 Chief Financial Officers (CFO) Act agencies to include a separate assurance on internal control over financial reporting, along with a report on identified material weaknesses and corrective actions. The circular also emphasizes the need for integrated and coordinated internal control assessments that synchronize all internal control-related activities. FMFIA requires GAO to issue standards for internal control in the federal government. GAO’s Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. As summarized in GAO’s Standards for Internal Control in the Federal Government, the minimum level of quality acceptable for internal control in the government is defined by the following five standards, which also provide the basis against which internal controls are to be evaluated: Control environment: Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. Risk assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Control activities: Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. Information and communications: Information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. 
Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. The third control standard—internal control activities—helps ensure that management’s directives are carried out. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. In other words, they are the activities conducted in the everyday course of business that accomplish a control objective, such as ensuring IRS employees successfully complete background checks prior to being granted access to taxpayer information and receipts. As such, control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achievement of effective results. A key objective in our annual audits of IRS’s financial statements is to obtain reasonable assurance about whether IRS maintained effective internal controls with respect to financial reporting, including safeguarding of assets, and compliance with laws and regulations. While all five internal control standards are critical and are used by us as a basis for evaluating the effectiveness of IRS’s internal controls, we place a heavy emphasis on testing control activities. This has resulted in the identification of issues in certain internal controls over the years and recommendations for corrective action. To accomplish our objectives, we evaluated the effectiveness of IRS’s corrective actions implemented in response to open recommendations during fiscal year 2006 as part of our fiscal years 2006 and 2005 financial audits. 
To determine the current status of the recommendations, we (1) obtained the status of each recommendation and corrective action taken or planned as of April 2007, as reported to us by IRS, and (2) compared IRS's assessment to our fiscal year 2006 audit findings to identify any differences between IRS's and our conclusions regarding the status of each recommendation. In order to determine how these recommendations fit within IRS's management and internal control structure, we compared the open recommendations, and the issues that gave rise to them, to the control activities listed in GAO's Standards for Internal Control in the Federal Government and to the list of major factors and examples outlined in our Internal Control Management and Evaluation Tool. We also considered how the recommendations and the underlying issues were categorized in our prior reports, whether IRS had addressed, in whole or in part, the underlying control issues that gave rise to the recommendations, and other legal requirements and implementing guidance, such as OMB Circular No. A-123; FMFIA; and the Federal Information System Controls Audit Manual (FISCAM), GAO/AIMD-12.19.6 (revised June 2001). We conducted our review from December 2006 through April 2007 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue or his designee on May 7, 2007. We received comments from IRS on May 18, 2007. IRS continues to make progress on addressing its significant financial management challenges. Over the years since we first began auditing IRS's financial statements in fiscal year 1992, we have closed out over 200 financial management-related recommendations we made based on actions IRS has taken to improve its internal controls and operational efficiency. This includes 25 recommendations we are closing based on actions IRS took during the period covered by our fiscal year 2006 financial audit.
At the same time, however, our audits continue to identify internal control issues, resulting in our making further recommendations for corrective action, including 28 new financial management-related recommendations resulting from our fiscal year 2006 financial audit. These internal control issues, and the resulting recommendations, can be directly traced to the control activities in GAO’s Standards for Internal Control in the Federal Government. As such, it is essential that they be fully addressed and resolved to strengthen IRS’s overall financial management and to assist it in achieving its goals and mission. In June 2006, we issued a report on the status of IRS’s efforts to implement corrective actions to address financial management recommendations stemming from our fiscal year 2005 and prior year financial audits and other financial management-related work. In that report, we identified 72 audit recommendations that at that time remained open and thus required corrective action by IRS. A significant number of these recommendations had been open for several years, either because IRS had not taken corrective action or because the actions taken had not fully and effectively resolved the issues that gave rise to the recommendations. IRS continued to work to address many of the internal control issues to which these open recommendations relate. In the course of performing our fiscal year 2006 financial audit, we identified numerous actions IRS took to address many of its internal control issues. On the basis of IRS’s actions, which we were able to substantiate through our audit, we are able to close 25 of these prior years’ recommendations. IRS considers another 26 of the prior years’ recommendations to be effectively addressed. 
However, we still consider them to be open either because we had not yet been able to verify the effectiveness of IRS’s actions—they occurred subsequent to completion of our audit testing and thus have not been verified, which is a prerequisite to our closing a recommendation—or because the actions taken did not fully address the issue that gave rise to the recommendation. However, continued efforts are needed by IRS to address its internal control issues. While we are able to close 25 financial management recommendations made in prior years, 47 recommendations from prior years remain open, a significant number of which have been outstanding for several years. In some cases, IRS may have effectively addressed the issues that gave rise to the recommendations subsequent to our fiscal year 2006 audit testing. However, in many cases, our fiscal year 2006 audit determined that the actions taken to date had not fully and effectively addressed the underlying internal control issues. Additionally, during our audit of IRS’s fiscal year 2006 financial statements, we identified additional issues that will require corrective action by IRS. In two recent management reports to IRS, we discussed these issues, and made 28 new recommendations to IRS to address these new issues. Consequently, a total of 75 financial management-related recommendations are currently open and need to be addressed by IRS. Of these, we consider 66 short term and 9 long term. 
Appendix I presents a list of (1) recommendations we have made based on our financial statement audits and other financial management-related work that we had not previously reported as closed prior to our fiscal year 2006 audit, (2) the status of each of these recommendations and corrective actions taken or planned as of April 2007 as reported to us by IRS, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively and fully addressed based on the work performed during our fiscal year 2006 financial statement audit. The appendix also lists new recommendations we have made based on our fiscal year 2006 financial statement audit. The appendix lists the recommendations by the date on which the recommendation was made and by report number. Linking the open recommendations from our financial audits and other financial management-related work, and the issues that gave rise to them, to internal control activities that are central to IRS’s tax administration responsibilities provides insight regarding their significance. Control activities, one of the five broad standards contained in GAO’s Standards for Internal Control in the Federal Government, are the policies, procedures, techniques, and mechanisms that enforce management’s directives. As such, they are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achievement of results. GAO’s Standards for Internal Control in the Federal Government defines 11 control activities. These control activities can be further grouped into three broad categories: Safeguarding of assets and security activities, including physical control over vulnerable assets, segregation of duties, controls over information processing, and access restrictions to and accountability for resources and records. 
Proper recording and documenting of transactions, including appropriate documentation of transactions and internal control, accurate and timely reporting of transactions and events, and proper execution of transactions and events. Effective management review and oversight, including reviews by management at the functional or activity level, establishment and review of performance measures and indicators, management of human capital, and top-level reviews of actual performance. Each of the open recommendations from our financial audits and financial management-related work, and the underlying issues that gave rise to them, can be traced back to the 11 control activities and their three broad categories. Table 1 presents a summary of the open recommendations, each tied back to the control activity to which it relates. As table 1 indicates, 19 recommendations (25 percent) relate to issues associated with IRS’s lack of effective controls over safeguarding of assets and security activities. Another 33 recommendations (44 percent) relate to issues associated with IRS’s inability to properly record and document transactions. The remaining 23 open recommendations (31 percent) relate to issues associated with the lack of effective management review and oversight. On the following pages, we group the 75 open recommendations under the control activity to which the condition that gave rise to them most appropriately fits. We first define each control activity as presented in GAO’s Standards for Internal Control in the Federal Government and briefly identify some of the key IRS operations that fall under that control activity. Although not comprehensive, the descriptions are intended to help explain why actions to strengthen these control activities are important for IRS to effectively carry out its overall mission. For each recommendation, we also indicate whether it is a short-term or long-term recommendation. 
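The table 1 distribution can be tallied directly from the figures cited above (a simple illustrative check; the shortened category labels are ours):

```python
# Open recommendations by control-activity category, as cited in the text.
open_recs = {
    "safeguarding of assets and security": 19,
    "recording and documenting transactions": 33,
    "management review and oversight": 23,
}

total = sum(open_recs.values())
print(total)  # 75 open recommendations in all

for category, count in open_recs.items():
    # Percentages round to the 25 / 44 / 31 percent cited in the text.
    print(f"{category}: {count} ({count / total:.0%})")
```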
Given IRS’s mission, the sensitivity of the data it maintains, and its processing of trillions of dollars of tax receipts each year, one of the most important control activities at IRS is the safeguarding of assets. Internal control in this important area should be designed to provide reasonable assurance regarding prevention or prompt detection of unauthorized acquisition, use, or disposition of an agency’s assets. We have grouped together the four control activities in GAO’s Standards for Internal Control in the Federal Government that relate to safeguarding of assets (including tax receipts) and security activities (such as limiting access to only authorized personnel): (1) physical control over vulnerable assets, (2) segregation of duties, (3) controls over information processing, and (4) access restrictions to and accountability for resources and records. An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records. IRS is charged with collecting over $2 trillion in taxes each year, a significant amount of which is collected in the form of checks and cash accompanied by tax returns and related information. IRS collects taxes both at its own facilities as well as at lockbox banks that operate under contract with the Department of the Treasury’s Financial Management Service (FMS) to provide processing services for certain taxpayer receipts for IRS. IRS acts as custodian for (1) the tax payments it receives until they are deposited in the General Fund of the U.S. Treasury and (2) the tax returns and related information it receives until they are either sent to the Federal Records Center or destroyed. 
IRS is also charged with controlling many other assets, such as computers and other equipment, but IRS's legal responsibility to safeguard tax returns and the confidential information taxpayers provide on tax returns makes the effectiveness of its internal controls with respect to physical security essential. IRS receives cash and checks mailed to its service centers or lockbox banks with accompanying tax returns and information or payment vouchers and payments made in person at one of its offices. While effective physical safeguards over receipts should exist throughout the year, they are especially important during the peak tax filing season. Each year during the weeks preceding and shortly after April 15, an IRS service center campus (SCC) or lockbox bank may receive and process daily over 100,000 pieces of mail containing returns, receipts, or both. The dollar value of receipts each service center and lockbox bank processes increases to hundreds of millions of dollars a day during the April 15 time frame. Of our 75 open recommendations, the following 12 are designed to improve IRS's physical controls over vulnerable assets. All are short-term in nature. (See table 2.) Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. No one individual should control all key aspects of a transaction or event. IRS employees are responsible for processing trillions of dollars of tax receipts each year, of which hundreds of billions are received in the form of cash or checks, and for processing hundreds of billions of dollars in refunds to taxpayers.
Consequently, it is critical that IRS maintain appropriate separation of duties to allow for adequate oversight of staff and protection of these vulnerable resources, so that no single individual is in a position to cause an error or irregularity, potentially convert an asset to personal use, and then conceal it. For example, when an IRS field office or lockbox bank receives taxpayer receipts and returns, it is responsible for depositing the cash and checks in a depository institution and forwarding the related information received to an SCC for further processing. To adequately safeguard receipts from theft, the person responsible for recording the information from the taxpayer receipts on a voucher should be different from the individual who prepares those receipts for transmittal to the SCC for further processing. The following four open recommendations would help IRS improve its separation of duties, which will in turn strengthen its controls over both tax receipts and refunds. All are short-term in nature. (See table 3.) A variety of control activities are used in information processing. Examples include edit checks of data entered, accounting for transactions in numerical sequences, and comparing file totals with control totals. There are two broad groupings of information systems control—general controls (for hardware such as mainframe, network, and end-user environments) and application controls (processing of data within the application software). General controls include entitywide security program planning and management, backup and recovery procedures, and contingency and disaster planning. Application controls are designed to help ensure the completeness, accuracy, authorization, and validity of all transactions during application processing. IRS relies extensively on computerized systems to support its financial and mission-related operations.
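The application controls described above, such as an edit check on entered data and a comparison of a file total with a control total, can be sketched in a few lines. This is a purely hypothetical illustration (the record fields and function names are invented, not drawn from IRS's actual systems):

```python
# Hypothetical illustration of two application controls: an edit check on
# entered payment records and a batch control-total comparison.

def edit_check(record):
    """Reject records with a missing identifier or a non-positive amount."""
    return bool(record.get("tin")) and record.get("amount", 0) > 0

def control_total_matches(records, expected_total):
    """Compare the sum of a batch's amounts against the batch control total."""
    return sum(r["amount"] for r in records) == expected_total

batch = [
    {"tin": "000-00-0001", "amount": 1200},
    {"tin": "000-00-0002", "amount": 450},
]
assert all(edit_check(r) for r in batch)
assert control_total_matches(batch, 1650)
```

A failed edit check or a control-total mismatch would flag the batch for correction before further processing.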
To efficiently fulfill its tax processing responsibilities, IRS relies extensively on interconnected networks of computer systems to perform various functions, such as collecting and storing taxpayer data, processing tax returns, calculating interest and penalties, generating refunds, and providing customer service. As part of our annual audits of IRS’s financial statements, we assess the effectiveness of IRS’s information security controls over key financial systems, data, and interconnected networks at IRS’s critical data processing facilities that support the processing, storage, and transmission of sensitive financial and taxpayer data. From that effort over the years, we have identified information security control weaknesses that impair IRS’s ability to ensure the confidentiality, integrity, and availability of its sensitive financial and taxpayer data. As of March 2007, there were 48 open recommendations from our information security work designed to improve IRS’s information security controls. Recommendations resulting from our information security work are reported separately and are not included in this report primarily because of the sensitive nature of some of these issues. However, the following open short-term recommendation is related to systems limitations and IRS’s need to enhance its computer programs. (See table 4.) Access to resources and records should be limited to authorized individuals, and accountability for their custody and use should be assigned and maintained. Periodic comparison of resources with the recorded accountability should be made to help reduce the risk of errors, fraud, misuse, or unauthorized alteration. Because IRS deals with a large volume of cash and checks, it is imperative that it maintain strong controls to appropriately restrict access to those assets, the records that track those assets, and sensitive taxpayer information. 
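The periodic-comparison control described above, comparing recorded accountability against an actual count of resources, can be sketched as follows. The asset names and amounts are hypothetical:

```python
# Hypothetical sketch of a periodic-comparison control: recorded
# accountability for assets is compared with an actual count, and any
# difference is flagged for follow-up.

recorded = {"cash_drawer_1": 5000, "cash_drawer_2": 3200}   # control records
counted = {"cash_drawer_1": 5000, "cash_drawer_2": 3150}    # physical count

def discrepancies(recorded, counted):
    """Return assets whose counted amount differs from the recorded amount."""
    return {asset: counted.get(asset, 0) - amount
            for asset, amount in recorded.items()
            if counted.get(asset, 0) != amount}

# Here cash_drawer_2 is short by 50 and would be flagged for investigation.
```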
Although IRS has a number of physical and information system controls in place, some of the issues we have identified in our financial audits over the years concern ensuring that individuals are appropriately vetted before being granted direct access to taxpayer receipts and information and that IRS maintains effective access security controls. The following two open short-term recommendations would help IRS improve its access restrictions to assets and records. (See table 5.) One of the largest obstacles continuing to face IRS management is the agency’s lack of an integrated financial management system capable of producing the accurate, useful, and timely information IRS managers need to assist in making well-informed day-to-day decisions. While IRS is making progress in modernizing its financial management capabilities, it nonetheless continues to face many pervasive internal control weaknesses related to its long-standing systems deficiencies that we have reported each year since we began auditing its financial statements in fiscal year 1992. However, IRS also has a number of internal control issues that relate to recording transactions, documenting events, and tracking the processing of taxpayer receipts or information, which do not depend upon improvements in information systems. We have grouped three control activities together that relate to proper recording and documenting of transactions: (1) appropriate documentation of transactions and internal controls, (2) accurate and timely recording of transactions and events, and (3) proper execution of transactions and events. Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form.
All documentation and records should be properly managed and maintained. IRS collects and processes trillions of dollars in taxpayer receipts annually both at its own facilities and at lockbox banks under contract to process taxpayer receipts for the federal government. Therefore, it is important that IRS maintain effective controls to ensure that all documents and records are properly managed and maintained both at its facilities and at the lockbox banks. In addition, IRS must adequately document and disseminate its procedures to ensure that they are available for IRS employees. The following 13 open recommendations would assist IRS in improving its documentation of transactions and internal control procedures. All are short-term in nature. (See table 6.) Transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire process or life cycle of a transaction or event, from initiation and authorization through its final classification in summary records. In addition, control activities help to ensure that all transactions are completely and accurately recorded. IRS is responsible for maintaining taxpayer records for tens of millions of taxpayers in addition to maintaining its own financial records. To carry out this responsibility, IRS often has to rely on outdated computer systems or manual work-arounds. Unfortunately, some of the recordkeeping difficulties we have reported on over the years will not be addressed until IRS can replace its aging systems, which is a long-term effort and is dependent on future funding. The following 19 open recommendations would strengthen IRS’s recordkeeping abilities. (See table 7.) Thirteen of these recommendations are short-term, and six are long-term. They include some specific recommendations regarding requirements for new systems for maintaining taxpayer records.
Several of the recommendations listed affect financial reporting processes, such as subsidiary records and appropriate allocation of costs. Some of the issues that gave rise to certain of our recommendations directly affect taxpayers, such as those involving duplicate assessments, errors in calculating and reporting manual interest, errors in calculating penalties, and recovery of trust fund penalty assessments. About 47 percent of these recommendations are almost 5 years old or older, and one is over 10 years old, reflecting the long-term nature of the resolution of some of these issues. Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. Authorizations should be clearly communicated to managers and employees. IRS employs tens of thousands of people in its 10 SCCs, three computing centers, and numerous field offices throughout the United States. In addition, the number of staff increases significantly during the peak of the tax filing season. Because of the tremendous number of personnel involved, IRS must maintain effective control over which employees are authorized to either view or change sensitive taxpayer data. IRS’s ability to establish access rights and permissions for information systems is a critical control. Each year, IRS pays out hundreds of billions of dollars in tax refunds, some of which are distributed to taxpayers manually. IRS requires that all manual refunds be approved by designated officials. However, weaknesses in the authorization of such approving officials expose the federal government to losses from the issuance of improper refunds. The following open short-term recommendation would improve IRS’s controls over its manual refund transactions. (See table 8.)
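The authorization control over manual refunds can be illustrated with a minimal sketch: a refund proceeds only when its approver appears on a roster of designated officials. The roster and record layout here are assumptions for illustration, not IRS's actual procedures:

```python
# Hypothetical sketch: a manual refund is valid only if its approver
# appears on the roster of designated approving officials.

DESIGNATED_OFFICIALS = {"official_a", "official_b"}  # assumed roster

def refund_authorized(refund):
    """Return True only when the refund's approver is a designated official."""
    return refund.get("approved_by") in DESIGNATED_OFFICIALS

assert refund_authorized({"amount": 900, "approved_by": "official_a"})
assert not refund_authorized({"amount": 900, "approved_by": "clerk_x"})
```

In practice the control also depends on keeping the roster itself current, which is the weakness the recommendation addresses.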
All personnel within IRS have an important role in establishing and maintaining effective internal controls, but IRS’s managers have additional review and oversight responsibilities. Management must set the objectives, put the control mechanisms and activities in place, and monitor and evaluate controls. Without effective monitoring by managers, internal control activities may not be carried out consistently and on time. We have grouped three control activities together related to effective management review and oversight: (1) reviews by management at the functional or activity level, (2) establishment and review of performance measures and indicators, and (3) management of human capital. Although we also include the control activity “top-level reviews of actual performance” in this grouping, we do not have any open recommendations to IRS related to this internal control activity. Managers need to compare actual performance to planned or expected results throughout the organization and analyze significant differences. IRS has over 80,000 full-time employees and hires over 20,000 seasonal personnel to assist during the tax filing season. In addition, as discussed earlier, Treasury’s Financial Management Service contracts with banks to process tens of thousands of individual receipts, totaling hundreds of billions of dollars. At any organization, management oversight of operations is important, but with an organization as vast in scope as IRS, management oversight is imperative. The following 17 open short-term recommendations would improve IRS’s management oversight. (See table 9.) Many of these recommendations were made to correct instances where an internal control activity either does not exist or where an established control is not being adequately or consistently applied. 
Several of these recommendations emphasize improvements needed in IRS’s oversight of lockbox banks and contracted courier programs in order to ensure appropriate physical control over vulnerable assets, such as taxpayer receipts. However, a number of these recommendations are aimed at enhancing IRS’s own assessment of its internal controls over financial reporting in accordance with the requirements of the revised OMB Circular No. A-123. Activities need to be established to monitor performance measures and indicators. These controls could call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. Controls should also be aimed at validating the propriety and integrity of both organizational and individual performance measures and indicators. IRS’s operations include a vast array of activities encompassing educating taxpayers, processing taxpayer receipts and data, disbursing hundreds of billions of dollars in refunds to millions of taxpayers, maintaining extensive information on tens of millions of taxpayers, and seeking collection from individuals and businesses that fail to comply with the nation’s tax laws. Within its compliance function, IRS has numerous activities, including identifying businesses and individuals that underreport income, collecting from taxpayers that do not pay, and collecting from those receiving refunds for which they are not eligible. Although IRS has, at its peak, over 100,000 employees, it still faces resource constraints in attempting to fulfill its duties. Because of this, it is vitally important for IRS to have sound performance measures to assist it in assessing its performance and targeting its resources to maximize the government’s return on investment.
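As a hedged illustration of a cost-based performance measure, return on investment for a collection activity is simply dollars collected per dollar of program cost; the program names and figures below are invented, not IRS data:

```python
# Hypothetical figures: dollars collected and program cost per activity.
programs = {
    "underreporter_matching": {"collected": 4_000_000, "cost": 500_000},
    "field_collection": {"collected": 9_000_000, "cost": 3_000_000},
}

def roi(program):
    """Dollars collected per dollar spent on the activity."""
    return program["collected"] / program["cost"]

# Rank activities by return on investment to help target resources.
ranked = sorted(programs, key=lambda name: roi(programs[name]), reverse=True)
```

A measure like this is only as good as the underlying program-level cost data, which is precisely what the report notes IRS does not yet capture.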
However, in past audits we have reported that IRS did not capture costs at the program or activity level to assist in developing cost-based performance measures for its various programs and activities. As a result, IRS is unable to measure the costs and benefits of its various collection and enforcement efforts to best target its available resources. Additionally, we have reported that IRS’s controls over its reporting of interim performance measurement data were not effective throughout the year because the data reported at interim periods for certain performance measures were either inaccurate or outdated. The following three open recommendations are designed to assist IRS in evaluating its operations, determining which activities are the most beneficial, and establishing a good system for oversight. (See table 10.) These recommendations—two long-term and one short-term—call for IRS to measure, track, and evaluate the cost, benefits, or outcomes of its operations—particularly with regard to identifying its most effective tax collection activities. Effective management of an organization’s workforce—its human capital—is essential to achieving results and an important part of internal control. Management should view human capital as an asset rather than a cost. Only when the right personnel for the job are on board and are provided the right training, tools, structure, incentives, and responsibilities is operational success possible. Management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce that has the required skills that match those necessary to achieve organizational goals. Training should be aimed at developing and retaining employee skill levels to meet changing organizational needs. Qualified and continuous supervision should be provided to ensure that internal control objectives are achieved.
Performance evaluation and feedback, supplemented by an effective reward system, should be designed to help employees understand the connection between their performance and the organization’s success. As a part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. IRS’s operations cover a wide range of technical competencies with specific expertise needed in tax-related matters; financial management; and systems design, development, and maintenance. Because IRS has tens of thousands of employees spread throughout the country, it is imperative that management keep its guidance up to date and its staff properly trained. The following three open short-term recommendations would assist IRS in its management of human capital. (See table 11.) Increased budgetary pressures and an increased public awareness of the importance of internal control require IRS to carry out its mission more efficiently and more effectively while protecting taxpayers and their information. Sound financial management and effective internal controls are essential if IRS is to efficiently and effectively achieve its goals. IRS has made substantial progress in improving its financial management since its first financial audit, as evidenced by consecutive clean audit opinions on its financial statements for the past 7 years, resolution of several material internal control weaknesses, and the closing of hundreds of financial management recommendations. This progress has been the result of hard work throughout IRS and sustained commitment of IRS leadership. Nonetheless, more needs to be done to fully address the financial management challenges the agency faces. Efforts must continue to address the internal control deficiencies that remain.
Effective implementation of the recommendations we have made and continue to make through our financial audits and related work could greatly assist IRS in improving its internal controls and achieving sound financial management. While we recognize that some actions—primarily those related to modernizing automated systems—will take a number of years to resolve, most of our outstanding recommendations can be addressed in the short term. In commenting on a draft of this report, IRS expressed its appreciation for our acknowledgment of the agency’s progress in addressing its financial management challenges, as evidenced by our closure of 25 of the 72 open financial management recommendations from last year’s report. IRS also indicated its continued commitment to work with us to take corrective actions that appropriately address the issues identified in our recommendations. We will review the effectiveness of further corrective actions IRS has taken or will take and the status of IRS’s progress in addressing all open recommendations as part of our audit of IRS’s fiscal year 2007 financial statements. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Homeland Security and Governmental Affairs; and Subcommittee on Taxation, IRS Oversight and Long-Term Growth, Senate Committee on Finance. We are also sending copies to the Chairmen and Ranking Minority Members of the House Committee on Appropriations and the House Committee on Ways and Means; the Chairman and Vice Chairman of the Joint Committee on Taxation; the Secretary of the Treasury; the Director of OMB; the Chairman of the IRS Oversight Board; and other interested parties. Copies will be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you have any questions concerning this report, please contact me at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Financial Management: Important IRS Revenue Information Is Unavailable or Unreliable (GAO/AIMD-94-22, Dec. 21, 1993) Open. An internal action plan has been established to improve weaknesses identified. Short-term actions include training for those who calculate interest, increased program reviews to verify adherence to procedures, and establishment of a process to resolve elevated issues. Open. In testing a statistical sample of 45 manual interest transactions recorded during fiscal year 2006, we found eight errors relating to the calculation and recording of manually calculated interest. We estimate that 18 percent of IRS’s manual interest population contains errors and concluded that IRS controls over this area remain ineffective. We will continue to monitor IRS’s actions to address its control weakness in this area and determine whether to test the effectiveness of these controls in future audits. eliminate duplicate or other assessments that have already been paid off to assure that all accounts related to a single assessment are appropriately credited for payments received. (short-term) Internal Revenue Service: Immediate and Long-Term Actions Needed to Improve Financial Management (GAO/AIMD-99-16, Oct. 30, 1998) Open. As of October 2006, the Small Business/Self-Employed (SB/SE) division completed nationwide implementation of the new Web-based Automated Trust Fund Recovery (ATFR) Area Office application and centralized processing of all Trust Fund Recovery Penalty (TFRP) assessments (both automated and manual) at the Ogden campus. In addition, SB/SE conducted an analysis of the ATFR campus component.
This analysis resulted in the submission of numerous work requests and Information Technology Asset Management System tickets to address deficiencies found in the current programming. SB/SE met with both Modernization & Information Technology Services (MITS) and the Chief Financial Officer (CFO) and secured concurrence on an action plan. Open. We continue to recognize that automation of the current TFRP is needed. IRS has taken several actions to strengthen controls and correct programming or procedural deficiencies in the cross-referencing of payments, including consolidating its TFRP processing at the Ogden campus. However, IRS’s efforts to date have not been fully effective. In fiscal year 2006, we reviewed a statistical sample of 80 TFRP payments, made on accounts created since August 2001. We found nine instances in which IRS did not properly record the payment to all related taxpayer accounts. Of these nine payments, four were not properly recorded to all related accounts even though the accounts contained the required cross-referencing at the time the payments were made. We estimate that 11 percent of these payments may not be properly recorded. We will continue to review IRS’s initiatives to improve posting of TFRP cases and test cases for proper postings to all related accounts as part of our fiscal year 2007 financial audit. modernization blueprint includes developing a subsidiary ledger to accurately and promptly identify, classify, track, and report all IRS unpaid assessments by amount and taxpayer. This subsidiary ledger must also have the capability to distinguish unpaid assessments by category in order to identify those assessments that represent taxes receivable versus compliance assessments and write-offs.
In cases involving trust fund recovery penalties, the subsidiary ledger should ensure that (1) the trust fund recovery penalty assessment is appropriately tracked for all taxpayers liable but counted only once for reporting purposes and (2) all payments made are properly credited to the accounts of all individuals assessed for the liability. (short-term) Internal Revenue Service: Immediate and Long-Term Actions Needed to Improve Financial Management (GAO/AIMD-99-16, Oct. 30, 1998) Open. IRS implemented Release 1 of the Custodial Detail Data Base (CDDB) in February 2006 and successfully used it for the fiscal year 2006 audit to classify unpaid assessments by capturing cross-reference information on certain TFRP cases to reduce audit reclassifications a year ahead of schedule. This created the unpaid assessment subsidiary ledger that is to send weekly data to the Interim Revenue Accounting and Control System (IRACS) to post duplicate and non-duplicate TFRP assessments, all financial classifications, and accrued penalty and interest during 2007. IRS also implemented Release 2A in January 2006 and added Revenue Trace ID numbers to all payments in the Electronic Federal Tax Payment System (EFTPS), associating the payments to the deposit tickets at the transaction level for 80 percent of all payments. IRS completed the database design for Release 2B to create a subsidiary ledger for posting revenue receipts to IRACS, and plans to put this into production by October 2007. IRS is developing Release 3 to address the component of the material weakness to create a subsidiary ledger for refunds to IRACS, and to add Trace ID numbers to all remaining pre-posted revenue receipt transactions (i.e., Federal Tax Deposits, Lockbox, and Integrated Submission and Remittance Processing (ISRP)). Release 3 is planned to be in production by January 2008.
IRS developed requirements and a business case in December 2006 for redesigning IRACS to become United States Government Standard General Ledger (USSGL) and Joint Financial Management Improvement Program (JFMIP) compliant, and IRS is pursuing funding to complete this work in fiscal year 2009. Open. Although IRS successfully implemented the first release of CDDB during 2006, its capability of functioning as IRS’s custodial subsidiary ledger is still years away. We will continue to monitor IRS’s development of CDDB and will continue to test its effectiveness in classifying TFRP cases in IRS’s unpaid assessment inventory as part of our fiscal year 2007 financial audit. payment receipts are recorded in a control log prior to depositing the receipts in the locked container and ensure that the control log information is reconciled to receipts prior to submission of the receipts to another unit for payment processing. To ensure proper segregation of duties, an employee not responsible for logging receipts in the control log should perform the reconciliation. (short-term) Internal Revenue Service: Physical Security Over Taxpayer Receipts and Data Needs Improvement (GAO/AIMD-99-15, Nov. 30, 1998) Closed. Internal Revenue Manual (IRM) 21.3.4.7.4 was updated on January 20, 2006, to require the review of Form 795 and all supporting documents for accuracy (by an employee other than the recipient of the funds) before they are transmitted to Submission Processing (SP). The review is required in Taxpayer Assistance Centers (TAC) where staffing permits the completion of the review. The staffing requirement is met where the group manager, secretary, or initial account representative (IAR) is collocated with other technical employees performing this work. The review will not be completed at locations where it is not administratively feasible.
Still, Field Assistance (FA) continued its efforts to mitigate circumstances that prevent proper segregation of duties in TACs with limited staffing and, in July 2006, approved a Service-wide Electronic Research Program (SERP) update for IRM 1.4.11.19.5 to require TAC managers to conduct quarterly reviews for payment processing and reconciliation procedures. Each employee is to be reviewed a minimum of two times each quarter, and reviews are to be discussed with the employee as an evaluative record of performance. The requirement to conduct the reviews will increase the presence of TAC managers in all TACs, including outlying sites and those with limited staffing. Increased managerial presence and reviews will help mitigate the risks associated with not having segregation of duties in small TACs. Open. During our fiscal year 2006 audit, we found that payment receipts were recorded in a control log. The control log information was agreed to the receipts prior to sending the receipts and control log to the submission processing center for further processing. However, during our subsequent review of IRS’s corrective actions in this area, we found that the reviews required in the July 2006 IRM update were not always performed as intended. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit to ensure that receipts are processed according to standards and that duties are properly segregated among employees. the factors causing delays in processing and posting TFRP assessments. Once these factors have been determined, IRS should develop procedures to reduce the impact of these factors and to ensure timely posting to all applicable accounts and proper offsetting of refunds against unpaid assessments before issuance. (long-term) Internal Revenue Service: Custodial Financial Management Weaknesses (GAO/AIMD-99-193, Aug. 4, 1999) Open.
As of October 2006, SB/SE completed nationwide implementation of the new Web-based ATFR Area Office application and centralized processing of all TFRP assessments (both automated and manual) at the Ogden campus. In addition, SB/SE conducted an analysis of the ATFR campus component. This analysis resulted in the submission of numerous work requests and Information Technology Asset Management System tickets to address deficiencies found in the current programming. SB/SE met with both MITS and the CFO and secured concurrence on an action plan. Open. We continued to find long delays in IRS’s processing and posting of TFRP assessments during our fiscal year 2006 financial audit. In one case, we noted that the revenue officer made the TFRP determination in March 2003, but IRS did not record the assessment on the responsible officer’s account until February 2006. We will continue to review IRS’s initiatives to improve posting of TFRP assessments and monitor TFRP processing timeliness as part of our fiscal year 2007 audit. review of campus deterrent controls to include similar analyses of controls at IRS field offices in areas such as courier security, safeguarding of receipts in locked containers, requirements for fingerprinting employees, and requirements for promptly overstamping checks made out to “IRS” with “Internal Revenue Service” or “United States Treasury.” Based on the results, IRS should make appropriate changes to strengthen its physical security controls. (short-term) Internal Revenue Service: Custodial Financial Management Weaknesses (GAO/AIMD-99-193, Aug. 4, 1999) Open. To ensure that all TE/GE Examination employees were familiar with the overstamping requirement, TE/GE took a number of steps to educate employees. In addition, all TE/GE Examination groups were required to order a “United States Treasury” stamp, and the directors confirmed that all managers in their areas had ordered the stamps.
Finally, TE/GE included in the annual performance plan of critical executives and other managers who oversee examination functions a commitment to implement a 12-point action plan. TE/GE has addressed the safeguarding of receipts through the publication of a “Quick Reference Guide for Processing Checks in TE/GE Examination” and a July 18, 2005, Managers Alert. TE/GE also developed a checklist for use in Examination groups to ensure conformance with GAO’s concerns, and this checklist was incorporated in appropriate fiscal year 2006 performance plans. Large and Mid-sized Business (LMSB) issued two memorandums to all field executives on the need for proper endorsement of checks by proper use of this stamp. LMSB required that each field executive “certify” that each group either had in their possession or was able to obtain the stamp. LMSB will be requesting that its Training Branch include this topic when the module on remittance training is presented. LMSB has procedures in place to safeguard receipts. Small Business/Self-Employed (SB/SE) issued a Managers Message regarding overstamping and receipt transmittal (Form 3210) controls in April 2006. SB/SE also updated Collection and Examination IRM procedures regarding overstamping, physical security over remittances, and Form 3210 controls. SB/SE will continue to reinforce these procedures through management communications and other area-level reviews. Open. IRS’s corrective actions affecting the (1) SB/SE, (2) LMSB, and (3) TE/GE tax operating divisions do not entail the recommended expansion of IRS’s current reviews at the service center campuses (SCC) and taxpayer assistance centers. In addition, during our fiscal year 2006 audit, we found a lack of controls over safeguarding taxpayer receipts and information at one SB/SE unit. We will evaluate IRS’s corrective actions during our fiscal year 2007 audit.
staff are employed or existing staff appropriately cross-trained to be able to perform the master file extractions and other ad hoc procedures needed for IRS to continually develop reliable balances for financial reporting purposes. (short-term) Internal Revenue Service: Custodial Financial Management Weaknesses (GAO/AIMD-99-193, Aug. 4, 1999) Open. IRS is enhancing the Service’s existing Financial Management Information System (FMIS) with the new CDDB and by pursuing funding to enhance the IRACS to interface with future releases of CDDB and the Customer Account Data Engine (CADE). This will reduce the material weaknesses by improving financial reporting compliance and making the general ledger system compliant with USSGL and JFMIP requirements. This will reduce the level of effort each year for the audit and reduce the reliance on master file extracts and ad hoc procedures. Contractor support will continue to supplement compensating procedures while CDDB is finalized, and until the IRACS redesign is funded and in development. Open. The objective of this recommendation was to ensure that IRS had appropriate staff resources at key positions to perform the master file extraction and other ad hoc procedures to support IRS’s preparation of the financial statements in the event of staff attrition. In fiscal year 2006, IRS continued to augment its own resources with contractor support to produce auditable financial statements. In addition, during our review of the key master file reconciliations used to support preparation of the financial statements, we found that these reconciliations were hampered when a former IRS staff member was no longer available to perform them. We will continue to assess IRS’s actions during our fiscal year 2007 audit. Internal Revenue Service: Serious Weaknesses Impact Ability to Report on and Manage Operations (GAO/AIMD-99-196, Aug. 9, 1999) Open.
The Integrated Financial System (IFS), implemented on November 10, 2004, includes a cost module that provides basic cost data to managers. However, IRS cannot yet rely on the system as a significant planning and decision-making tool. It will likely require several years and implementation of additional components, such as a workload management system, as well as integration with its tax administration activities, before the full potential of IRS’s cost accounting module will be realized. In the interim, IRS is working on two cost pilots in which it is determining the full cost for two product lines. Open. We will follow up during future audits to assess IRS’s progress in implementing a cost accounting system and populating it with the cost information needed to support meaningful cost-based performance measures. We will also review the cost pilots IRS is developing in the interim. IRS financial systems to include recording plant and equipment (P&E) and capital leases as assets when purchased and to generate detailed records for P&E that reconcile to the financial records. (long-term) Internal Revenue Service: Serious Weaknesses Impact Ability to Report on and Manage Operations (GAO/AIMD-99-196, Aug. 9, 1999) Closed. With IFS, implemented on November 10, 2004, property and equipment, including capital leases, are recorded as assets when purchased. During fiscal year 2006, IRS improved the accuracy and reliability of its P&E accounting records by enhancing accounting code definitions, improving coordination, and streamlining analysis of P&E transactions. On the basis of these actions and elimination of the reportable condition on P&E, this recommendation is closed. Open. IRS implemented the first release of IFS on November 10, 2004, which allowed recording the majority of P&E activity as assets when purchased.
However, due to ongoing technological advances and budgetary constraints, IRS is no longer committed to implementing additional releases of IFS, which were to include an integrated property asset module. Rather, IRS is considering all other options available to provide these capabilities. We will monitor IRS’s strategy in addressing these financial management system issues. prematurely suspending active collection efforts, and using the best available information, develop reliable cost-benefit data relating to collection efforts for cases with some collection potential. These cost-benefit data would include the full cost associated with the increased collection activity (i.e., salaries, benefits, administrative support), as well as the expected additional tax collections generated. (short-term) Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Open. IRS’s Collection Governance Council (consisting of executives in SB/SE, W&I, and LMSB), established in August 2005, continues to mature, enhancing coordination across the enterprise for collection issues. The following initiatives will drive improvement in the agency’s collection resource allocation decisions: IRS created a workload delivery model that integrates the work plans of each source of assessment to evaluate the overall impact on downstream collection operations. It also developed a study group, called Corporate Approach to Collection Inventory (CACI), to look at case delivery practices from an overall perspective and make recommendations for changes to case routing and assignment priorities. It also monitors its non-filer strategy and work plan to improve the identification and selection of non-filer cases, then balances the working of non-filer inventory with balance-due inventory.
In addition, IRS has an ongoing project to enhance its decision analytical models used for selecting cases based on their predicted collection potential to apply decision analytics to both delinquent accounts and unfiled returns; apply decision analytics to all categories of taxpayers, not just small business/self-employed; expand the use of internal and external data sources to improve the portion of cases predicted by the models; ultimately develop alternative treatment strategies based on the least costly treatment indicated by the models; and update definitions for complex cases to improve routing to field collection. Open. IRS has initiated several projects to build additional analytical models to improve its ability to route cases to the appropriate collection activity and is developing a corporate strategy for working collection cases. We will continue to review IRS’s initiatives to manage resource allocation levels for its collection efforts. Implement procedures to closely monitor the release of tax liens to ensure that they are released within 30 days of the date the related tax liability is fully satisfied. As part of these procedures, IRS should carefully analyze the causes of the delays in releasing tax liens identified by our work and prior work by IRS’s former internal audit function and ensure that such procedures effectively address these issues. (short-term) Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Open. IRS continues to address the root causes of untimely lien releases such as untimely posting of payments, untimely credit transfers, certificate of release missing from automated lien system/one taxpayer released from lien by bankruptcy discharge, billing support vouchers with no date stamp for proof of mailing, and untimely adjustments. Open. During our fiscal year 2006 audit, we continued to find delays in release of liens.
In fiscal year 2006, IRS performed its own test of the effectiveness of its lien release process as part of implementing the requirements of the revised OMB Circular No. A-123 and we reviewed and validated its test results. IRS found 26 instances out of 84 cases tested in which it did not release the applicable federal tax lien within the statutory period. On the basis of these results, IRS estimates that for 31 percent of unpaid tax assessment cases in which it had filed a tax lien that were resolved in fiscal year 2006, IRS did not release the lien within 30 days. The time between the satisfaction of the liability and release of the lien ranged from 44 days to 638 days. We will assess the impact of IRS’s latest actions and continue to review IRS’s release of tax liens as part of our fiscal year 2007 financial audit. Automated Underreporter and Combined Annual Wage Reporting programs, (2) screening and examination of Earned Income Tax Credit claims, and (3) identifying and collecting previously disbursed improper refunds, use the best available information to develop reliable cost-benefit data to estimate the tax revenue collected by, and the amount of improper refunds returned to, IRS for each dollar spent pursuing these outstanding amounts. These data would include (1) an estimate of the full cost incurred by IRS in performing each of these efforts, including the salaries and benefits of all staff involved, as well as any related nonpersonnel costs, such as supplies and utilities and (2) the actual amount (a) collected on tax amounts assessed and (b) recovered on improper refunds disbursed. (long-term) Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Open. IRS’s cost allocation methodology was reviewed and enhanced for fiscal year 2006 and further refinements will be implemented each year.
The first year’s data will be reviewed in fiscal year 2006 and a plan developed for integrating cost data in decision making. The use of the data will be tested in fiscal year 2007 with baseline data. However, to achieve maximum benefit in decision making, several years’ data will be needed. As a result, IRS will fully implement the use of cost accounting data for resource allocation decisions in fiscal year 2008. Open. During our fiscal year 2006 audit, IRS indicated that the objective of the first year’s cost data review process was to make sure the data were accurate, the cost accounting system was working properly, and the data could be used to make budgetary decisions. IRS has indicated that its plan for integrating cost data in the decision-making process will be determined after the baseline data are established. IRS plans to conduct several cost pilots in fiscal year 2007 and intends to use the test data from the pilots to establish the baseline data. IRS has indicated that it is planning to fully implement the use of cost accounting data for resource allocation decisions in fiscal year 2008 to the extent possible. We will continue to follow up on IRS’s progress on this issue during our fiscal year 2007 audit. ledger for leasehold improvements and implement procedures to record leasehold improvement costs as they occur. (long-term) Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Closed. P&E, including capital leases, are recorded as assets when purchased. During fiscal year 2006, IRS improved the accuracy and reliability of its P&E accounting records by enhancing accounting code definitions, improving coordination, and streamlining analysis of P&E transactions. Based on these actions and elimination of the reportable condition on P&E, this recommendation is closed. Open. 
IRS implemented the first release of IFS on November 10, 2004, which allowed recording leasehold improvements as assets when purchased. However, due to ongoing technological advances and budgetary constraints, IRS is no longer committed to implementing additional releases of IFS, which were to include an integrated property asset module. Rather, IRS is considering all other options available to provide these capabilities. We will monitor IRS’s strategy in addressing these financial management system issues. 01-39 Develop a mechanism to track and report the actual costs associated with reimbursable activities. (long-term) Management Letter: Improvements Needed in IRS’s Accounting Procedures and Internal Controls (GAO-01-880R, July 30, 2001) Open. IRS has developed guidance for costing reimbursable agreements, which includes instructions on tracking labor. IFS includes a cost module that provides basic cost data to managers. During fiscal year 2006, IRS further improved its methodology for allocating its costs of operations to its business units. This methodology uses the cost accounting module of IFS, allows IRS to accumulate the full costs of operating each business unit, and provides more detail on allocated costs. Actions are ongoing in fiscal year 2007 to begin gathering the actual cost of selected reimbursable projects. Open. We confirmed that IRS has procedures for costing reimbursable agreements that provide the basic framework for the accumulation of both direct and indirect costs at the necessary level of detail. IRS has improved its methodology for allocating its costs of operations to its business units. However, as indicated by IRS, further actions are needed for it to accumulate and report actual costs associated with reimbursable projects. 
We will continue to monitor IRS’s efforts to fully implement its cost accounting system and, once it has been fully implemented, evaluate the effectiveness of IRS procedures for developing cost information for its reimbursable agreements. Implement policies and procedures to require that all employees itemize on their time cards the time spent on specific projects. (long-term) Internal Revenue Service: Progress Made, but Further Actions Needed to Improve Financial Management (GAO-02-35, Oct. 19, 2001) Open. IRS agreed with the objective of this recommendation, which is to allow it to collect and report the full payroll costs associated with its activities. Most IRS employees already itemize their time charges in functional tracking systems. IFS provides basic cost data to managers. During fiscal year 2006, IRS further improved its methodology for allocating its costs of operations to its business units. This methodology uses the cost accounting module of IFS, allows IRS to accumulate the full costs of operating each business unit, and provides more detail on allocated costs. Open. We confirmed that IRS had improved its cost accounting capability from prior fiscal years. However, the cost accounting module did not provide IRS with the ability to produce full cost information for specific activities and programs. IRS is developing a strategy and action plan to enhance cost data and integrate budget and performance data. We will continue to monitor IRS’s efforts to fully implement its cost accounting system, and, once it has been fully implemented, evaluate the effectiveness of IRS’s procedures for developing cost information to use in resource allocation decisions. Implement policies and procedures to allocate nonpersonnel costs to programs and activities on a routine basis throughout the year. (long-term) Internal Revenue Service: Progress Made, but Further Actions Needed to Improve Financial Management (GAO-02-35, Oct. 19, 2001) Open.
IFS provides basic cost data to managers. During fiscal year 2006, IRS further improved its methodology for allocating its costs of operations to its business units. This methodology uses the cost accounting module of IFS, allows IRS to accumulate the full costs of operating each business unit, and provides more detail on allocated costs. Open. We confirmed that IRS has improved its cost accounting capabilities by developing and implementing a methodology for allocating its costs of operations to its business units. However, further actions are needed to enable IRS to allocate nonpersonnel costs associated with specific programs and activities. We will continue to monitor IRS’s efforts to fully implement its cost accounting system and, once it has been fully implemented, evaluate the effectiveness of IRS’s procedures for developing cost information to use in resource allocation decisions. 02-16 Ensure that field office management complies with existing receipt control policies that require a segregation of duties between employees who prepare control logs for walk-in payments and employees who reconcile the control logs to the actual payments. (short-term) Management Report: Improvements Needed in IRS’s Accounting Procedures and Internal Controls (GAO-02-746R, July 18, 2002) Closed. IRM 21.3.4.7.4 was updated on January 20, 2006, to require the review of Form 795 and all supporting documents for accuracy (by an employee other than the recipient of the funds) before they are transmitted to SP. The review is required in TACs where staffing permits. In exploring procedures in January 2006 for TACs with limited staffing where there is no manager, secretary, or IAR, FA determined proposed procedures to be burdensome, difficult to administer, and not administratively feasible (e.g., copying and faxing Form 795 to the manager). Also, based on a September 2005 report by the Treasury Inspector General for Tax Administration (TIGTA) on payments received at TACs (report No. 
2005-40-148), 99 percent of payments posted appropriately to taxpayer accounts. This accuracy rate combined with compensating controls at the Submission Processing Centers (SPC) effectively reduces risks associated with not having reconciliation processes in small TACs. Still, FA continued its efforts to mitigate circumstances that prevent proper segregation of duties in TACs with limited staffing and, in July 2006, approved a SERP update for IRM 1.4.11.19.5 to require TAC managers to conduct quarterly reviews for payment processing and reconciliation procedures. Each employee is to be reviewed a minimum of two times each quarter and reviews are to be discussed with the employee as an evaluative record of performance. The requirement to conduct the reviews will increase the presence of TAC managers in all TACs, including outlying sites and those with limited staffing. Increased managerial presence and reviews will help mitigate the risks associated with not having segregation of duties in small TACs. Open. During our fiscal year 2006 audit, we found a lack of segregation of duties related to preparation and review of receipt transmittals (Form 3210) at two of the nine TACs we visited. At these locations, the TAC managers implemented the review process for Forms 3210 on the dates of our internal control test. While the TIGTA report cited by IRS addresses the accuracy of the payments posted, it does not address the risk of payments not being recorded. Segregation of duties is a key control used to reduce the risk of unposted payments due to error and fraud related to revenue receipt transactions. Also, IRS has noted that changes were made to its IRM in July 2006 to require quarterly reviews of payment processing and reconciliation procedures. However, we found that the required reviews were not always performed as intended. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. 
02-18 Work with the National Finance Center (NFC) to resolve the technical limitations that exist within the Security Entry and Tracking System (SETS) database and continue to periodically review SETS data to detect and correct errors. (short-term) Management Report: Improvements Needed in IRS’s Accounting Procedures and Internal Controls (GAO-02-746R, July 18, 2002) Closed. In July 2005, NFC demonstrated a Web version of SETS and more IRS requirements are to be accommodated in that system. To date, IRS has not received a projected implementation date from NFC. Monthly reports are being reviewed and analyzed. Problems are reported to Agency-Wide Shared Services (AWSS) to address with the Department of the Treasury and NFC. AWSS continues to monitor SETS reports for each pay period and coordinates with employment offices when corrections are needed. IRS and NFC continue to engage in ongoing discussions on reconciliations and error adjustments as needed. NFC controls the timetable for deploying a Web version of SETS; however, no timetable has been set and no meetings are being convened. Open. As of the end of our fiscal year 2006 audit, IRS and NFC had not completed their deployment of the Web-based version of the SETS database. We will continue to monitor IRS’s actions during our fiscal year 2007 audit. management to ensure that envelopes are properly candled and that IRS takes steps to monitor adherence to this requirement. (short-term) Lockbox Banks: More Effective Oversight, Stronger Controls, and Further Study of Costs and Benefits are Needed (GAO-03-299, Jan. 15, 2003) Closed. Effective October 2005, candling reviews are conducted at all lockbox bank sites to ensure all candling requirements are met. These internal control reviews ensure that envelopes opened (manually or by OPEX) on three or more sides are candled once and that envelopes other than the ones opened on three or more sides are candled twice.
The results of these reviews are used to calculate each bank’s performance score. As a result of implementing these measures in the first year, an unfavorable score can result in IRS deeming the subject bank ineligible to bid for new work or additional volume, loss of current work, or placing the bank in a probationary status. Additionally, there were no lockbox findings issued for candling during the fiscal year 2006 financial statement audit. Closed. We verified that lockbox management conducts reviews to ensure that envelopes are properly candled. During our fiscal year 2006 audit, we did not find any instances in which envelopes were not being properly candled at the four lockbox banks that we visited. 03-29 Confirm with FMS that IRS’s requirements for background and fingerprint checks for courier services are met regardless of whether IRS or FMS negotiates the service agreement. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-03-562R, May 30, 2003) Closed. During 2002, FMS issued an amendment to the courier Memorandum of Understanding (MOU), which included the requirement that all courier employees satisfy the basic investigation, including a Federal Bureau of Investigation (FBI) fingerprint and name check. All 10 IRS campuses now have a contact responsible for submitting paperwork to the National Background Investigations Center (NBIC) and ensuring courier employees are granted clearance. During 2003, IRS required NBIC to provide monthly status reports of the campus compliance with this requirement. During fiscal year 2006, all courier MOUs and NBIC reports were received monthly, enabling IRS to identify problems and issues more quickly. Closed. During our fiscal year 2006 audit, we found no instances in which updated courier service contracts did not contain the requirements for background and fingerprint checks. employees’ personal belongings with cash payments and receipts at IRS’s taxpayer assistance centers. 
(short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-03-562R, May 30, 2003) Closed. TAC procedures (IRM 21.3.4.7.3.1(2)) prohibit storing of personal items with any taxpayer-related documents. Procedures further prohibit storing taxpayer receipts in the same storage container with employee personal items. Personal items and taxpayer-related documents must not be stored in the same container under the same locking device. Closed. We verified that IRS prohibits its employees from storing personal belongings with any taxpayer-related documents. require lockbox managers to provide satisfactory evidence that managerial reviews are performed in accordance with established guidelines. At a minimum, reviewers should sign and date the reviewed documents and provide any comments that may be appropriate in the event that their reviews identified problems or raised questions. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls and Accounting Procedures (GAO-04-553R, April 26, 2004) Closed. During fiscal year 2006, IRS established and implemented a new Data Collection Instrument (DCI) review, entitled “Processing-Internal Controls.” During on-site reviews, the following logs are required to be reviewed: desk and work area, date stamp, cash, candling, shred, and mail. The results of these DCI reviews are rolled into a calculation to determine each bank’s score in the new bank performance measurement process. In addition, lockbox personnel are required to perform similar reviews monthly and report results to the lockbox field coordinators (FC). The report must contain the following: date of review, shifts reviewed, results of the review (even when no items are found), and reviewer’s and site manager’s initials and/or signature as required by the Lockbox Processing Guidelines (LPG).
To further strengthen this internal control, effective June 1, 2006, additional review of the monthly reports (F9535/Discovered Remittance, candling log, disk checks/audits, and shred) received from the lockbox site was performed by the lockbox FC. Specific check points were added to the “Monthly Reports” DCI that is a part of the procedural DCI performed at the SPC. In addition to confirming the receipt and timeliness of the reports, coordinators reviewed the reports to ensure they were complete per the LPG requirements and that all required management signatures/initials were present to provide satisfactory evidence that the managerial reviews are performed. Open. We verified that IRS established and implemented a processing internal control DCI and scorecard to measure whether managerial reviews are performed at lockbox banks of logs, including desk and work area, date stamp, cash, candling, shred, and mail. However, during our fiscal year 2006 audit, we identified instances at two lockbox banks we visited where lockbox managers or their designees had not documented managerial reviews of courier logs. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. 04-07 Develop procedures to enhance adherence to existing instructions on safeguarding discovered remittances at service center campuses. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls and Accounting Procedures (GAO-04-553R, April 26, 2004) Closed. IRM 3.8.46, Discovered Remittances, was issued during 2003 and 10,000 copies were distributed to all campuses. Form 4287 (Record of Discovered Remittances) was revised to enhance adherence to existing instructions by including a check box for managers to indicate the reconciliation was performed. Additionally, SP revised the monthly security checklist to include a review of the discovered remittance procedures. A discovered remittances job aid was added to IRM 3.8.46.
During the monthly security checklist reviews, it was observed that noncompliance generally occurred in functions outside SP. Therefore, SP is committed to conducting quarterly meetings with the noncompliant offices to reinforce discovered remittances procedures. Closed. We verified that the IRM contains a discovered remittances job aid to be used for recording discovered remittances. procedures to ensure that service center campus security guards respond to alarms. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls and Accounting Procedures (GAO-04-553R, April 26, 2004) Closed. IRM 1.16.12 has been revised and implemented to reflect the testing of alarms and guard response requirements. The Physical Security and Emergency Preparedness Office (PSEP) has developed self-assessment procedures to conduct random testing of guard response to alarms at all campuses and computing centers. Report forms have also been developed to capture test results. The unannounced tests are performed quarterly and guard responses as well as any malfunction of equipment will be documented and followed up for corrective action. The testing ensures that guards respond to alarms expeditiously and that malfunctioning equipment is identified and corrective actions are identified and followed through until the correction is completed. Open. We verified that IRS has taken steps to ensure that SCC security guards respond to alarms, which include revising the IRM to reflect the testing of alarms and guard response requirements and conducting unannounced alarm tests. However, during our fiscal year 2006 audit, we found instances at two of four SCCs we visited where guards did not respond to our tests of door alarms. We will evaluate IRS’s corrective actions during our fiscal year 2007 audit.
controls in the event that automated security systems malfunction, such as notifying guards and managers of the malfunction, and immediately deploying guards to better protect the processing center’s perimeter. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls and Accounting Procedures (GAO-04-553R, April 26, 2004) Closed. Mission Assurance (MA) developed alarm testing procedures which are used to supplement the requirements in IRM 1.16.12. The IRM and supplemental procedures require the notification of local management whenever there is a malfunction of alarms. The procedures also require that guards are deployed or doors are secured, as necessary, either during tests or when otherwise identified. The contract guard force project manager is required to sign off on all unannounced alarm test reports. Test results are maintained by the PSEP office. Closed. We verified that IRS revised language in the IRM that addresses specific compensating actions to be taken in the event of sporadic malfunctioning alarms or an overall system failure. Performance Management System (BPMS) is fully operational, implement procedures to ensure that all performance data reported in the MSP report are subject to effective, documented reviews to provide reasonable assurance that the data are current at interim periods. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls and Accounting Procedures (GAO-04-553R, April 26, 2004) Closed. IRS has taken steps to ensure that the performance measures data reported in the monthly report are properly reviewed before being published. All divisions now submit most of their performance measures data directly to BPMS. The divisions are required to verify/certify the accuracy of the data before uploading to BPMS.
Corporate Performance Budgeting staff implemented additional manual quality control procedures that include reviewing all tables, charts, and line graphs and visually inspecting the numbers and comparing the information to the previous month’s report for consistency. In addition, IRS is working with Treasury to streamline its current set of performance measures. Its purpose is to increase the value of the information provided to stakeholders, focus priorities, and reduce administrative burden. Closed. During our fiscal year 2006 financial audit of IRS, we reviewed IRS’s process for compiling its performance measures, including its BPMS, and reviewed supporting documentation and calculations of two interim performance measures. We did not identify any exceptions in our test of IRS’s performance measures at the interim testing period. 05-03 Research and resolve the current backlog of unresolved unmatched exception reports. (short-term) Opportunities to Improve Timeliness of IRS Lien Releases (GAO-05-26R, Jan. 10, 2005) Closed. All backlogs were resolved the week ending May 12, 2006, and an extract was run to verify that all entries had been resolved. Closed. IRS’s Centralized Case Process/Lien Processing Unit at the Cincinnati campus researched and resolved its backlog of unresolved unmatched exception reports. In February 2007, we observed that there was no backlog of unresolved, unmatched exception reports. Opportunities to Improve Timeliness of IRS Lien Releases (GAO-05-26R, Jan. 10, 2005) Closed. Lien Unit managers ensure that the unmatched exception report is pulled and resolved weekly within 5 business days. As an additional control, a subsequent extract report is produced to identify any potentially unresolved modules in order to ensure all accounts are worked. With the implementation of the September 2006 Automated Lien System (ALS) 8.3 release, the extract will no longer be necessary as the weekly report will be cumulative. 
Existing inventory is captured weekly on local monitoring reports. Closed. IRS’s Centralized Case Process/Lien Processing Unit is currently researching and resolving unmatched exception reports weekly within 5 business days. In February 2007, we observed that IRS was researching and resolving unmatched exception reports weekly. Opportunities to Improve Timeliness of IRS Lien Releases (GAO-05-26R, Jan. 10, 2005) Closed. This recommendation is closed based on the completion of the backlog as verified by an extract report showing no inventory for restricted interest. Closed. IRS’s Centralized Case Process/Lien Processing Unit completed researching and resolving its backlog of unresolved manual interest or penalties reports. In February 2007, we verified that there was no backlog of unresolved manual interest or penalties reports. exception reports containing liens with manually calculated interest or penalties weekly, as called for in the Internal Revenue Manual and the ALS User Guide. (short-term) Opportunities to Improve Timeliness of IRS Lien Releases (GAO-05-26R, Jan. 10, 2005) Closed. ALS receives a master file data extract listing modules where liabilities have been fully paid. The data extract that is matched against information in the ALS system automatically releases liens when there is a match, including restricted interest and penalty modules. After a review of 300 satisfied modules, IRS identified five cases with additional restricted interest or penalty. The remaining amounts due after computation were for very small amounts, less than $10. On the basis of those reviews, IRS determined these cases should receive systemic releases. Closed. IRS reprogrammed its ALS to automatically release liens once the taxpayer’s account was fully paid, even if it contains a manual interest indicator. 
Previously, IRS’s IRM required it to review accounts containing a manual interest or penalty indicator, to determine whether the manually recorded interest and penalty amounts were correct and whether it should assess the taxpayer any additional interest or penalty before releasing the lien. IRS has decided not to hold up the lien release on such accounts for a review and has changed its computer programming to automatically release the lien once the account balance reaches zero. We obtained and reviewed a computer extract from February 2007 showing that accounts containing manual interest or penalty are no longer held up from automated lien release. Improve the current unmatched exception report by including a cumulative list of all unmatched taxpayer accounts that have not been resolved to date. (short-term) Opportunities to Improve Timeliness of IRS Lien Releases (GAO-05-26R, Jan. 10, 2005) Closed. Effective July 21, 2006, the Satisfied Module Rejected Report became a cumulative report. Rejected releases are listed and remain on the report until resolved. Closed. We verified that IRS improved the current unmatched exception report by changing it to a cumulative list of all unmatched taxpayer accounts that have not been resolved to date. existing instructions on safeguarding taxpayer receipts and information, such as securing access and candling procedures, at service center campuses selected for significant reductions in their submission processing functions. (short-term) Management Report: Review of Controls over Safeguarding Taxpayer Receipts and Information at the Brookhaven Service Center Campus (GAO-05-319R, Mar. 10, 2005) Closed. IRS has enforced adherence to existing instructions on safeguarding taxpayer receipts and information by including this requirement in the monthly Campus Security Reviews which are also reviewed annually by the National Office Security Review Team at selected sites. 
Local management continually reinforces these requirements through employee counseling and individual and group meetings with security clerks to ensure procedures for issuance of badges, inventory of badges, and security of taxpayer receipts and information. Meetings have also been held to discuss candling procedures. Local management conducts weekly and monthly reviews to ensure adherence to these procedures. Additional refresher training, alerts, and managerial review were implemented to reinforce compliance with IRM 1.4.16.5.9 requirements for managerial and clerical reviews. Open. We verified that IRS has implemented monthly Campus Security Reviews, local management reviews, and alerts to enforce adherence to existing instructions on safeguarding taxpayer receipts and information by SCCs selected for significant reductions in their submission processing functions. However, during our fiscal year 2006 audit, we found instances at one SCC selected for significant reductions in its submission processing functions where mail potentially containing taxpayer receipts was not secured overnight. We will evaluate IRS’s corrective actions during our fiscal year 2007 audit. methodology for estimating anticipated rapid changes in mail volume at future SCCs selected for significant reductions in their submission processing functions, taking into consideration factors such as the prior ramp-down experience at Brookhaven. (short-term) Management Report: Review of Controls over Safeguarding Taxpayer Receipts and Information at the Brookhaven Service Center Campus (GAO-05-319R, Mar. 10, 2005) Open. IRS has drafted a methodology using historical data obtained from the Brookhaven and Memphis campus ramp-down. Pending approval by the Director of Accounts Management for Wage and Investment (W&I), this methodology will be used in future consolidations to ensure that IRS has reliable data to effectively manage resources during and after the consolidation period. Open. 
We will evaluate IRS’s efforts to develop and document an approved methodology for estimating mail volume for future sites selected for ramp-down during our fiscal year 2007 audit. requirement that appropriate background investigations be completed for contractors before they are granted staff-like access to service centers. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. MA & Security Service (SS) physical security analysts issue identification (ID) media to contractors upon receipt of the following from the Contracting Officer’s Technical Representative (COTR): (1) Non-IRS Identification Card Request, (2) Request for ID Media/Access Card for Contract Employee or similar request form, (3) NBIC clearance letter, and (4) Personal Identity Verification for Federal Employees and Contractors form. These documents are maintained on-site by the MA & SS Physical Security Office where ID media is issued. Open. We verified that IRS issued guidance to enforce its existing background investigation requirement. However, a recent TIGTA report on IRS’s background investigation process indicates that IRS continues to allow contractors to access its facilities and computer systems before favorable background investigations are completed. We will continue to evaluate IRS’s enforcement, oversight, and implementation of its contractor background investigation policies during our fiscal year 2007 audit. 05-14 Require that background investigation results for contractors (or evidence thereof) be on file where necessary, including at contractor worksites and security offices responsible for controlling access to sites containing taxpayer receipts and information. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. 
MA & SS physical security analysts issue ID media to contractors upon receipt of the following from the COTR: (1) Non-IRS Identification Card Request, (2) Request for ID Media/Access Card for Contract Employee or similar request form, (3) NBIC clearance letter, and (4) Personal Identity Verification for Federal Employees and Contractors form. These documents are maintained on-site by the MA & SS Physical Security Office where ID media is issued. Open. We were not able to verify that IRS requires the results of background investigations for contractors to be maintained at the contractor’s work site. In addition, according to a recent TIGTA report on IRS’s background investigation process, TIGTA was unable to complete its analysis on whether contractor employees were granted access to IRS’s systems before favorable background checks were completed because IRS could not provide the proper documentation verifying that all prescreening tests had been completed. We will continue to evaluate IRS’s corrective actions and implementation of its contractor background investigation policies in our fiscal year 2007 audit. reminder to courier contractors of the need to adhere to all courier service procedures. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. Effective January 1, 2006, the lockbox banks must provide an annual memorandum to the courier contractor reminding them that they must adhere to all of the courier service procedures in the Lockbox Security Guidelines (LSG). For the campuses, Service Center Accounting held a conference on January 31, 2006, with Treasury’s Financial Management Service (FMS), the Federal Reserve Banks, and the servicing Treasury’s General Account (TGA) banks and reinforced all policies and procedures governing the courier process as outlined in IRM 3.8.45. We will continue to reinforce policies and procedures governing the courier. 
For lockbox banks, the Security Team verified that all lockbox bank sites issued an annual memorandum to courier contractors reminding them to adhere to all courier service procedures in the LSG. Open. IRS’s response does not address written reminders provided to SCC couriers. Also, during our fiscal year 2006 audit, we did not observe that notifications to SCC couriers had been made by the end of our fieldwork. We will continue to evaluate IRS’s corrective actions in our fiscal year 2007 audit. contractors entrusted with taxpayer receipts and information off site adhere to IRS procedures. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. Lockbox banks’ LSG 2.12 requires that while transporting the data from the lockbox facility, the courier vehicle used to transport taxpayer data/remittances must be locked and secured, driven directly to the destination, and must always be under the supervision of the courier. All couriers are required to complete the same National Agency Check and Inquiry with Credit Investigation (NACIC) as bank management officials. For specific transport activities, deposit ticket and deposit transport time frames are reviewed as part of Lockbox Performance Measures. For lockbox banks, a new requirement will become effective in early 2007 that ensures that the lockbox bank sites are receiving dedicated transport service that complies with the requirement of the LSG. Lockbox management shall follow the courier service vehicle while the courier is carrying IRS lockbox deposits. This review shall be conducted unannounced at least once per quarter. This procedure has been included in LSG under 2.15 “Lockbox Bank Courier Service review, Transport of Deposit.” Also, effective January 1, 2007, the Courier’s DCI requires the Security Team to observe a courier run at each lockbox site to ensure dedicated courier service is being provided. 
For campuses, couriers sign, date, and notate the time of pick up on Form 10160. When the couriers drop off the deposit, Form 10160 is date and time stamped. Each campus reviews the form and notes any time discrepancies. Couriers are questioned if discrepancies are found and the information is notated in the Courier Incident Log. If something out of the ordinary is noted, the centers use their discretion to make a determination whether or not it is necessary to trail the couriers. Open. We verified that IRS revised its LSG to provide for periodic verification that couriers adhere to IRS policy while in transit. However, IRS’s corrective actions occurred subsequent to our fieldwork. We will evaluate IRS’s corrective actions during our fiscal year 2007 audit. require that critical utility or security controls not be located in areas requiring frequent access. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. MA & SS worked with the Business Operating Divisions (BOD) and Procurement to formulate policy guidelines. The Lockbox Policy Guidelines, dated January 10, 2006, have been revised. LSG 2.2.1, Main Utility Feeds, includes physical protection of all utilities against accidental or intentional disruption of services. Exterior utilities will be physically protected with bollards, fencing, or similar obstruction to prevent destruction. Where critical controls relative to utility feeds and security systems are located in rooms or areas frequented by contract employees, there must be continuous closed-circuit television (CCTV) coverage as well as tamper-proof devices on those controls such as fencing, locks, or other protections. LSG 2.2.2.12 page 18(5) has been revised to state that to prevent unauthorized access to control panels or critical systems, keys must be secured and controlled. Closed. 
We verified that the LSG requires physical protection of all main utility feeds against accidental or intentional disruption of service. While the LSG does not require that critical utility or security controls not be located in areas requiring frequent access, the LSG does require that frequently accessed areas where utility feeds are present must be continuously monitored with CCTV coverage as well as tamper-proof devices installed on those controls such as fencing, locks, or other protections. Therefore, we believe that IRS’s corrective actions meet the objective of our recommendation. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. Mission Assurance has developed and incorporated a CCTV evaluation matrix into the security review process ensuring that critical areas and assets are monitored. The January 1, 2007, LSG was revised under LSG 2.2.2.13.1(6) (CCTV Cameras), which states that pan, tilt, zoom (PTZ) cameras shall be installed in mail sorting, mail delivery, mail extraction, exceptions processing, and certified mail processing areas to ensure sites have the capability to observe, monitor, and record mail extraction activity and to assist in monitoring. Also, the LSG requires that IRS security controls, equipment, and utilities must be locked to prevent tampering and that keys will be controlled and limited to authorized bank employees. Mission Assurance also included key and combination controls and management as part of its review process at the banks. Closed. We verified that the LSG requires physical protection of all main utility feeds against accidental or intentional disruption of service. 
While the LSG does not require that critical utility or security controls not be located in areas requiring frequent access, the LSG does require that frequently accessed areas where utility feeds are present must be continuously monitored with CCTV coverage as well as tamper-proof devices installed on those controls such as fencing, locks, or other protections. Therefore, we believe that IRS’s corrective actions meet the objective of our recommendation. 05-32 Establish policies and procedures to require appropriate segregation of duties in small business/self-employed units of field offices with respect to preparation of Payment Posting Vouchers, Document Transmittal forms, and transmittal packages. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. The procedures have been established, updated, and approved for IRM 5.1.2. Hard copies were shipped to all applicable employees on September 15, 2006. Open. IRS has taken corrective actions to address this recommendation. However, the corrective actions do not address segregation of duties in SB/SE business units. We will continue to evaluate IRS’s corrective action in our fiscal year 2007 audit. requirement that a document transmittal form listing the enclosed Daily Report of Collection Activity forms be included in transmittal packages, using such methods as more frequent inspections or increased reliance on error reports compiled by the service center teller units receiving the information. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. Since 2005, W&I Field Assistance has taken a number of actions to emphasize the requirement for including a document transmittal form listing the Daily Report of Collection Activity forms in transmittal packages. 
These actions include (1) providing remittance training to all TAC managers in 2005 that covered procedures for remittance processing, (2) conducting operational reviews in fiscal year 2006 to ensure TAC adherence to required IRM procedures, (3) identifying best practice ideas, and (4) assessing conformance to current policies and procedures. A review of error reports for fiscal year 2005 and fiscal year 2006 shows a 38 percent decrease in the number of “Other 795/3210” errors. IRS attributes the decrease in errors (from 2,753 to 1,696) to the actions described above and others designed to improve remittance procedures. The SB/SE procedures have been established, updated, and approved in IRM 5.1.2. Hard copies were shipped to all applicable employees on September 15, 2006. Open. During our fiscal year 2006 audit, we identified that a document transmittal was not always included when multiple Daily Report of Collection Activity forms were sent to the aligned service center campus. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 fieldwork. for SB/SE field office units to track Document Transmittal forms and acknowledgements of receipt of Document Transmittal forms. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. The procedures have been established, updated, and approved for IRM 5.1.2. Hard copies were shipped to all applicable employees on September 15, 2006. Closed. We verified that IRS has established procedures that require SB/SE employees to track document transmittal forms and the acknowledgement of receipt for these forms. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. The procedures have been established and incorporated into the latest revision of IRM 1.4.50. Closed. 
We verified that IRS has established procedures, which require SB/SE to review the recording, transmittal, and receipt of acknowledgements of the document transmittal forms. prevent the generation or disbursement of refunds associated with accounts with unresolved AUR discrepancies, including placement of a freeze or hold on all such accounts, until the AUR review has been completed. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. The procedures in IRM 3.8.45 were revised on February 1, 2007. A Hot Topic was also issued on January 25, 2007, which will add procedures to IRM 3.17.10 to check for cases that can be identified as an Automated Under Reporter (AUR) payment and research IDRS for CP2000 Indicators: TC 922, “F” Freeze Code, and campus underreporter programs. Open. We verified that IRM 3.8.45 was revised on February 1, 2007, and the Hot Topic was issued on January 25, 2007. However, the IRM revision and the Hot Topic issuance were subsequent to our fieldwork. We will continue to follow up on IRS’s progress on this issue during our fiscal year 2007 audit. 05-37 Enforce documentation requirements relating to authorizing officials charged with approving manual refunds. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. Hot Topics were issued on July 26, 2006, and August 28, 2006, reinforcing the requirements for the authorization memorandum. This is also being reviewed as part of the Monthly Security Review Checklist. Open. During our fiscal year 2006 audit, we continued to find that memorandums submitted to the manual refund units listing officials authorized to approve manual refunds did not meet documentation requirements. We verified that IRS (1) issued the Hot Topics, and (2) included the documentation requirements for the authorization memorandum in its Checklist. 
However, the Hot Topics and Checklist were issued subsequent to our fieldwork. We will continue to follow up on IRS’s efforts to improve the documentation requirements during our fiscal year 2007 financial audit. for monitoring accounts and reviewing monitoring of accounts. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. An alert (#07021) was issued on October 24, 2006, to enforce the manual refund guideline procedures to monitor accounts to prevent duplicate refunds. IRM requirements to make certain refunds are controlled and monitored will be emphasized during yearly training (and quarterly meetings with IRS organizations that initiate manual refunds). Open. During our fiscal year 2006 audit, we continued to find instances where the manual refund initiators did not monitor accounts to prevent duplicate refunds. We also found that the supervisors did not review the initiator’s work to ensure that the monitoring of accounts was performed. We verified that IRS issued Alert No. 07021; however, it was issued subsequent to our fieldwork. We will continue to review IRS’s monitoring and review efforts during our fiscal year 2007 financial audit. for documenting monitoring actions and supervisory review. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. An alert (#07021) was issued on October 24, 2006, to enforce the documentation requirements. IRM requirements to make certain refunds are controlled and monitored will be emphasized during yearly training. Likewise, IRS continues to leverage tools such as the Manual Refund Check Sheet and Monthly Security Reviews to ensure compliance with IRM requirements. Open. During our fiscal year 2006 audit, we continued to find instances where the requirements for documenting monitoring actions and documenting supervisory review were not always enforced. We verified that IRS issued Alert No. 
07021; however, it was issued subsequent to our fieldwork. We will continue to monitor IRS’s efforts in documenting monitoring actions and supervisory review during our fiscal year 2007 financial audit. requirement that command code profiles be reviewed at least once annually. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. Submission Processing issued a Hot Topic on January 10, 2007. To verify accounting is in compliance with IRM 3.17.79.1.7, the Manual Refund Unit will include a signed and dated copy of the Command Code: RSTRK input (action performed through the use of the Integrated Data Retrieval System (IDRS)) in the file with the authorization memorandums. This documentation will be included in the fiscal year 2007 File. A conference call was held with all of the Accounting Operations on January 25, 2007, to answer any questions related to the Hot Topic. In addition, an item has been added to the Monthly Security Review Checklist that includes a review of this requirement. Open. During our fiscal year 2006 audit, we found one case where a Certifying Officer’s command code profile had not been reviewed in over 12 months. We verified that IRS issued the Hot Topic in January 2007, and modified the Monthly Security Review Checklist reminding centers of the requirement. However, the Hot Topic and modifications were issued subsequent to our fieldwork. We will continue to follow up on IRS’s efforts to improve the review requirements during our fiscal year 2007 financial audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed June 30, 2006. The IDRS Security Law Enforcement Manual (LEM) and IRM have been updated and implemented to reflect that employees are not to review their own command code profiles. 
MA & SS Chief also signed a memorandum advising business units of the requirement to not have reviewing employees in the same IDRS unit as the employees they review. In addition, MA & SS worked with business units to set up separate IDRS units so that unit security representatives and managers can ensure separation of reviewer from reviewed. This latter activity is being tracked to ensure the required separation is put into effect. Open. During our fiscal year 2006 audit, we found that the IRM had not been updated to specify that staff members do not review their own command code profiles. We will continue to monitor IRS’s efforts in preventing staff members from reviewing their own command code profiles during our fiscal year 2007 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr. 27, 2005) Closed. ALS receives a master file data extract listing modules where liabilities have been fully paid. The data extract that is matched against information in the ALS system automatically releases liens when there is a match, including restricted interest and penalty modules. After a review of 300 satisfied modules, IRS identified five cases with additional restricted interest or penalty. The remaining amounts due after computation were for very small amounts, less than $10. On the basis of those reviews, IRS determined these cases should receive systemic releases. Closed. IRS reprogrammed its ALS to automatically release liens once the taxpayer’s account was fully paid, even if it contains a manual interest indicator. Previously, IRS’s IRM required it to review accounts containing a manual interest or penalty indicator, to determine whether the manually recorded interest and penalty amounts were correct and whether it should assess the taxpayer any additional interest or penalty before releasing the lien. 
IRS has decided not to hold up the lien release on such accounts for a review and has changed its computer programming to automatically release the lien once the account balance reaches zero. We obtained and reviewed a computer extract from February 2007 showing that accounts containing manual interest or penalty are no longer held up from automated lien release. Inquiry Unit managers or supervisors document their review of all forms used to record and transmit returned refund checks prior to sending them for final processing. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. Form 3210 is the only form used by W&I Accounts Management (AM). IRM 21.4.3, Returned Refunds/Releases, contains procedures for transmitting returned refund checks to the Regional Finance Center utilizing Form 3210. Although the procedures do not require the manager to initial the Form 3210, procedures are in place in the Manager’s IRM to do periodic reviews. AM will explore an effective plan to address this control during fiscal year 2007. It has been determined that adding reminders to the AM Program Letter will not produce the desired result. Procedures for consistent review of Forms 3210 have been drafted for addition to IRM 1.4.16. This will enhance guidelines on periodic reviews. The draft procedures are pending approval through the IRM clearance process. Open. During our fiscal year 2006 audit, we identified instances at two of four SCCs we visited in which Refund Inquiry Unit managers or supervisors did not document their review of all forms used to transmit returned refund checks prior to sending them for final processing. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. 
with existing requirements that all IRS units transmitting taxpayer receipts and information from one IRS facility to another, including SCCs, TACs, and units within LMSB and TE/GE, establish a system to track acknowledged copies of document transmittals. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. W&I Accounts Management has drafted procedures for the consistent review of receipt transmittals (Form 3210) for inclusion in IRM 1.4.16. Reviews will enforce existing requirements. Newly drafted procedures have been written to provide additional time frames and documentation requirements for Accounts Management employees sending transmittal forms. The draft procedures are pending approval. W&I Field Assistance approved SERP updates on July 14, 2006, to establish a process to monitor acknowledgements of transmittal forms received from the service centers. IRMs 21.3.4.7 and 1.4.11.19.1 were revised to provide procedures for requiring TACs to follow up with Submission Processing Centers (SPC) when acknowledgements are not received within 10 days. The acknowledgement copies of transmittals received from SPCs must be documented with the date received in the TAC. When missing acknowledgement copies are identified, the TAC employee must document follow-up actions to resolve the missing acknowledgements. The documentation must either be recorded on or attached to the group copy of the transmittals. IRM 1.4.11.19.5 was revised to require TAC group managers to prepare weekly payment processing and reconciliation reviews and to provide documentation feedback to employees. LMSB has issued procedures to the field on the responsibilities for using receipt transmittals. 
TE/GE addressed this issue during the fiscal year 2006 Annual Assurance Review by responding to question 6.2, “Checks received from taxpayers are sent to the Service Center within one business day via Form 3210 (Document Transmittal), and the Service Center is contacted if the acknowledgement copy of Form 3210 is not received from the Service Center within 10 days.” All managers responded “yes.” Open. During our fiscal year 2006 audit, we identified multiple instances at two TACs (and at one SCC we visited) where IRS employees did not follow existing requirements when transmitting taxpayer receipts and information. In addition, during our subsequent review of TAC corrective actions in this area, we found that the reviews required by the TAC managers were not always performed as intended. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. document the follow-up procedures performed in those cases where transmittals have not been timely acknowledged. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. W&I Accounts Management has drafted procedures for the consistent review of receipt transmittals (Form 3210) for inclusion in IRM 1.4.16. Reviews will enforce existing requirements. Newly drafted procedures have been written to provide additional time frames and documentation requirements for Accounts Management employees sending transmittal forms. The draft procedures are pending approval. W&I Field Assistance approved SERP updates on July 14, 2006, to establish a process to monitor acknowledgements of transmittal forms received from the service centers. IRMs 21.3.4.7 and 1.4.11.19.1 were revised to provide procedures for requiring TACs to follow up with SPCs when acknowledgements are not received within 10 days. The acknowledgement copies of transmittals received from SPCs must be documented with the date received in the TAC. 
When missing acknowledgement copies are identified, the TAC employee must document follow-up actions to resolve the missing acknowledgements. The documentation must either be recorded on or attached to the group copy of the transmittals. IRM 1.4.11.19.5 was revised to require TAC group managers to prepare weekly payment processing and reconciliation reviews and to provide documentation feedback to employees. LMSB has issued procedures to the field on the responsibilities for using receipt transmittals. TE/GE addressed this issue during the fiscal year 2006 Annual Assurance Review by responding to question 6.2, “Checks received from taxpayers are sent to the Service Center within one business day via Form 3210 (Document Transmittal), and the Service Center is contacted if the acknowledgement copy of Form 3210 is not received from the Service Center within 10 days.” All managers responded “yes.” Open. This recommendation affects TAC, LMSB, and TE/GE business units. We were only able to verify that for TACs, IRS has issued guidance for employees to document the follow-up procedures in those cases where transmittals have not been timely acknowledged. However, during our subsequent review of TAC corrective actions in this area, we found that the reviews required in the July 2006 IRM update were not always performed as intended. We will continue to evaluate IRS’s corrective actions in our fiscal year 2007 audit. or supervisors document their reviews of document transmittals to ensure that taxpayer receipts and/or taxpayer information mailed between IRS locations are tracked according to guidelines. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. W&I Accounts Management has drafted procedures for the consistent review of receipt transmittals (Form 3210) for inclusion in IRM 1.4.16. Reviews will enforce existing requirements. 
Newly drafted procedures have been written to provide additional time frames and documentation requirements for Accounts Management employees sending transmittal forms. The draft procedures are pending approval. W&I Field Assistance approved SERP updates on July 14, 2006, to establish a process to monitor acknowledgement of transmittal forms received from the service centers. IRMs 21.3.4.7 and 1.4.11.19.1 were revised to provide procedures for requiring TACs to follow up with SPCs when acknowledgements are not received within 10 days. The acknowledgement copies of transmittals received from SPCs must be documented with the date received in the TAC. When missing acknowledgement copies are identified, the TAC employee must document follow-up actions to resolve the missing acknowledgements. The documentation must either be recorded on or attached to the group copy of the transmittals. IRM 1.4.11.19.5 was revised to require TAC group managers to prepare weekly payment processing and reconciliation reviews and to provide documentation feedback to employees. LMSB has issued procedures to the field on the responsibilities for using receipt transmittals. TE/GE addressed this issue during the fiscal year 2006 Annual Assurance Review by responding to question 6.2, “Checks received from taxpayers are sent to the Service Center within one business day via Form 3210 (Document Transmittal), and the Service Center is contacted if the acknowledgement copy of Form 3210 is not received from the Service Center within 10 days.” All managers responded “yes.” Open. This recommendation affects SCC, TAC, LMSB, and TE/GE business units. During our fiscal year 2006 audit, we continued to find instances where managers/designees did not document their reviews of document transmittals to ensure that taxpayer receipts and/or taxpayer information mailed between IRS locations were tracked according to guidelines.
In addition, during our subsequent review of TAC corrective actions, we found that the reviews required in the July 2006 IRM update were not always performed as intended. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. adequate physical security controls to deter and prevent unauthorized access to restricted areas or office space occupied by other IRS units, including those TACs that are not scheduled to be reconfigured to the “new TAC” model in the near future. This includes appropriately separating customer service waiting areas from restricted areas in the near future by physical barriers such as locked doors marked with signs barring entrance by unescorted customers. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. FA surveyed each area office and provided this information to AWSS and MA & SS. Open. During our fiscal year 2006 audit, IRS continued to develop guidelines to address unauthorized access to restricted areas. These corrective actions were not complete at the conclusion of our fiscal year 2006 audit. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. to a central monitoring station or local police department or institute appropriate compensating controls when these alarm systems are not operable or in place. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. MA & SS has coordinated with AWSS/Real Estate and Facilities Management (REFM) and W&I to ensure duress alarms for 387 (97 percent) of the 400 TAC offices are currently connected to a central monitoring station and will work in partnership to address reported deficiencies. MA & SS and FA have determined that testing of duress alarms will occur not less than once each calendar quarter (3 months) and results will be reviewed and documented.
Plans to connect the remaining 13 offices are in progress and are being tracked until complete (status report provided as supporting documentation). Each of these 13 offices is currently equipped with duress alarms that annunciate locally, and compensating control procedures are in place to ensure 911 is contacted for emergency assistance. IRM 1.16.12 has been revised to set forth alarm testing procedures and to ensure TAC personnel know to contact 911 when alarms are not operable or in place. Closed. During fiscal year 2006, we verified that IRS revised its policy, which outlines duress alarm testing requirements at TACs as well as guidelines for employees to follow when working with duress alarms, particularly at TACs with nonoperable duress alarms. visits by offsite managers to TACs not having a manager permanently on-site. This documentation should be signed by the manager and should (1) record the time and date of the visit, (2) identify the manager performing the visit, (3) indicate the tasks performed during the visit, (4) note any problems identified, and (5) describe corrective actions planned. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. The checklist for managers to use to document visits to outlying TACs was included as an exhibit in the April 27, 2006, update of IRM 1.4.11.6.2. Open. We verified during our fiscal year 2006 audit that the IRM had been updated to include a method for managers to document their visits to remote TACs. However, during our subsequent review of the reports prepared by TAC managers, we were unable to determine the scope and content of what was observed or accomplished during the visit. We will continue to evaluate IRS’s implementation of its corrective actions during our fiscal year 2007 audit.
requirement that all security or other responsible personnel at SCCs and lockbox banks record all instances involving the activation of intrusion alarms regardless of the circumstances that may have caused the activation. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. In January 2006, LSG 2.2.3.1.5 (6) was revised to add the requirement that the banks maintain a logbook of incident reports and any applicable supporting documentation, noting corrective follow-up actions taken on each incident. The logbook must be maintained in sequential date order. This was reinforced in the 2007 LSG, which states, “The bank must maintain a logbook of incident reports and any applicable supporting documentation.” Corrective follow-up actions must be documented and included with the original incident report. The logbook must be maintained in sequential date order. A review of the incident reports and associated logbooks has shown that although not specifically directed at intrusion alarms, the logbook of incident reports is also used to record all alarm events. LSG section 2.2.2.14 Intrusion Detection System (IDS), paragraph (7) will be revised to add “A record of all instances involving the activation of intrusion alarms, regardless of the circumstances that may have caused the activation, must be maintained in the Daily Activity Report/Log or other incident logbook.” At the SCCs, field security analysts have been advised to reiterate to the campus guard force that all activations of intrusion alarms, whether during tests (by staff or oversight auditors), inadvertently, or by actual security breach violations, must be recorded/documented. Existing unannounced alarm testing procedures and the associated Alarm Test Report form have been modified to incorporate a review of the guard console timeline log to test guard adherence to this requirement.
Recording of all alarm activations has been added to the Physical Security Audit Management Checklist, reviewed by field management. Open. During our fiscal year 2006 audit, we identified instances at two of four lockbox banks we visited in which activations of intrusion alarms were not recorded by security guards. We will evaluate IRS’s planned corrective actions during our fiscal year 2007 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. W&I has issued a memorandum to MA & SS to address this issue. Open. IRS’s actions to address this issue are currently in process. We will evaluate IRS’s corrective actions in our fiscal year 2007 audit. bank’s security review checklist to ensure that it encompasses reviewing security incident reports to validate whether security personnel are providing corrective actions related to the incidents cited. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. The Security Review Checklist was updated June 5, 2006, and all follow-up actions have been completed by the Lockbox Security Team. SP worked with IRS Mission Assurance and FMS to ensure the physical security review checklist was updated to include reviews of the security incident reports and to validate that security personnel are providing corrective actions related to the incidents that were cited. Closed. We verified that the lockbox bank physical security DCI had been updated to include a review to ensure that security incidents are documented. nature of its periodic reviews of candling processes at SCCs to ensure they (1) encompass tests of whether envelopes are properly candled through observation of candling in process and inquiry of employees who perform initial and final candling and (2) document the nature and scope of the test and observation results.
(short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. The Security Review Checklist has been revised to document, through observation, the effectiveness of the initial and final candling process. Employee inquiry continues to be a part of the Monthly Campus and National Office Security Reviews. Open. We verified that IRS revised its Security Review Checklist to document, through observation, the effectiveness of the initial and final candling process. IRS states that employee inquiry continues to be part of the monthly campus and national office security reviews. However, IRS did not provide documentation demonstrating (1) that inquiries were made of employees who perform initial and final candling (e.g., to evaluate employees’ awareness of candling procedures) and (2) the nature and scope of the tests conducted (e.g., number of employees and a brief description of the extent of the candling observation). We will continue to monitor IRS’s corrective actions during our fiscal year 2007 audit. policies and procedures at lockbox banks to ensure that all remittances of $50,000 or more are processed immediately and deposited at the first available opportunity. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. On April 13, 2006, Lockbox Electronic Bulletin (LEB) 200613 (Remittances of $50,000 or more) was distributed throughout the Lockbox Network. The LEB updated LPG 3.2(4) and LPG 3.2.7.1(1) to state the following: “If $50,000 or more is discovered in any type of work, it should be expedited and deposited on the first available deposit.” In addition, lockbox management must ensure that remittances of $50,000 or more are not left unattended, including at disruptive times such as shift changes, breaks, meetings, etc. These remittances must be collected and then batched for expedited processing.
Additionally, management will continue to provide training reminders and actively monitor the work in process for compliance with high-dollar procedures. Closed. We verified that the LPG requires that remittances of $50,000 or greater not be left unattended, including at disruptive times such as shift changes, breaks, and meetings, and that they be expedited and deposited on the first available deposit. Also, we verified that a review checkpoint was added to the Processing Internal Controls DCI for lockbox banks to ensure that remittances of $50,000 or greater are processed expeditiously and are not left unattended, including at disruptive times such as shift changes, breaks, and meetings. During our fiscal year 2006 audit, we found no instances of remittances of $50,000 or more that were not processed immediately or deposited at the first available opportunity. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. A review checkpoint was added to the Processing Internal Controls DCI, which was implemented during the April 2006 on-site review performed by the lockbox field coordinators (LFC). The review requires the LFC to ensure that there is an internal control in place to expedite remittances of $50,000 and over, and that lockbox management is ensuring these remittances are collected from all areas at the end of each shift and prior to breaks, then batched and sent for processing. Closed. We verified that a review checkpoint was added to the Processing Internal Controls DCI for lockbox banks to ensure that remittances of $50,000 or greater are processed expeditiously and are not left unattended, including at disruptive times such as shift changes, breaks, and meetings. During our fiscal year 2006 audit, we found no instances of remittances of $50,000 or more that were not processed immediately or deposited at the first available opportunity.
Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. The physical security of submission processing centers is a key priority for IRS. IRS has many physical security controls protecting the perimeter of facilities and access to buildings. Protections include fencing, CCTV, perimeter entrances protected with electronic gates, and security guards. Building access is protected with electronic access controls (key cards), portals, and security guards. We continually monitor the physical security of our submission processing facilities and conduct various reviews to assess our security posture. Our continuous monitoring includes (1) comprehensive risk assessments conducted every 2 years, (2) physical security compliance reviews conducted every 2 years, and (3) an Audit Management Checklist process that is conducted quarterly. The quarterly Audit Management Checklist process includes specific evaluations of the effectiveness of controls intended to ensure that only individuals with proper credentials are permitted access to submission processing centers and the review of the integrity of perimeter security. Additionally, for lockbox banks, on January 1, 2007, IRS revised LSG, Section 2.2.3.1(6)k, to restrict access of all delivery personnel. The IRS Lockbox Security Review Team observed the Lockbox Site’s handling of delivery personnel while on site to ensure compliance with the LSG requirement. In addition, section 2.2.2.13.1 (CCTV Cameras) (2)g of the LSG was revised to add that cameras must capture images of all persons entering and exiting perimeter doors and other critical ingress/egress points, including but not limited to the computer room and closets containing main utility feeds. Open.
We verified that (1) the Audit Management Checklist verifies that guards check photo identification of all visitors before permitting access to SCCs and ensures that CCTV surveillance systems at SCCs provide complete and unobstructed exterior coverage of the entire fence line and perimeter of the facility and (2) the LSG was revised to restrict access of all delivery personnel at lockbox banks. However, during our fiscal year 2006 audit, we continued to find weaknesses in controls over access to the facility and/or surrounding perimeter at three SCCs and one lockbox bank we visited. We found instances of gaps in security fences at two SCCs, overgrown shrubbery that obstructed the view of security personnel at one SCC, courier company personnel delivering the lockbox accounting package who were not always listed on the hard copy access list at the entry gate at one SCC, and a courier whose face and/or badge was not recognizable through a camera prior to the courier being granted access to the loading dock area at one lockbox bank. The corrective actions cited by IRS were subsequent to our fiscal year 2006 audit. We will evaluate the effectiveness of these actions during our fiscal year 2007 audit. security procedures in the Internal Revenue Manual (IRM) to require that all SCCs and any respective annex facilities processing taxpayer receipts and/or information perform and document monthly tests of the facility’s intrusion detection alarms. At a minimum, these procedures should (1) outline the type of test to be conducted, (2) include criteria for assessing whether the controls used to respond to the alarm were effective, and (3) require that a logbook be maintained to document the test dates, results, and response information. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open.
MA & SS and Agency-Wide Shared Services (AWSS) will update the IRMs and LPG related to the SCCs’ alarm testing procedures to include a description of the types of tests to be conducted, criteria for assessing controls, and the logging requirements by August 2007. Open. We will continue to evaluate IRS’s corrective actions during our fiscal year 2007 audit. require that a completed Form 13094 with a positive recommendation be provided for every juvenile hired to any position that will allow access to taxpayer receipts and/or taxpayer information. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. Policy was amended to require that all juveniles being considered for employment with IRS complete Form 13094 (Recommendation for Juvenile Employment with IRS) with a positive recommendation. This requirement is mandatory for employment with IRS. Closed. During our fiscal year 2006 audit, we verified that IRS amended its juvenile hiring policy to ensure that only those juveniles receiving positive recommendations will be permitted access to taxpayer receipts and information. to verify the information on the Form 13094 by contacting the reference directly. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. Policy was amended to establish procedures that require IRS personnel to verify all completed forms with a positive recommendation by contacting the reference directly. Closed. During our fiscal year 2006 audit, we verified that IRS amended its juvenile hiring policy to ensure that IRS personnel verify the information provided on Form 13094 via direct contact with the reference.
06-18 Revise the Form 13094 to require the reference to describe his/her relationship with the juvenile, including the extent of first-hand contact, to allow IRS to review the forms and assess whether the reference has a sufficient basis to recommend the juvenile for a position of trust. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. Form 13094 was modified to include two additional boxes for the reference to include their relationship to the juvenile and the number of years they have known the juvenile. Form 13094 has a revision date of August 20, 2006, and is available on the IRS’s publication Web site. Closed. During our fiscal year 2006 audit, we verified that IRS amended its juvenile hiring policy to require that the references indicate how well they know the potential juvenile hire. In addition, Form 13094 was also revised to request that the references provide this information via a check-box system. for hiring juveniles who do not have a current teacher, principal, counselor, employer, or former employer, and clarify that IRS’s current policies and procedures should not be interpreted to mean that such juveniles should be allowed access to taxpayer receipts and information without a Form 13094 or its equivalent. These procedures could include a list of acceptable alternatives that may serve as references for juveniles who do not have a current teacher, principal, or guidance counselor. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. Form 13094 was modified with the following sentences added: “Form should be completed by a person who has personal knowledge of the applicant’s character and trustworthiness. If the applicant is attending school or has graduated, this form must be completed and signed by the current or former school official (i.e., principal, guidance counselor, or teacher).
If the applicant is not in school and is currently employed or unemployed, the form must be completed and signed by either a current or former employer.” Form 13094 has a revision date of August 20, 2006, and is available on the IRS’s publication Web site. Closed. During our fiscal year 2006 audit, we verified that IRS amended its juvenile hiring policy to provide accepted alternative references if the juvenile does not have a current teacher, principal, or guidance counselor. Form 13094 has also been revised to include this information. accounting treatment of expense and P&E transactions and reliable financial reporting, enforce existing property and equipment capitalization policy to ensure that it is properly implemented to fully achieve management’s objectives, including recognizing assets when its capitalization criteria are met and recognizing expenses when they are not. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. IRS implemented a dollar threshold for the ongoing monthly review of P&E transactions beginning in March 2006. In addition, the CFO and Chief, Agency-Wide Shared Services, jointly issued a memorandum to all executives entitled “Internal Transaction Control and Accuracy Improvement,” emphasizing responsibility for accurate transaction coding, in April 2006. Also, the CFO and procurement offices jointly completed a review of material code descriptions and implemented appropriate changes in the requisition tracking system and IFS; implemented a process to review the material group assigned to transactions at the point of requisition to drive the transaction coding as either P&E or expense; and initiated a feedback process regarding material group coding errors found after receipt and acceptance. Closed.
On the basis of our fiscal year 2006 testing of P&E and nonpayroll expenses, we confirmed that IRS has improved the accuracy and reliability of its P&E records by enhancing accounting code definitions in its new financial management system to make it easier for users to select the proper accounting codes for recording transactions, improving coordination among units involved in processing P&E activity, and streamlining its analysis of P&E transactions most susceptible to misclassification. 06-21 Generate aging reports when an asset remains in pending disposal status for longer than a specified period of time. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. IRS has initiated a reengineering effort focused on the entire asset retirement and disposal process. As such, reports are currently available to monitor aging transactions during the disposal life cycle. Additionally, procedures are in place that require reviews of aging reports for the timely recording of disposal transactions. Substantial software modifications were designed to improve the recording of information by replacing manual data entry methods with electronic forms, signatures, and processes. In August 2006, these modifications and review procedures were implemented to streamline the recording of asset disposal activity as required by IRS policy. Open. During fiscal year 2006, IRS reengineered the P&E asset retirement and disposal process. The new process was intended to generate exception reports that would enable management to monitor the aging of transactions during the disposal process. Since this reengineering was still in process during our fiscal year 2006 P&E testing, we will test the new process during our fiscal year 2007 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed.
AWSS and the Chief Information Officer property managers reengineered the entire asset retirement and disposal process to mitigate issues raised in GAO’s fiscal year 2005 financial statement audit. As such, reports are available weekly for management to monitor the status of aging transaction dates until the disposal process is complete. Also, review procedures were streamlined to ensure the timely recording of disposal transactions. In August 2006, reengineered process modifications and review procedures were implemented and guidance for conducting reviews was issued. Open. During fiscal year 2006, IRS reengineered the P&E asset retirement and disposal process. The new process was intended to generate exception reports that would enable management to monitor the aging of transactions during the disposal process. Since this reengineering was still in process during our fiscal year 2006 P&E testing, we will test the new process during our fiscal year 2007 audit. policy requiring that all lockbox banks encrypt backup media containing federal taxpayer information. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Because this is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it. Open: This is a recent recommendation. We will review IRS’s corrective actions during future audits. 07-02 Ensure that lockbox banks store backup media containing federal taxpayer information at an off-site location as required by the 2006 LSG. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Because this is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it. Open: This is a recent recommendation. We will review IRS’s corrective actions during future audits.
Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Because this is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it. Open: This is a recent recommendation. We will review IRS’s corrective actions during future audits. appropriate corrective actions for any gaps in closed-circuit TV (CCTV) camera coverage that do not provide an unobstructed view of the entire exterior of the SCC’s perimeter, such as adding or repositioning existing CCTV cameras or removing obstructions. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Because this is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it. Open: This is a recent recommendation. We will review IRS’s corrective actions during future audits. quarterly physical security reviews to require analysts to (1) document any issues identified as well as planned implementation dates of corrective actions to be taken and (2) track the status of corrective actions identified during the quarterly assessments to ensure they are promptly implemented. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Because this is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it. Open: This is a recent recommendation. We will review IRS’s corrective actions during future audits. contained in the Manual Refund Desk Reference to reflect the IRM requirements for manual refund initiators to (1) monitor the manual refund accounts in order to prevent duplicate refunds and (2) document their monitoring actions. (short-term) Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Because this is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it. Open: This is a recent recommendation.
We will review IRS’s corrective actions during future audits.

The open recommendations below are from Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007). Because each is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it; each remains open, and we will review IRS’s corrective actions during future audits. All are short-term recommendations.

- 07-07 Provide to all IRS units responsible for processing manual refunds the same and most current version of the Manual Refund Desk Reference.
- … or supervisors provide the manual refund initiators in their units with training on the most current requirements to help ensure that they fulfill their responsibilities to monitor manual refunds and document their monitoring actions to prevent the issuance of duplicate refunds.
- … program to check for outstanding tax liabilities associated with both the primary and secondary Social Security Numbers shown on a joint tax return and apply credits to those balances before issuing any refund.
- Instruct Revenue Officers making the TFRP assessments to research whether the responsible officers are filing jointly with their spouses and to place a refund freeze on the joint account until the computer programming change can be completed.
- … calculation programs in the master file so that penalties are calculated in accordance with the applicable IRC and implementing IRM guidance.
- … taxpayer accounts that may have been affected by the programming errors to determine whether they contain overassessed penalties and correct the accounts as needed.
- … and specify in the IRM that at the time of receipt, employees recording taxpayer payments should (1) determine if the payment is more than sufficient to cover the tax liability of the tax period specified on the payment or earliest outstanding tax period, (2) perform additional research to resolve any outstanding issues on the account, (3) determine whether the taxpayer has outstanding balances in other tax periods, and (4) apply available credits to satisfy the outstanding balances in other tax periods.
- … and specify in the IRM that employees review taxpayer accounts with freeze codes that contain credits weekly to (1) research and resolve any outstanding issues on the account, (2) determine whether the taxpayer has outstanding balances in other tax periods, and (3) apply available credits to satisfy the outstanding balances in other tax periods.
- Issue a memorandum to employees in the Centralized Insolvency Office reiterating the IRM requirement to timely record bankruptcy discharge information onto taxpayer accounts in the master file or to manually release the liens in ALS.
- Issue a memorandum to employees in the Centralized Lien Processing Unit reiterating the IRM requirement to date stamp and maintain the billing support voucher as evidence of timely processing by IRS.
- …
- … recorded installment agreement user fees as necessary to correctly reflect the user fees IRS earned and collected from taxpayers.
- …
- … sufficient secured storage space to properly secure and safeguard its property and equipment inventory, including in-stock inventories, assets from incoming shipments, and assets that are in the process of being excessed and/or shipped out.
- … procedures to require that separate individuals place orders with vendors and perform receipt and acceptance functions when the orders are delivered.

The open recommendations below are from Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007). Because each is a recent recommendation, GAO did not obtain information on IRS’s status in addressing it; each remains open, and we will review IRS’s corrective actions during future audits. All are short-term recommendations.

- 07-22 Document the results of internal control tests conducted in a manner sufficiently clear and complete to explain how control procedures were tested, what results were achieved, and how conclusions were derived from those results, without reliance on supplementary oral explanation.
- … it considered existing reviews and audits in determining the nature, scope, and timing of procedures it planned to conduct under its A-123 process.
- … intends to use the information security work conducted under the Federal Information Security Management Act of 2002 (FISMA) to meet related A-123 requirements, identify the areas where the work conducted under FISMA does not meet the requirements of OMB Circular No. A-123 and, considering the findings and recommendations of our work on IRS’s information security, expand FISMA procedures or perform additional procedures as part of the A-123 reviews to augment FISMA work.
- 07-25 Revise test plans to include appropriate consideration of the design of internal controls in addition to implementation of controls over individual transactions.
- … identify laws and regulations that are significant to financial reporting, test controls over compliance with those laws and regulations, and evaluate and report on the results of such control reviews.
- … appropriate A-123 follow-up procedures for the last three months of the fiscal year to be implemented once the material weaknesses identified through the annual financial statement audits have been resolved.
- … staff appropriate training, such as that available for financial auditors, to enhance their skills in workpaper documentation, identification and testing of internal controls, and evaluation and documentation of results.

The following individuals made major contributions to this report: Gloria Cano, Stephanie Chen, William J. Cordrey, Nina Crocker, John Davis, Charles Ego, Charles Fox, John Gates, Ted Hu, Jerrod O’Nelio, John Sawyer, Angel Sharma, Peggy Smith, Cynthia Teddleton, LaDonna Towler, Truc Tuck, and Gary Wiggins.
|
In its role as the nation's tax collector, the Internal Revenue Service (IRS) has a demanding responsibility in annually collecting over $2 trillion in taxes, processing hundreds of millions of tax and information returns, and enforcing the nation's tax laws. Since its first audit of IRS's financial statements in fiscal year 1992, GAO has identified a number of weaknesses in IRS's financial management operations. In related reports, GAO has recommended corrective action to address those weaknesses. Each year, as part of the annual audit of IRS's financial statements, GAO not only makes recommendations to address any new weaknesses identified but also follows up on the status of weaknesses GAO identified in previous years' audits. The purpose of this report is to (1) assist IRS management in tracking the status of audit recommendations and actions needed to fully address them and (2) demonstrate how the recommendations relate to control activities central to IRS's mission and goals. GAO is making no new recommendations in this report. IRS has made significant progress in improving its internal controls and financial management since its first financial statement audit in 1992, as evidenced by 7 consecutive years of clean audit opinions on its financial statements, the resolution of several material internal control weaknesses, and the closing of over 200 financial management recommendations. This progress has been the result of hard work and commitment at the top levels of the agency. However, IRS still faces financial management challenges. At the beginning of GAO's audit of IRS's fiscal year 2006 financial statements, 72 financial management-related recommendations from prior audits remained open because IRS had not fully addressed the issues that gave rise to them. During the fiscal year 2006 financial audit, IRS took actions that enabled GAO to close 25 of those recommendations. 
At the same time, GAO identified additional internal control issues resulting in 28 new recommendations. In total, 75 recommendations currently remain open. To assist IRS in evaluating and improving internal controls, GAO categorized the 75 open recommendations by various internal control activities which, in turn, were grouped into three broad control activity groupings. The continued existence of internal control weaknesses that gave rise to these recommendations represents a serious obstacle that IRS needs to overcome. Effective implementation of GAO's recommendations can greatly assist IRS in improving its internal controls and achieving sound financial management and can help enable it to more effectively carry out its tax administration responsibilities. IRS acknowledged the status of GAO's recommendations and indicated its desire to ensure that its corrective actions appropriately address its internal control issues.
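The recommendation counts reported in this summary reconcile with simple arithmetic:

```python
# Figures reported in this summary (fiscal year 2006 audit).
open_at_start = 72        # open recommendations entering the FY2006 audit
closed_during_audit = 25  # recommendations closed based on IRS actions
new_recommendations = 28  # new recommendations from issues identified

currently_open = open_at_start - closed_during_audit + new_recommendations
print(currently_open)  # 75
```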
|
During the 1970s, the poor accounting practices of state and local governments put into question the security of federal funds provided to those governments. The 1975 New York City financial crisis focused increased attention on this problem. It was found that New York City consistently overestimated its revenues, underestimated its expenses, never knew how much cash it had on hand, and borrowed repeatedly to finance its deficit spending. Compounding the poor accountability practices prevalent at that time, for the most part, state and local governments were not receiving independent financial statement audits. In the early 1980s, the Congress became increasingly concerned about a basic lack of accountability for federal assistance provided to state and local governments. The assistance grew from 132 programs costing $7 billion in 1960 to over 500 programs costing nearly $95 billion by 1981. In 1984, when the Single Audit Act was signed into law, federal assistance to state and local governments had risen to $97 billion, more than doubling what it was a decade before. Before passage of the act, the federal government relied on audits of individual grants to help gain assurance that state and local governments and nonprofit organizations were properly spending federal assistance. These audits focused on whether the transactions of specific grants complied with their program requirements. The audits usually did not address financial controls and were, therefore, unlikely to find systemic problems with an entity’s management of its funds. Further, grant audits were conducted on a haphazard schedule, which resulted in large portions of federal funds being unaudited each year. The auditors conducting grant audits did not coordinate their work with the auditors of other programs. As a result, some entities were subject to numerous grant audits each year while others were not audited for long periods. 
As a solution, the concept of the single audit was created to replace multiple grant audits with one audit of an entity as a whole. Rather than being a detailed review of individual grants or programs, the single audit is an organizationwide audit that focuses on accounting and administrative controls. The single audit was meant to advise federal oversight officials and program managers on whether an entity’s financial statements are fairly presented and to provide reasonable assurance that federal assistance programs are managed in accordance with applicable laws and regulations. At the time the Single Audit Act was enacted, it received strong bipartisan support in the Congress and from state and local governments. The objectives of the Single Audit Act are to improve the financial management of state and local governments receiving federal financial assistance; establish uniform requirements for audits of federal financial assistance provided to state and local governments; promote the efficient and effective use of audit resources; and ensure that federal departments and agencies, to the extent practicable, rely upon and use audit work done pursuant to the act. The act requires each state and local entity that receives $100,000 or more in federal financial assistance (either directly from a federal agency or indirectly through another state or local entity) in any fiscal year to undergo a comprehensive, single audit of its financial operations. The audit must be conducted by an independent auditor on an annual basis, except under specific circumstances where a biennial audit is allowed. The act also requires entities receiving between $25,000 and $100,000 in federal financial assistance to have either a single audit or a financial audit required by the programs that provided the federal funds. 
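The entity-level dollar thresholds just described can be expressed as a short sketch; the function name and return labels are illustrative, not language from the act:

```python
def audit_requirement_1984(federal_assistance):
    """Classify a recipient's audit requirement under the Single Audit
    Act of 1984, using the dollar thresholds described in the text.
    `federal_assistance` is total federal financial assistance received
    in a fiscal year, in dollars.
    """
    if federal_assistance >= 100_000:
        # Comprehensive single audit, conducted annually by an
        # independent auditor (biennial only in specific circumstances).
        return "single audit required"
    if federal_assistance >= 25_000:
        # Recipient may have either a single audit or the financial
        # audits required by the programs that provided the funds.
        return "single audit or program-required financial audit"
    # Below the act's thresholds: no audit mandated by the act itself,
    # though records must still be kept and monitoring may occur.
    return "no audit required by the act"

print(audit_requirement_1984(150_000))  # single audit required
print(audit_requirement_1984(50_000))
```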
Further, where state and local entities provide $25,000 or more in federal financial assistance to other organizations (“subrecipients” of federal funds) they are required by the act to monitor those subrecipients’ use of the funds. This monitoring can consist of reviewing the results of each subrecipient’s audit and ensuring that corrective action is taken on instances of material noncompliance with applicable laws and regulations. Over the past 12 years, single audits have clearly proved their worth as important accountability tools over the hundreds of billions of dollars that the federal government provides to state and local governments and nonprofit organizations each year. As discussed in our June 1994 report, the Single Audit Act has encouraged recipients of federal funds to review and revise their financial management practices. This has resulted in the state and local governments institutionalizing fundamental reforms, such as (1) preparing annual financial statements in accordance with generally accepted accounting principles, (2) obtaining annual independent comprehensive audits, (3) strengthening internal controls over federal funds and compliance with laws and regulations, (4) installing new accounting systems or enhancing old ones, (5) implementing subrecipient monitoring systems that have greatly improved oversight of entities to whom they have distributed federal funds, (6) improving systems for tracking federal funds, and (7) resolving audit findings. The single audit process has proven to be an effective way of promoting accountability over federal assistance because it provides a structured approach to achieve audit coverage over the thousands of state and local governments and nonprofit organizations that receive federal assistance. 
Moreover, particularly in the case of block grants—where the federal financial role diminishes and management and outcomes of federal assistance programs depend heavily on the overall state or local government controls—the single audit process provides accountability by focusing the auditor on the controls affecting the integrated federal and state funding streams. At the same time, areas of improvement in the single audit process have been identified through the thousands of single audits conducted annually and a consensus has been developed on the needed solutions. I would now like to highlight these areas and strongly support the proposed amendments you are considering which would strengthen the single audit process. Last December we testified before the Senate Governmental Affairs Committee in support of changing the Single Audit Act. Those changes are reflected in S.1579, the Single Audit Act Amendments of 1996—a bill which is identical to the amendments you are now considering. Today, I will focus on the two main areas of improvement: ensuring adequate coverage of federal funds without placing an undue administrative burden on entities receiving smaller amounts of federal funds; and making single audits more useful to the federal government. The criteria for determining which entities are to be audited are based solely on dollar amounts, which have not changed since the act’s passage in 1984. The initial dollar thresholds were designed to ensure adequate audit coverage of federal funds without placing an undue administrative burden on entities receiving smaller amounts of federal assistance. In 1984, the dollar threshold criteria for entities ensured audit coverage for 95 percent of all direct federal assistance to local governments. Today, the same criteria cover 99 percent of all federal assistance to local governments. As a result, some local governments that receive comparatively small amounts of federal assistance are required to have financial audits. 
If the thresholds were raised, as is proposed in the amendments, audit coverage of 95 percent of federal funds to local governments could be maintained while roughly 4,000 local governments that now have single audits would be exempt in the future. More than 80 percent of the federal program managers we interviewed in preparing our 1994 report favored raising the thresholds to at least the levels proposed in the amendments. We strongly support the proposed change and believe it strikes the proper balance between cost-effective accountability and risk. Entities that fall below the audit threshold would still be required to maintain and provide access to records of the use of federal assistance. Also, those entities would continue to be subject to monitoring activities which could be accomplished through site visits, limited scope audits, or other means. Further, federal agencies could conduct or arrange for audits of the entities. The act’s current criteria for selecting programs to be covered as part of a single audit focus solely on dollars expended and do not consider all risk factors. In our 1994 report, we noted that less than 20 percent of the programs in our sample met the selection criteria regardless of whether they would be considered high risk. However, those few programs provided 90 percent of the entities’ federal expenditures. At the same time, programs that could be considered risky because of their complexities, changed program requirements, or previously identified problems would not have to be covered. The proposed amendments would require OMB to develop a risk-based approach to target audit resources at the higher risk programs as well as focusing on the dollars expended. We strongly support this change and note that the overwhelming majority of federal managers we interviewed agreed with this proposal. The proposed amendments include two primary changes to enhance the content and timeliness of single audit reports. 
First, a single audit reporting package can contain seven or more separate reports, and significant information is scattered throughout them. Presently, there is no requirement for a summary although several state auditors (for example, California’s state auditor) prepare summary reports. In this regard, as discussed in our 1994 report, 95 percent of the federal program managers we interviewed were very supportive of summary reports. Managers said that a summary report would save them time and enable them to more quickly focus on the most important problems the auditors found. The proposed amendments address this need by requiring auditors to provide a summary of their determinations concerning the audited entity’s financial statements, internal controls, and compliance with federal laws and regulations. We support their enactment. Second, entities now have 13 months from the end of the fiscal year to submit their single audit reports to the federal government. The proposed amendments would shorten this to 9 months. The amendments would require OMB to establish a transition period of at least 2 years for entities to comply with the shorter time frame. After the transition period, federal agencies could authorize an entity to report later than 9 months, consistent with criteria issued by OMB. We strongly support these provisions. Of the officials we surveyed, 84 percent of the federal program managers and 64 percent of the state program managers believe the 13-month time frame is excessive. Moreover, in fiscal year 1991, 44 percent of state and local governments were able to submit their reports within 9 months after the end of their fiscal years. Over time, I hope that it will be the rule, rather than the exception, for the audit reports to be submitted in less than 9 months. 
The proposed amendments would also expand the Single Audit Act to include nonprofit organizations, thereby placing all entities receiving federal funds under the same ground rules. Presently, the Single Audit Act applies only to state and local governments while nonprofit organizations are administratively required to have single audits under OMB Circular A-133, “Audits of Institutions of Higher Education and Other Nonprofit Organizations.” OMB is in the final stages of revising Circular A-133 to parallel the requirements of the proposed amendments to the Single Audit Act. The proposed amendments would provide a statutory basis for consistent, common requirements for state and local governments and nonprofit organizations. We strongly support this change. The proposed amendments would also reinforce one of the goals of the act to use single audits as the foundation for other audits. Combined with summary reporting, the ability of federal agencies to review single audit working papers, and make necessary copies, can provide valuable information in their oversight of federal assistance programs. In closing, a number of organizations have worked for some time in gaining consensus on how to make the single audit process as efficient and effective as possible. The proposed amendments you are now considering represent that consensus and have broad support among stakeholder groups, including the National State Auditors Association and the President’s Council on Integrity & Efficiency which represents the federal inspectors general. The Single Audit Act has been very successful. The amendments build on that success based on lessons learned and changed conditions over the past 12 years. We encourage the enactment of the proposed amendments and commend the Subcommittee for focusing on this important issue. Mr. Chairman, we would be pleased to work with the Subcommittee as it considers the amendments to the Single Audit Act. 
I would be happy to answer any questions that you or members may have at this time.
|
GAO discussed proposed amendments to the Single Audit Act of 1984 and the act's importance. GAO noted that: (1) Congress enacted the act in response to state and local governments' poor accounting practices and lack of accountability for federal funds; (2) audits were not uniform and some grantees were subjected to multiple annual audits while others were not audited for long periods of time; (3) state and local governments have greatly improved their accountability and financial management under the act; (4) proposed amendments would reduce administrative burdens on grantees who receive comparatively small amounts by raising audit thresholds so that audit coverage returns to the 95-percent level; (5) grantees below the thresholds would still have to maintain records and be subject to monitoring; (6) the amendments require the Office of Management and Budget to develop a risk-based approach to targeting audit resources at higher-risk programs; (7) the amendments' required summary reports would increase audit timeliness and usefulness; (8) shortening the audits' due date to 9 months from the fiscal year's close would also improve the audits' timeliness; (9) bringing nonprofit organizations under the act would subject all grantees to uniform requirements; and (10) the proposed amendments would make the single audits the basis for other audits.
|
The process for importing products into the United States involves several different private parties, as well as the U.S. government. These private parties include exporters, carriers, and importers, among others. Exporters are companies that sell goods manufactured or produced in foreign countries to the United States. Carriers are companies that transport the goods to the United States. Importers may be companies that purchase the goods from exporters or simply may be responsible for facilitating the importation of the goods. The importer of record is responsible for paying all estimated duties, taxes, and fees on those products when they are brought into the United States. Importers of record are also required to obtain a general bond to secure the payment of their financial obligations. CBP is responsible for, among other things, managing the import process (see fig. 1); collecting the duties, taxes, and fees assessed on those products; and setting the formula for establishing importers’ general bond amounts. The United States and many of its trading partners have established laws to remedy the unfair trade practices of other countries and foreign companies that cause injury to domestic industries. U.S. laws authorize the imposition of AD duties on imports that were “dumped” (i.e., sold at less than normal value) and CV duties on imports subsidized by foreign governments. As we reported in March 2008, the U.S. AD/CV duty system is retrospective, in that importers initially pay estimated AD/CV duties at the time of importation, but the final amount of duties, reflecting the actual amount of dumping or subsidization, is not determined until later. Commerce is responsible for calculating the appropriate AD/CV duty rate. CBP is then responsible for collecting the estimated AD/CV duties when goods enter the United States, and subsequently processing the final AD/CV duties (called “liquidation”) when instructed by Commerce. 
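The retrospective settlement described here can be sketched numerically. This is a minimal illustration with a hypothetical entry value and duty rates; none of the figures come from the report:

```python
def liquidate_entry(entered_value, estimated_rate, final_rate):
    """Sketch of retrospective AD/CV duty liquidation: the importer
    deposits estimated duties at entry; once Commerce determines the
    final rate, the difference is billed to or refunded to the importer.
    """
    deposited = entered_value * estimated_rate
    final_duties = entered_value * final_rate
    difference = final_duties - deposited
    if difference > 0:
        return f"bill importer ${difference:,.2f}"
    if difference < 0:
        return f"refund importer ${-difference:,.2f}"
    return "no adjustment"

# Hypothetical $500,000 entry deposited at an estimated 10% rate;
# Commerce later determines the actual dumping margin was 25%.
print(liquidate_entry(500_000, 0.10, 0.25))  # bill importer $75,000.00
```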
Liquidation may result in providing importers with a refund or sending an additional bill. A wide range of imported goods are subject to AD/CV duties, such as agricultural, chemical, steel, paper, and wooden products. Each set of AD/CV duties—detailed in an AD/CV duty order—is for a type of product from a specified country. The written “scope” of each AD/CV duty order describes the specific type of product that is subject to the duties. The duty order also lists one or more Harmonized Tariff Schedule codes associated with the product. There are duty orders in place for some types of products from several countries. For example, there are currently AD duty orders on frozen warmwater shrimp from five countries—Brazil, China, India, Thailand, and Vietnam. For some other types of products, there is a duty order in place on only one country, such as saccharin from China. As of March 2012, there were 283 AD/CV duty orders in effect, with more duty orders on products from China than from any other country (see table 1). Importers that seek to avoid paying appropriate AD/CV duties may attempt to evade them by using a variety of techniques. These techniques include illegal transshipment to disguise a product’s true country of origin, undervaluation to falsify the price of an import to reduce the amount of AD/CV duties owed, and misclassification of merchandise such that it falls outside the scope of an AD/CV duty order, among others (see fig. 2). According to CBP, importers sometimes use more than one evasion method at a time to further disguise the fact that they are importing goods subject to AD/CV duties. Because the techniques used to evade AD/CV duties are clandestine, the amount of revenue lost as a result is unknown. CBP detects and deters AD/CV duty evasion through a three-part process that involves (1) identifying potential cases of evasion, (2) attempting to verify if evasion is occurring, and (3) taking enforcement action. 
CBP begins its detection of AD/CV duty evasion by identifying potential instances of evasion, using two primary sources of information: import data and allegations from external sources. Import data is generated from the documents submitted by importers as part of the import process. Allegations are collected electronically through e-Allegations, an online system created by CBP in 2008. CBP also collects allegations via other means (such as telephone and e-mail, among others) and stores them in the e-Allegations system. As of September 2011, there were almost 400 allegations related to AD/CV duty evasion in the e-Allegations system, mostly from sources associated with affected industries. To look for anomalies that may be indicators of evasion, CBP personnel at both the local and national levels conduct targeting, analyze trends in import data, and follow up on allegations from external sources. Local targeting and analysis is conducted by CBP personnel stationed at more than 300 ports of entry, while national targeting and analysis is conducted by officials at CBP headquarters and its National Targeting and Analysis Group (NTAG) for AD/CV duty issues, located in Plantation, FL. CBP officials explained that most of their targeting involves identifying entries filed under the Harmonized Tariff Schedule codes associated with a given product that is subject to AD/CV duties and then examining the import documentation for those entries for anomalies that may suggest evasion is occurring. 
Examples of such anomalies in import documents include, but are not limited to,

- being filed under the same tariff code as a product that is subject to AD/CV duties but not being declared as subject to such duties;
- listing a country of origin that is not capable of producing the goods (or the quantity of the goods) imported—a potential indicator of illegal transshipment; and
- showing a monetary value for the goods imported that appears to be too low for the quantity or weight of goods imported—a possible sign of undervaluation.

Once CBP identifies a potential instance of evasion, it can use a variety of techniques at different points in the import process to attempt to verify if evasion is occurring. These include, but are not limited to, the following:

- targeting additional shipments made by the importer of record and conducting further data analysis to look for other anomalies that may be evidence of evasion;
- requesting that the importer provide further information, such as product invoices and other documents that can help CBP understand the transactions involved in producing and importing a good and ascertain if evasion occurred;
- sending referrals to ICE to initiate criminal investigations and gather evidence of evasion from foreign countries, such as by visiting production facilities overseas and collecting customs documents from foreign counterparts;
- performing cargo exams to inspect shipments arriving at ports of entry;
- collecting samples of products potentially brought in through evasion and conducting laboratory analysis of these samples to attempt to identify their true country of origin and other technical details that can help CBP determine if the products should be subject to AD/CV duties; and
- auditing importers suspected of evading AD/CV duties by collecting company records (such as purchase orders, shipping documents, and payment records) and examining them for discrepancies.

Figure 3 shows where in the import process CBP typically uses these techniques.
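The tariff-code-based targeting described above can be sketched in miniature. The Python snippet below is a hypothetical illustration only; the field names, tariff codes, thresholds, and anomaly rules are assumptions made for the example, not a description of CBP's actual targeting systems:

```python
# Hypothetical sketch of tariff-code-based anomaly screening for AD/CV
# duty evasion. All codes, fields, and thresholds are illustrative only.

# Tariff codes associated with products under AD/CV duty orders, and the
# countries covered by each order (hypothetical values for illustration).
ADCV_TARIFF_CODES = {"0306.17", "2925.11"}  # e.g., frozen shrimp, saccharin
ADCV_COUNTRIES = {
    "0306.17": {"BR", "CN", "IN", "TH", "VN"},
    "2925.11": {"CN"},
}

def flag_entry(entry):
    """Return a list of anomaly flags for one import entry record."""
    flags = []
    code = entry["tariff_code"]
    origin = entry["country_of_origin"]
    if code in ADCV_TARIFF_CODES:
        # Filed under a covered tariff code from a covered country, but
        # not declared as subject to AD/CV duties.
        if origin in ADCV_COUNTRIES[code] and not entry["declared_adcv"]:
            flags.append("undeclared AD/CV entry")
        # A unit value far below the norm may indicate undervaluation
        # (the 50 percent threshold here is an arbitrary example).
        unit_value = entry["value_usd"] / entry["quantity"]
        if unit_value < 0.5 * entry["typical_unit_value"]:
            flags.append("possible undervaluation")
    return flags

entry = {"tariff_code": "0306.17", "country_of_origin": "VN",
         "declared_adcv": False, "value_usd": 1000.0,
         "quantity": 500, "typical_unit_value": 9.0}
print(flag_entry(entry))  # ['undeclared AD/CV entry', 'possible undervaluation']
```

In practice, as the report notes, such flags are only the starting point; each would still require the verification techniques listed above before any enforcement action.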
In cases where CBP is able to verify evasion, its options for taking enforcement action to deter evasion include (1) pursuing the collection of evaded duties, (2) imposing civil penalties, (3) conducting seizures, and (4) referring cases to ICE for criminal investigation. As we discuss later in this report, CBP lacks complete data on the amount of evaded duties it has pursued and collected in cases of evasion. From fiscal years 2007 to 2011, CBP assessed 252 civil penalties totaling about $208 million against 237 importers that evaded AD/CV duties. Over the same period, CBP also made 33 seizures related to AD/CV duty evasion, with a total domestic value of nearly $4 million. In instances where CBP suspects that criminal laws may have been violated, it can refer cases to ICE for criminal investigation. Between fiscal years 2007 and 2011, ICE investigations of AD/CV duty evasion led to 28 criminal arrests, 85 indictments, and 37 criminal convictions. For example, CBP officials at one port we visited described a case in which they had confirmed evasion of AD/CV duties on wood flooring and were also preparing to assess civil penalties on 14 importers. At another port we visited, CBP officials described a case that began with an anonymous fax alleging evasion of the AD duties on steel nails from China. After reviewing import data, the officials were able to confirm that the importer named in the allegation had brought an entry of steel nails into their port and that the importer’s broker had filed the entry as not subject to AD duties.
Because the AD duty order on steel nails from China provides an exemption for roofing nails, the port officials then sent a formal request for information to the importer to ask for a sample of the steel nails imported, which the importer provided. The port officials sent the sample to a CBP laboratory to determine if the nails provided were roofing nails or not. After the laboratory determined that the sample nails were not roofing nails, the port officials concluded that the steel nails were subject to the AD duty order and, consequently, should have been declared as such. The officials subsequently told us that this would result in a penalty against the importer and that 34 additional entries by the importer at six ports were also under review for evasion. Two types of factors affect CBP’s efforts to detect and deter AD/CV duty evasion. First, CBP faces several external challenges in attempting to gather conclusive evidence of evasion and deter parties from evading duties. Second, although interagency communication has improved, and CBP has encouraged the use of higher bonding requirements to protect revenue, gaps in information sharing with Commerce and within CBP may limit the effectiveness of these initiatives. Several challenges mostly outside of CBP’s control impede its efforts to prove that evasion has occurred and deter parties from evading AD/CV duties. These challenges include (1) the inherent difficulty of verifying evasion conducted through clandestine means; (2) limited access to evidence of evasion located in foreign countries; (3) the highly specific and sometimes complex nature of products subject to AD/CV duties; (4) the ease of becoming an importer of record, which evaders can exploit; and (5) the limited circumstances under which CBP can seize goods brought in through evasion. CBP officials we met with stated that verifying evasion of AD/CV duties is one of the agency’s most challenging and time-consuming trade enforcement responsibilities.
As these officials emphasized, proving that evasion is occurring is a key precondition for taking enforcement action against importers evading AD/CV duties. However, because AD/CV duty evasion is inherently deceptive and clandestine in nature, it can be extremely difficult for CBP to gather conclusive evidence to prove that evasion is occurring. According to CBP, not only can different methods of evasion be employed at once—often involving the collusion of several parties, including the manufacturer, shippers, and importer— but entities engaging in evasion are using increasingly complex schemes. In particular, CBP officials identified the growing use of illegal transshipment as a key concern, noting that the Internet has made it very easy for importers to find companies willing to transship goods subject to AD/CV duties through third countries to mask the goods’ true country of origin. Because such schemes often involve adding false markings and packaging designed to mimic legitimate production in other countries, it can be very difficult for CBP to determine a product’s country of origin through visual inspection or through reviews of shipping documents. Undervaluation can be similarly difficult to prove, according to CBP, especially if the producer and importer collude to create false values. In addition to being inherently difficult, verifying evasion of AD/CV duties can also be very time-consuming. According to CBP, it can easily take over a year or more to collect the evidence needed to verify a potential case of evasion. For example, CBP’s ability to target additional shipments from an importer suspected of evading duties hinges on whether or not importation is ongoing. However, CBP documentation notes that shipments of some goods may be seasonal in nature, resulting in months of inactivity until the next shipment can be targeted. 
Additionally, in cases where CBP requests additional information from the importer, the importer has 30 days in which to respond to the request, but CBP can extend the deadline in additional 30-day increments if the importer fails to respond or needs more time to gather the required information. Similarly, according to CBP, it typically takes up to 30 days to conduct a laboratory analysis of a product sample, but it can take up to 120 days if, for instance, new analytical methods need to be developed. CBP officials stated that their audits of importers suspected of evading AD/CV duties are also time-consuming in nature, taking nearly 8 months to complete on average. Given these timelines— and the fact that CBP may need to use several such verification techniques to successfully prove a single case of evasion—the process of proving evasion may become quite lengthy. According to CBP and ICE officials, they have limited access to evidence located in foreign countries that can be vital to proving that evasion has occurred, particularly in cases of illegal transshipment. These officials explained that collecting customs documents from foreign counterparts or gaining access to facilities in a foreign country listed as the country of origin for a suspicious entry can help them prove that the goods in question originated elsewhere. For example, ICE officials investigating a case concerning Chinese honey suspected of being illegally transshipped through Thailand helped determine that evasion occurred, in part by visiting the sites in Thailand where the honey was allegedly produced and determining that the facilities were not honey manufacturing plants (see fig. 5). Similarly, CBP laboratory scientists explained that their ability to use chemical analysis to determine whether an importer falsely declared a good’s country of origin is contingent on gathering reference samples from as many countries as possible for comparison purposes. 
To collect information located outside of U.S. jurisdiction, CBP and ICE need to obtain the permission of host nation governments. However, both CBP and ICE explained that the level of host nation cooperation varies. According to ICE, even when the United States has bilateral agreements in place to share customs information, the extent of information shared by foreign counterparts varies by country. For example, ICE officials stated that although most of their investigations of evasion involve goods from China—with which the United States has a customs cooperation agreement in place—they have never received permission to visit facilities in China as part of their investigations. Similarly, according to ICE officials, although the United States has bilateral agreements with several countries that are thought to be common transshipment points— such as Indonesia, India, and the Philippines—ICE’s ability to visit these and other countries during the course of investigations depends on factors such as each country’s political climate, the nature of its bilateral relationship with the United States, and the extent to which the host nation government has ties to the company or industry under investigation. CBP laboratory scientists have also had mixed results in gaining access overseas. They noted that the Indonesian government recently allowed them access to collect samples of shrimp from Indonesian producers. However, the Malaysian government initially gave them approval to visit honey and shrimp producers in their country but ultimately rescinded its approval without explanation. CBP officials also noted that although the U.S. free trade agreement with Singapore— another country thought to be a common transshipment point—allows for cooperation on customs issues, the agreement explicitly excludes matters related to AD/CV duties. 
According to CBP officials, the highly specific and complex nature of some products subject to AD/CV duties can make it extremely difficult to identify evasion. As noted earlier, most of CBP’s targeting for potential evasion involves examining entries that have the same Harmonized Tariff Schedule codes as products subject to AD/CV duties in order to look for any not filed as subject to AD/CV duties. For example, to target potential evasion of the AD duties on saccharin from China, CBP can examine entries from China that have the tariff code for saccharin and determine if any have been filed as not subject to AD/CV duties. However, in some cases, no unique tariff code exists for the specific products that Commerce investigated and issued a duty order for; rather, these products fall under the same tariff code as a broader category of products that are not subject to AD/CV duties. Consequently, the tariff codes listed on a given entry may be insufficient for CBP to determine if goods imported as part of that shipment are subject to AD/CV duties; additional information may be needed. An example is petroleum wax candles from China, which are subject to AD duties. Because there is no specific tariff code for petroleum wax candles—only one for candles—CBP cannot conclude, absent other evidence, that an entry from China under the tariff code for candles is petroleum wax candles, as it may be another type of candle that is not subject to AD duties. Instead, CBP has to turn to other means of verification to attempt to gather conclusive evidence that the entry is petroleum wax candles and, therefore, subject to AD duties. For example, CBP may decide to ask the importer for additional information, such as product invoices containing further details on the type of candles imported. CBP may also target additional shipments of candles and potentially collect a sample for laboratory analysis. 
However, as described earlier, each of these steps would take additional time, lengthening the verification process. According to CBP officials, the complex nature of some products covered by AD/CV duty orders can also make it difficult for CBP personnel to visually identify the products during cargo exams. For instance, CBP officials stated that AD/CV duty orders on steel typically cover steel products with a certain chemical composition—an aspect that cannot be determined through visual inspection. Another example is the AD/CV duty order on honey, which applies not only to natural honey and flavored honey, but also to honey blends that contain more than 50 percent natural honey by weight—a characteristic that cannot be ascertained by sight alone. In such cases, CBP personnel can extract a sample from the shipment and send it for laboratory analysis. However, CBP laboratory scientists stated that chemical analysis does not always return a definitive judgment of whether or not a product sample analyzed should fall within the scope of an AD/CV duty order. For example, chemical analysis of a honey blend can return inconclusive results if certain additives are present in the blend. CBP officials stated that CBP cannot take enforcement action without conclusive proof of evasion. Entities engaging in evasion can exploit the ease of becoming an importer of record, impeding CBP’s ability to target and take deterrent action against them. As noted earlier, importers of record are responsible for paying all estimated duties, taxes, and fees on products when they are brought into the United States. However, importers seeking to evade AD/CV duties can exploit the ease of becoming an importer of record in several ways. First, according to CBP officials, companies can easily adopt new importer names and identification numbers, making it difficult for CBP to track their importing activity and gather evidence needed to prove that they are engaging in evasion. 
CBP officials stated that they suspect some importers evading AD/CV duties set up new names and identification numbers in advance to have ready for use in anticipation of CBP targeting efforts. Second, as our prior work has noted, CBP collects a minimal amount of information from companies applying to be importers of record, which evaders can take advantage of to elude CBP efforts to locate and collect revenues from them. For instance, companies are not subject to any credit or background checks before being allowed to import products into the United States. Third, foreign companies and individuals are allowed to import products into the United States, but CBP can have difficulty collecting duties and penalties from foreign importers—especially illegitimate ones—when the importers have no attachable assets in the United States. For example, as of February 2012, CBP had collected about $5 million, or about 2 percent, of the approximately $208 million it assessed in civil penalties between fiscal years 2007 and 2011. CBP attributed its collection difficulties, in part, to challenges experienced in collecting from foreign importers of record. CBP officials stated that, due to this risk of noncollection, a factor they consider when deciding whether or not to impose a penalty against a confirmed evader is whether or not it has assets in the United States. As we have previously reported, CBP or Congress could heighten the requirements for becoming an importer of record; however, such action could lead to unintended consequences. Heightened requirements could include mandatory financial or background checks. However, performing these checks would create a significant new burden on CBP, which would need to conduct or oversee them. Additionally, it is possible that, to ensure fairness, the heightened requirements would be imposed on all importers.
Given that the vast majority of importers comply with customs laws and pay their duty liabilities, such a broad approach may not be cost-effective and could potentially restrict trade. CBP is able to seize goods imported through evasion under limited circumstances. CBP officials explained that unlike goods that are illegal to import, such as those violating import safety or intellectual property laws, goods imported through evasion are not necessarily illegal to import. Specifically, according to CBP, although misclassification and undervaluation are commonly used evasion schemes, U.S. trade law limits the seizure of shipments that are misclassified or undervalued. By contrast, CBP is permitted to seize shipments brought in through other forms of evasion, such as through falsifying the country of origin of goods (illegal transshipment) or failing to declare goods on entry documents (smuggling). Of the 33 seizures related to AD/CV duty evasion that CBP made between fiscal years 2007 and 2011, at least 28 were related to false country of origin or smuggling. For instance, CBP officials at one port seized a shipment of plastic bags following a cargo exam that revealed the shipment’s country of origin had been falsified. However, as CBP has testified before Congress, entities engaging in evasion often use false markings and packaging that make it very difficult to determine country of origin through visual examination alone, complicating the task of establishing grounds for seizure. Moreover, as noted earlier, verifying evasion is an inherently difficult and time-consuming process. CBP officials stated that, by the time CBP is able to verify an instance of evasion, the associated goods typically have already entered the United States and cannot be seized. Communication between Commerce and CBP has improved since our 2008 report on AD/CV duties, and CBP has encouraged port officials to use higher bonding requirements to protect AD/CV duty revenue when they suspect incoming shipments of evasion.
However, CBP lacks information from Commerce that would enable it to better plan its workload and minimize the burden of the U.S. retrospective system on its efforts to address evasion. Additionally, CBP has neither a policy nor a mechanism in place for a port requiring a higher bond to share this information with other ports in case an importer attempts to “port-shop,” i.e., withdraws its shipment and attempts to make entry at another port to avoid the larger bond requirement. CBP officials cited the administrative burden of the U.S. retrospective system as a factor that diminishes the resources they have available for detecting and deterring evasion of AD/CV duties. Under the U.S. retrospective system, importers that properly declare their products as subject to AD/CV duties (i.e., do not evade) pay the estimated amount of duties when products enter the United States, but the final amount of duties owed is not determined until later. The documentation for the entries remains at the ports while CBP awaits liquidation instructions conveying the final duty rate from Commerce. Commerce’s review to determine the final duty rate—a process that culminates with the issuance of liquidation instructions—typically takes up to 18 months to complete and can take months or years longer if litigation is involved. At one port we visited, CBP officials stated that they had approximately 20,000 entries awaiting instructions to liquidate for food-related products alone. At another port, officials showed us the file room where they store entries awaiting liquidation instructions (see fig. 6). Moreover, each of the thousands of entries subject to AD/CV duties must be liquidated through manual data entry, which is resource- and time-intensive and diverts CBP personnel from their efforts to detect and deter evasion. Under U.S.
law, CBP has 6 months to liquidate entries from the time that it receives notice of the lifting of suspension of liquidation. According to CBP officials, this 6-month deadline can be very difficult to meet, especially when a large volume of imports needs to be liquidated. In order to begin liquidating entries, CBP must first receive liquidation instructions from Commerce. Since our 2008 report, Commerce has taken steps to improve the transmission of its liquidation instructions to CBP. We found in 2008 that, about 80 percent of the time, Commerce failed to send liquidation instructions within its self-imposed deadline of 15 days after the publication of the Federal Register notice. Furthermore, we reported that the instructions were sometimes unclear, thereby causing CBP to take extra time to obtain clarification. Consequently, we identified untimely and unclear liquidation instructions from Commerce as an impediment to CBP’s ability to liquidate entries (GAO-08-391). In response to our recommendation to identify opportunities to improve liquidation instructions, Commerce took steps to improve the transmission of liquidation instructions to CBP. For instance, Commerce deployed a system for tracking when it sends liquidation instructions, which, according to Commerce, has greatly improved its timeliness. Documentation from Commerce indicates that, in the first half of fiscal year 2012, Commerce sent liquidation instructions on a timely basis more than 90 percent of the time. In addition, Commerce and CBP jointly established a mechanism for CBP port personnel to submit questions directly to Commerce regarding liquidation issues. According to CBP officials, these steps have improved the ability of port personnel to ask Commerce to clarify its liquidation instructions. However, CBP testified in May 2011 that, without advance notice from Commerce on upcoming liquidation instructions, it can be very difficult for CBP to make workforce planning and staffing decisions.
CBP officials at headquarters and at ports we visited stated that liquidation instructions arrive with little warning but need to be acted on immediately due to the 6-month deadline for liquidating entries. They said that this sudden shift in workload diverts key personnel from efforts to address evasion to focus on manually liquidating thousands of entries instead. In the absence of advance notice from Commerce on upcoming liquidation instructions, CBP attempts to roughly estimate where its workload peaks will occur on the basis of the 18-month time frame within which Commerce typically completes liquidation instructions. However, CBP officials stated that no such estimation is possible in cases involving litigation, which are not subject to time frames. According to CBP, cases involving litigation are particularly burdensome because of the considerable length of time it can take to resolve some cases, during which an extremely large number of entries can accumulate at the ports—all of which CBP eventually has to attempt to liquidate within the 6-month deadline. However, Commerce does not currently inform CBP when a court reaches a decision on a case in litigation—information that would enable CBP to conduct some workload planning. According to CBP officials, since CBP is not a party to such cases, it would be helpful if Commerce provided them with some notification once decisions are reached. Commerce officials stated that they do not know when courts will reach decisions on cases in litigation, but said that they could work with CBP to identify opportunities to share information regarding the status of litigation. In response to a CBP request, Commerce recently provided CBP some information for the first time to help with workload planning. In June 2011, Commerce officials provided their counterparts in CBP headquarters with a list of instructions planned for issuance over the next 6 months. 
CBP officials at headquarters acknowledged receiving the list from Commerce, stating that, although the list did not address their need to know when courts reach decisions on cases involving litigation, they found it useful for general workload planning purposes. They noted that they would like to receive this type of list from Commerce on a quarterly basis to have more up-to-date information on hand to incorporate into their workload planning decisions. Commerce officials stated that they would be willing to work with CBP to develop a schedule for sharing this list on a regular basis. CBP has encouraged the use of higher bonding requirements, called single transaction bonds (STB), to protect AD/CV duty revenue from the risk of evasion; however, it has not ensured that a port requiring an STB shares this information with other ports in case an importer withdraws its shipment and attempts to make entry at another port to avoid the STB. As noted earlier, all importers are required to post a security, usually a general obligation bond, when they import products into the United States. This bond is an insurance policy protecting the U.S. government against revenue loss if an importer defaults on its financial obligations as well as ensuring compliance with the law. However, given CBP’s concerns that this general bond inadequately protects AD/CV duty revenue, CBP has encouraged port officials to protect additional revenue by requiring STBs for individual shipments they suspect of evasion. The amount of the STB is generally one to three times the total entered value of the merchandise plus duties, taxes, and fees, depending on the revenue risk. According to CBP officials, STBs serve as additional insurance in cases where CBP has not been able to collect enough evidence before a shipment’s arrival to prove that evasion is occurring, but where enough suspicion exists about the shipment to warrant protection of the anticipated AD/CV duty revenue. 
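As a rough numeric sketch of how the STB amount described above scales (the multiplier choice, dollar figures, and the exact parenthesization of "entered value plus duties, taxes, and fees" are assumptions for this example; CBP sets the actual amount case by case based on revenue risk):

```python
def stb_amount(entered_value, duties_taxes_fees, multiplier=1.0):
    """Single transaction bond amount: generally one to three times the
    total entered value of the merchandise plus duties, taxes, and fees.
    (One reading of that formula; the precise basis is set by CBP.)"""
    if not 1.0 <= multiplier <= 3.0:
        raise ValueError("multiplier is generally between 1 and 3")
    return multiplier * (entered_value + duties_taxes_fees)

# Hypothetical shipment: $100,000 entered value and $35,000 in estimated
# duties, taxes, and fees, bonded at the maximum multiplier of 3.
print(stb_amount(100_000, 35_000, multiplier=3.0))  # 405000.0
```

The point of the example is the scale: an STB can put several times the shipment's value at stake, which is why an importer engaged in evasion may prefer to withdraw the shipment rather than post the bond.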
An importer that is required to obtain an STB can either post the bond in order to enter its shipment or opt against obtaining the bond and withdraw its shipment. If an importer decides to post the STB, and CBP later confirms that AD/CV duties are indeed owed, CBP first tries to collect from the importer. However, if CBP is unable to collect from the importer, it can collect significantly more money from the surety (insurance) company that underwrote the STB than it would typically be able to collect from the surety on a general bond, given the larger amount of revenue protected by the STB. While CBP has encouraged the use of STBs to protect revenue related to imports suspected of AD/CV duty evasion, vulnerabilities exist due to gaps in port-level information sharing. CBP gives each port the discretion to decide when to require an STB. However, CBP has no policy or mechanism in place for ports requiring such a bond to share this information with other ports in case an importer attempts to port-shop, i.e., withdraws its shipment and attempts to make entry at another port to avoid the larger bond requirement. Instead, CBP port officials currently rely on informal e-mail and telephone communication to notify other port officials of importers potentially seeking to port-shop. Officials we met with cited specific instances where this informal approach had been ineffective in notifying other ports of suspected evasion before the importer could enter the goods at another port. For example, CBP officials at one port described a case where an importer that decided against posting an STB at their port was able to make entry in another port before they were able to e-mail a warning about that particular importer to other ports.
In another case, an importer succeeded in entering a shipment of furniture in Newark after officials at the initial port of entry on the West Coast failed to notify other ports that the importer had decided to withdraw its entry instead of posting an STB. In both cases, CBP port officials suspected evasion but did not take additional action in time to warn other ports of entry about the potential for port-shopping. Although CBP is currently formulating policy to guide the use of STBs, the policy may not fully address the risk of port-shopping. In February 2012, CBP officials stated that they were in the process of completing a policy that will further encourage port officials to use STBs and provide them with guidance on circumstances under which the use of STBs is appropriate. Officials stated that the policy will also instruct officials at a port requiring an STB to review any other shipments from the importer in question before releasing them. They added that they had not yet decided whether or not to automatically instruct ports nationwide to conduct the same level of review. While CBP has improved its performance measures for addressing AD/CV duty evasion and enhanced its monitoring of STBs, it does not systematically track or report key outcome information that CBP leadership and Congress could use to assess and improve CBP’s efforts to detect and deter AD/CV duty evasion. First, CBP cannot readily produce key data on AD/CV duty evasion, such as the number of confirmed cases of evasion, which it could use to better inform and manage its efforts. Second, CBP does not consistently track or report on the outcomes of allegations of evasion it receives from third parties. As we have previously reported, internal control is a major part of managing an organization and should be generally designed to assure that ongoing monitoring occurs in the course of normal operations.
Furthermore, our prior work has noted the need for agencies to consider the differing information needs of various users, such as agency top leadership and Congress. Specifically, as we reported in March 2011, the Government Performance and Results Modernization Act of 2010 underscores the importance of ensuring that performance information will be both useful and used in decision making. In the past year, CBP has made enhancements in the following two areas to track its efforts related to combating AD/CV duty evasion: CBP has taken steps to improve the performance measures for its efforts to detect and deter AD/CV duty evasion. CBP told us that in fiscal years 2010 and 2011, a majority of the performance measures for AD/CV duty enforcement either lacked sufficient data or were declared to be “not measurable.” For example, CBP considered one measure for fiscal year 2011—”analysis completed and enforcement alternatives concurred”— too broad to collect data and report on, given the large number of CBP offices that conduct analysis and enforcement. In another example, CBP did not provide a response to the fiscal year 2011 performance measure related to the results of cargo exams because, according to CBP officials, cargo exams are conducted at the local level and not tracked, creating a dearth of reportable data. In addition, CBP was unable to track and assess its efforts over time because its measures were inconsistent from year to year. By contrast, CBP’s fiscal year 2012 action plan includes a new set of performance metrics with measurable targets consistent from fiscal years 2012 through 2017. For example, the performance measure for penalties issued has targets to increase the amount of penalties issued each year by 10 or 15 percent. There are similar measures with targets for increasing the percentage of AD/CV duties collected and the number of audits related to AD/CV duties. CBP is working to improve its ability to track and report on the use of STBs. 
In June 2011, after finding that CBP could not determine the total number of STBs used at the ports, the Department of Homeland Security Inspector General recommended that CBP appoint a centralized office responsible for reporting STB-related activities and monitoring results. The Inspector General’s report also recommended that CBP automate the STB process to provide enhanced tracking ability. CBP concurred with these recommendations, stating that it had begun the process of centralizing STB-related roles and responsibilities and developing a system to automate the STB process. Moreover, one of the new measures in the fiscal year 2012 action plan tracks the number of STBs used for AD/CV duty evasion. While CBP has reported anecdotes about its successes in addressing AD/CV duty evasion and collects some statistics on its efforts, it lacks key data that it could use to assess and improve its management practices and that could enhance congressional oversight. Over the past year, CBP has publicly reported anecdotes of successful efforts to detect and deter AD/CV duty evasion. For example, in testimony before Congress in May 2011, the Assistant Commissioner for CBP’s Office of International Trade described five recent cases where CBP and ICE uncovered instances of evasion and penalized those responsible. Similarly, in a report to Congress on fiscal year 2010 efforts to enforce AD/CV duties, CBP cited eight cases that led to enforcement action against parties engaging in evasion. CBP has also produced publicly available videos illustrating a successful case where CBP worked with ICE to arrest and convict an importer who evaded the AD/CV duties on wire hangers. CBP collects some statistics on its efforts to detect and deter AD/CV duty evasion but lacks other key data on these efforts. For example, CBP provided us with statistics on civil penalties and seizures related to AD/CV duty evasion. 
However, CBP lacks data on the total number of confirmed cases of AD/CV duty evasion; the total amount of duties assessed and collected for confirmed cases of evasion; the country of origin, product type, and method of evasion for each confirmed case of evasion; and the number of confirmed cases of evasion involving a foreign importer of record. CBP attributed this lack of data to the absence of a policy requiring officials to record confirmed cases of AD/CV duty evasion. CBP officials explained that although CBP has a database in which instances of evasion could be recorded, current policy does not require officials to record such instances. Consequently, CBP cannot conduct a simple data query to identify all confirmed cases of evasion. Without the ability to identify cases of evasion, CBP cannot easily access other related data on AD/CV evasion that could help improve management decisions and oversight. For example, CBP is currently unable to produce data on the total amount of duties assessed and collected for confirmed cases of evasion—figures that would provide CBP leadership and Congress visibility over some of the results of CBP’s efforts to address evasion. Similarly, comprehensive data on the country of origin, product type, and method of evasion for each confirmed case of evasion could potentially help CBP identify trends and shifts in evasive activity and make adjustments accordingly. CBP also lacks complete data on the country of origin and product type associated with the 252 civil penalties it imposed for AD/CV duty evasion between fiscal years 2007 and 2011 (see fig. 7). CBP attributed these missing data items to CBP personnel not recording them in CBP’s automated system for tracking penalties. Due to these missing data items, CBP lacks a complete picture of the countries and commodities involved in its penalty cases—information it could use to guide and improve its efforts. 
For example, CBP could identify which types of commodities have led to penalties most often and decide whether or not to focus more resources and detection efforts on those types of commodities. According to CBP officials, CBP addresses all allegations of AD/CV duty evasion it receives, including e-Allegations received online, but it does not routinely track or report on the outcomes of these allegations. As a result, Congress and industry stakeholders lack information about the outcomes of the allegations, which both parties have cited as a cause of concern. Data from CBP indicate that it generally assigns allegations to its national targeting staff for AD/CV duty issues (i.e., the NTAG) within 2 days of receipt. The NTAG then assesses the validity of the allegation using targeting and other analytical tools. If the NTAG determines that the allegation may be valid, it will typically refer the allegation to the appropriate port or to ICE for further investigation and possible enforcement action. As of September 2011, CBP had confirmed or referred nearly one-quarter of the approximately 400 allegations it received from 2008 to August 2011. About half could not be validated, and another one-quarter were still under analysis. Although CBP has stated that it addresses all allegations of AD/CV duty evasion it receives, it has reported little information to date on the outcomes of its efforts to follow up on these allegations. For instance, CBP’s report to Congress on AD/CV duty enforcement efforts in fiscal year 2010 mentions that CBP has received hundreds of allegations from the trade community, but the report includes no information on the outcomes of those allegations. In January 2011, in response to a congressional request, CBP produced a spreadsheet of the allegations it had received since June 2008. CBP officials told us that this spreadsheet was created upon request and is not something CBP updates or uses for management or policy purposes. 
While this document lists certain details, such as the source of each allegation, and identifies allegations of evasion that CBP confirmed as valid, it does not include any information on the associated enforcement outcomes. During the course of our review, CBP provided us with expanded versions of the spreadsheet in response to our request for details on the results of the allegations. However, these expanded versions provide little insight into the results of the allegations. For instance, the most recent version of the spreadsheet that we received, from September 2011, documents the enforcement outcome for only one of the 24 allegations labeled as “allegation confirmed.” CBP was also unable to determine if the allegations referred to ports and ICE by the NTAG were subsequently confirmed as valid or resulted in enforcement outcomes. CBP’s limited reporting on the outcomes of allegations is due, in part, to inconsistent, decentralized tracking of such information. CBP officials stated that once the NTAG has referred an allegation to a port or to ICE for further action, CBP considers the allegation to be closed and may or may not follow up to track its outcome. While CBP creates a record within its Commercial Allegation and Reporting System for each allegation it receives, there is no requirement for either the NTAG or the entity receiving the allegation referral to update these records with details on its enforcement outcomes. Instead, port officials and ICE store information on enforcement outcomes in other data systems that are not linked to the Commercial Allegation and Reporting System. CBP officials at headquarters told us that aggregating data from these various systems to link allegations with their associated outcomes would be difficult and time-consuming. Additionally, according to ICE, it does not specifically track cases generated as a result of allegations referred by CBP.
Consequently, since ICE cannot identify which of its cases involve allegations referred from CBP, it also cannot identify the associated outcomes. An additional cause of CBP’s limited reporting on the outcomes of allegations is legal restrictions on the types of information it can share. During our review, we met with representatives of a coalition of domestic industries affected by AD/CV duty evasion. Some of these representatives stated that they had submitted allegations of evasion to CBP and expressed frustration that although they had requested updates from CBP on the outcomes of the allegations they submitted, CBP had not provided them with the information requested. CBP officials attributed this, in part, to the Trade Secrets Act, which they said restricts their ability to disclose the specific kinds of information requested. Additionally, CBP officials stated that they cannot disclose information about allegations involving active ICE investigations. Furthermore, CBP does not report on the results of its efforts at an aggregate level, which would avoid divulging restricted information while keeping key stakeholders informed. CBP officials stated that they are currently exploring ways to legally share what information they can on allegations with the parties that filed them. Evasion of AD/CV duties undermines U.S. AD/CV duty laws—the intent of which is to level the economic playing field for U.S. industry—and deprives the U.S. government of revenues it is due. While CBP employs a variety of techniques to detect and deter such evasion, its efforts are significantly hampered by a number of factors primarily beyond its control. 
These include the inherently difficult and time-consuming process of uncovering evasive activity conducted through clandestine means, inconsistent access to foreign countries that limits CBP’s ability (as well as ICE’s) to gather necessary evidence, and the ease with which importers attempting to evade duties can change names and identification numbers to elude detection. Nonetheless, some improvements have been made since we last reported, including better communication between Commerce and CBP and CBP’s encouragement of the use of higher bonding requirements to protect additional AD/CV duty revenue in instances where it suspects evasion. However, CBP lacks information from Commerce that it needs to better plan its workload and mitigate the impact of the time- and resource-intensive liquidation process on its efforts to address evasion. Further, CBP has no policy or mechanism for port officials to minimize the risk of port-shopping by notifying other ports about their use of higher bonding requirements. Unless these gaps in information sharing are closed, these recent initiatives may be compromised, thereby limiting the effectiveness of CBP’s efforts to address AD/CV duty evasion. CBP has also made some improvements in managing its efforts to address AD/CV duty evasion, including by developing better performance measures and monitoring its use of higher bonding requirements. However, it lacks key data on AD/CV duty evasion, including on confirmed cases of evasion and penalties, which could help it assess and improve its approach to addressing evasion and also inform agency and congressional decision makers about its efforts. Moreover, CBP has neither tracked nor reported the outcomes of the allegations of evasion it has received from third parties. Without improved tracking and reporting, agency leadership, Congress, and industry stakeholders will continue to have insufficient information with which to oversee and evaluate CBP’s efforts. 
To enhance CBP’s efforts to address AD/CV duty evasion and facilitate oversight of these efforts, we make the following recommendations: First, to help ensure that CBP receives the information it needs from Commerce to plan its workload and mitigate the impact of the liquidation process on its efforts to address evasion, the Secretary of Commerce should work with the Secretary of Homeland Security to identify opportunities for Commerce to regularly provide CBP advance notice on liquidation instructions, and notify CBP when courts reach decisions on AD/CV duty cases in litigation. Second, to help minimize the risk of port-shopping by importers seeking to avoid higher bond requirements, the Secretary of Homeland Security should direct CBP to create a policy and a mechanism for information sharing among ports regarding the use of higher bond requirements. Third, to inform CBP management and to enable congressional oversight, the Secretary of Homeland Security should ensure that CBP develop and implement a plan to systematically track and report on instances of AD/CV duty evasion and associated data—such as the duties assessed and collected, penalties assessed and collected, and the country of origin, product type, and method of evasion for each instance of evasion—and the results, such as enforcement outcomes, of allegations of evasion received from third parties. We provided a draft of this report to the Secretary of the Department of Homeland Security, the Secretary of Commerce, and the Secretary of the Treasury for their review and comment. We received technical comments from the Departments of Homeland Security, Commerce, and Treasury, which we incorporated where appropriate. We also received written comments from the Departments of Homeland Security and Commerce, which are reprinted in appendixes II and III, respectively. The Department of the Treasury did not provide written comments. 
In commenting on a draft of this report, the Department of Homeland Security concurred with our recommendations addressed to the department that CBP (1) create a policy and a mechanism for information sharing among ports regarding the use of higher bond requirements and (2) develop and implement a plan to track and report systematically instances of AD/CV duty evasion and the results of CBP’s enforcement actions. The Department of Commerce generally concurred with the recommendation addressed to the department to work with CBP to identify opportunities for Commerce to (1) regularly provide CBP with advance notice of liquidation instructions and (2) notify CBP when courts reach decisions on AD/CV duty cases in litigation. In its response, Commerce stated that both CBP and Commerce receive copies of injunctions from the U.S. Court of International Trade and attached a copy of a preliminary injunction to demonstrate how both agencies are generally served copies of the injunctions. However, when a court orders an injunction, such as the one Commerce provided, Commerce and CBP are enjoined from issuing liquidation instructions or otherwise causing or permitting liquidation of the entries that are the subject of the litigation. As a result, the injunction does not provide CBP with the information it needs to help with workload planning because it is not a court action that constitutes notice of the lifting of a suspension of liquidation, which would start the 6-month period in which CBP must liquidate entries. While an injunction can provide CBP information to help with workload planning, it does not address CBP’s concern for regular advance notice of forthcoming liquidation instructions.
CBP needs information from Commerce on when final court decisions are reached to help enable the agency to better plan its workload and help mitigate the administrative burden it faces in processing AD/CV duties—an effort that diminishes the resources it has available to address evasion. We are sending copies of this report to the appropriate congressional committees, the Departments of Homeland Security, Commerce, and the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To examine how the Department of Homeland Security’s U.S. Customs and Border Protection (CBP) detects and deters the evasion of antidumping and countervailing (AD/CV) duties, we examined agency documents that outline CBP’s process and methods for identifying evasion of AD/CV duties; reviewed laws and other documents that identify the enforcement options CBP uses to deter evasion; and analyzed data from CBP and U.S. Immigration and Customs Enforcement (ICE) on deterrence activities such as civil penalties, seizures, criminal arrests, indictments, and criminal convictions. To identify factors that affect CBP’s efforts to detect and deter AD/CV duty evasion, we examined CBP documents that highlight the challenges and the timeline associated with verifying evasion; analyzed data on the amount of civil penalties CBP has collected from importers evading AD/CV duties; and reviewed legislation governing CBP’s use of seizures, internal memos on the use of single transaction bonds, and previous GAO reports on AD/CV duties.
To assess the extent to which CBP tracks and reports on its efforts to detect and deter AD/CV duty evasion, we reviewed CBP annual plans that identify its performance measures for addressing AD/CV duty evasion; documents that show CBP’s performance against these measures; CBP testimony and videos publicizing successful efforts to address evasion; a CBP report to Congress on fiscal year 2010 efforts to enforce AD/CV duties; and a report by the Department of Homeland Security Inspector General on CBP’s bonding process, including its use and tracking of single transaction bonds. Additionally, we analyzed data on civil penalties CBP has imposed for AD/CV evasion and allegations of evasion received from third parties. Additionally, in the Washington, D.C., area, we discussed our objectives with officials in CBP’s Offices of International Trade, Field Operations, and Intelligence and Investigative Liaison; ICE; and the Departments of Commerce and the Treasury, as well as a coalition of U.S. industries affected by AD/CV duty evasion. To obtain a more in-depth understanding of U.S. efforts to detect and deter AD/CV duty evasion, we conducted fieldwork at the ports of Miami, FL; Seattle, WA; and Los Angeles, CA. We selected the port of Miami due, in part, to its proximity to CBP’s National Targeting and Analysis Group (NTAG) for AD/CV duty issues; the port of Seattle due, in part, to the high number of civil penalties it imposed for AD/CV duty evasion over the last 5 years; and the port of Los Angeles because it processed the most imports subject to AD/CV duties, by value, of any U.S. port. At each port, we met with officials from CBP and ICE to discuss the efforts they undertake to detect and deter AD/CV duty evasion at their port, the challenges they face in detecting and deterring evasion, and the process they use to track and report the results of these efforts. 
We also met with representatives of the NTAG for AD/CV duty issues in Plantation, FL, to discuss their methods for detecting evasion, both through their own targeting efforts and through analyzing allegations of evasion they receive from third parties. To determine the reliability of the data we collected on AD/CV duty orders, civil penalties, seizures, ICE enforcement outcomes (i.e., arrests, indictments, and criminal convictions), and allegations received from third parties, we compared and corroborated information from different sources; checked the data for reasonableness and completeness; and asked agency officials how the data are collected, tracked, and reviewed for accuracy. Based on the checks we performed, our discussions with agency officials, and the documentation the agencies provided to us, we determined that the data we collected were sufficiently reliable for the purposes of this engagement. We conducted this performance audit from June 2011 to May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Christine Broderick (Assistant Director), Aniruddha Dasgupta, Julia Jebo, Diahanna Post, Loren Yager, Ken Bombara, Debbie Chung, Martin De Alteriis, Etana Finkler, and Grace Lui made key contributions to this report. Joyce Evans, Jeremy Latimer, Alana Miller, Theresa Perkins, Jena Sinkfield, Sushmita Srikanth, Cynthia S. Taylor, and Brian Tremblay provided technical assistance.
The United States imposes AD/CV duties to remedy unfair foreign trade practices, such as unfairly low prices or subsidies that cause injury to domestic industries. Examples of products subject to AD/CV duties include honey from China and certain steel products from South Korea. Importers that seek to avoid paying appropriate AD/CV duties may employ methods of evasion such as illegally transshipping an import through a third country to disguise its true country of origin or falsifying the value of an import to reduce the amount of duties owed, among others. AD/CV duty evasion can harm U.S. companies and reduce U.S. revenues. CBP, within the Department of Homeland Security, leads efforts to detect and deter AD/CV duty evasion. GAO was asked to examine (1) how CBP detects and deters AD/CV duty evasion, (2) factors that affect CBP’s efforts, and (3) the extent to which CBP tracks and reports on its efforts. To address these objectives, GAO reviewed CBP data and documents; met with government and private sector representatives in Washington, D.C.; and conducted fieldwork at three domestic ports. U.S. Customs and Border Protection (CBP) detects and deters evasion of antidumping and countervailing (AD/CV) duties through a three-part process that involves (1) identifying potential cases of evasion, (2) attempting to verify if evasion is occurring, and (3) taking enforcement action. To identify potential cases of evasion, CBP targets suspicious import activity, analyzes trends in import data, and follows up on allegations from external sources. If CBP identifies a potential case of evasion, it can use various techniques to attempt to verify whether evasion is occurring, such as asking importers for further information, auditing the records of importers suspected of evasion, and inspecting shipments arriving at ports of entry.
If CBP is able to verify evasion, its options for taking enforcement action include (1) pursuing the collection of evaded duties, (2) imposing civil penalties, (3) conducting seizures, and (4) referring cases for criminal investigation. For example, between fiscal years 2007 and 2011, CBP assessed civil penalties totaling about $208 million against importers evading AD/CV duties. Two types of factors affect CBP’s efforts to detect and deter AD/CV duty evasion. First, CBP faces several external challenges in attempting to gather conclusive evidence of evasion and take enforcement action against parties evading duties. These challenges include (1) the inherent difficulty of verifying evasion conducted through clandestine means; (2) limited access to evidence of evasion located in foreign countries; (3) the highly specific and sometimes complex nature of products subject to AD/CV duties; (4) the ease of becoming an importer of record, which evaders can exploit; and (5) the limited circumstances under which CBP can seize goods evading AD/CV duties. Second, gaps in information sharing also affect CBP’s efforts. Although communication between CBP and the Department of Commerce (Commerce) has improved, CBP lacks information from Commerce that would enable it to better plan its workload and help mitigate the administrative burden it faces in processing AD/CV duties—an effort that diminishes its resources available to address evasion. Additionally, CBP has encouraged the use of larger bond amounts to protect AD/CV duty revenue from the risk of evasion, but CBP has neither a policy nor a mechanism in place for a port requiring a larger bond to share this information with other ports in case an importer withdraws its shipment and attempts to make entry at another port to avoid the higher bond amount.
While CBP has made some performance management improvements, it does not systematically track or report key outcome information that CBP leadership and Congress could use to assess and improve CBP’s efforts to deter and detect AD/CV duty evasion. First, CBP cannot readily produce key data, such as the number of confirmed cases of evasion, which it could use to better inform and manage its efforts. Second, CBP does not consistently track or report on the outcomes of allegations of evasion it receives from third parties. As GAO reported in March 2011, the Government Performance and Results Modernization Act of 2010 underscores the importance of ensuring that performance information will be both useful and used in decision making. Without improved tracking and reporting, agency leadership, Congress, and industry stakeholders will continue to have little information with which to oversee and evaluate CBP’s efforts to detect and deter evasion of AD/CV duties. To enhance CBP’s efforts to address AD/CV duty evasion and facilitate oversight of these efforts, GAO makes several recommendations, including that CBP create a policy and a mechanism for information sharing among ports regarding the use of higher bond amounts and develop and implement a plan to track and report on these efforts. CBP and the Department of Commerce generally concurred with GAO’s recommendations.
The PPA set stricter standards for appraisals and appraiser qualifications, established a penalty on appraisers who prepared appraisals that improperly supported deductions on income taxes, and lowered the threshold for determining certain misstatements of value on certain tax returns. In terms of noncash charitable contributions, the PPA defined a “qualified appraisal” as one that was conducted in accordance with generally accepted standards by a “qualified appraiser.” A “qualified appraiser” is defined as an individual who has earned an appraisal designation from a recognized professional appraiser organization or has met the minimum education and experience requirements set forth in the IRS regulations, and who regularly performs appraisals for compensation. For individuals, noncash charitable contributions are reported on Form 1040, U.S. Individual Tax Return, Schedule A, Itemized Deductions, and contributions of $500 or more must be itemized on Form 8283, Noncash Charitable Contributions. With certain exceptions, taxpayers claiming noncash contribution deductions of items or groups of similar items exceeding $5,000 must obtain qualified appraisals for the donated property, and report those on Form 8283, Section B (see app. II for more detail). The provisions concerning qualified appraisals do not apply to estate or gift taxes. For those taxes, IRS simply requires taxpayers to support property values with an appraisal, which may, but need not in every case, be a written appraisal by a professional appraiser. Estate taxes are reported on Form 706 and gift taxes on Form 709 (see app. II for more detail on how appraisals may appear on these forms). In general, the higher the appraised value of a noncash charitable contribution, the higher the deduction a taxpayer might claim. Conversely, the lower the appraisal for property reported on gift and estate tax returns, the less tax must be paid.
IRS has long had the authority to impose a penalty on a taxpayer for valuation misstatements included on a return, but prior to the PPA, IRS did not have specific authority to impose a penalty on the appraiser who prepared the valuation. The penalty rate has two levels related to the proportion of the misstatement. The PPA changed the thresholds for the two levels and increased the penalty rate for larger misstatements. The act also added an appraiser penalty, which applies to any person who prepared a misstated appraisal and knew, or reasonably should have known, that it would be used to support an individual income tax return. In 2007, TTCA made the appraiser penalty applicable for appraisals improperly supporting estate and gift tax returns. The responsibility for identifying cases with appraisals and staffing examinations on appraisals largely rests with IRS’s Small Business and Self-Employed (SB/SE) Division, which handles complex individual returns and gift and estate returns, and its Large Business and International (LB&I) Division, which handles partnership returns with assets greater than $10 million. Examination of appraisals typically will be conducted with field examination techniques. Appraiser penalty cases are audited separately from the taxpayer examination cases in which IRS may have first noticed improper appraisals. We estimated that more than 90 percent of estate tax returns filed in 2009 included assets, deductions, or exclusions of more than $50,000 in categories that IRS officials told us were likely to require the use of an appraiser. In contrast, less than 20 percent of gift tax returns and less than 1 percent of individual returns with noncash charitable contributions were likely to need an appraiser. For estate tax returns, we estimated that the aggregate value of property needing appraisers was at least $75 billion in 2009. This was greater than for gift or individual tax returns (see table 1 and tables 2 through 7 in app. III).
For returns filed in 2007 through 2009, we found that gift tax filers who were likely to have needed an appraiser were at least twice as likely to have been audited as gift tax filers who were not likely to have needed an appraiser. Conversely, for estate tax returns we found no statistically significant evidence that the likely use of an appraiser was associated with a higher probability of being audited. Audit rates for estate tax returns (which ranged from 8.1 percent for returns filed in 2007 to 10.1 percent for those filed in 2009) are typically significantly higher than those for gift tax and individual income tax returns. For individual income tax returns for tax years 2005 through 2008, we also could not detect any statistically significant differences in audit rates based on the likelihood that a Form 8283 filer required a qualified appraisal. For most years, we found no statistically significant differences between the audit rates for taxpayers who claimed at least $5,000 worth of noncash deductions from Section B of Form 8283 and IRS’s reported audit rates for all individual taxpayers, when compared in broad income groups. (For 2007 returns, we found that the audit rate for high-income Section B filers was at least 1 percent higher than the rate for high-income taxpayers in general). We estimated that the rate of individual taxpayer audits that specifically included noncash contributions as an issue was 0.5 percent or less for tax year 2008 but as high as 3.7 percent for tax year 2006. The total amount of upwards adjustments in tax liabilities associated with appraisals issues (and agreed to by taxpayers) was less than $37 million for each year from 2006 to 2008. For 2005 the amount was between $67 million and $91 million.
For most of the years we reviewed, the sizes of our subsamples of audits that specifically identified noncash contributions as being an issue were too small to yield useful information concerning that particular issue’s no-change rate. However, in the case of returns filed for tax year 2007 we were able to estimate that the no-change rate for noncash contribution issues was between 72 percent and 97 percent with a 95 percent level of confidence (see app. II, table 12). IRS officials said the contributions that some individual taxpayers report on their returns are made through partnerships or Subchapter S corporations and that those contributions may be reviewed in audits of those entities rather than in audits of the individuals’ returns. We reviewed data for all 121 partnership and S corporation audits involving noncash contributions that were referred to the Engineering Program (Engineering), a group within LB&I that staffs appraisal experts available to examiners for consultation, for assistance in calendar year 2010. We found that in 31 of those cases, the value of the contribution was identified as an audit issue. Separately, IRS officials told us that from 2007 through February 2012, 500 individual tax returns were adjusted as a result of SB/SE audits of deductions relating to conservation easements claimed by these types of entities. The total amount of PPA penalties assessed in the six existing cases where appraiser penalties have been assessed was $159,713, with the penalty amounts ranging from several hundred dollars to tens of thousands of dollars. An IRS official said that the agency has not abated any of the penalties. IRS provided several reasons for the first PPA penalties not being levied until several years after enactment.
First, given that the PPA penalties apply to appraisals accompanying returns filed after August 17, 2006, IRS officials said that they estimated that returns containing appraisals that could be subject to the penalty would not enter the audit stream for a few years. Therefore, IRS targeted 2009 to issue guidance and make computer system changes. IRS posted a notice about the legislation in 2006 and on August 18, 2009, issued an interim guidance memorandum as initial instructions for examiners on the application of the appraiser penalty. This guidance, which was developed by SB/SE and accepted by the other divisions of IRS, included procedures to make the penalty accessible to examiners and to deliver the appropriate appraiser penalty assessment notices to appraisers. Second, IRS officials said that they had to create the computer infrastructure for examiners to apply and record penalties, draft and approve the form letters to be sent out to those assessed the penalty, and prepare the guidance for IRS examiners in the time between the PPA's passage and the issuance of final guidance on the appraiser penalty. A third factor IRS cited was that its examiners typically conclude a case against a taxpayer before pursuing a case against the appraiser. Figure 1 shows the sequence of events from passage of the PPA to the establishment of formal guidance for examiners in the IRM. Application of the appraiser penalty may increase as examiners become more familiar with the process of initiating these investigations, according to IRS examination officials. They said that traditionally, examiners have used other penalties to address appraiser noncompliance. Prior to the PPA, IRS could assess penalties on appraisers for promoting abusive tax shelters and aiding and abetting tax noncompliance under other sections of the Internal Revenue Code. 
IRS officials said that appraisal issues have never been significant in penalty cases compared to other promoter and preparer violations—officials estimated that maybe 10 or 15 out of every 1,000 penalty cases involved appraisals. IRS’s case examination planning and guidance for SB/SE and LB&I field exams does not explicitly target appraisals, but current selection methods may lead to cases with appraisals indirectly. Examination planners in both SB/SE and LB&I use database tools, such as the Audit Information Management System and the Examination Returns Control System, to manage cases for examination, but these databases do not contain variables that would enable exam planning or high-level case selection and staffing based specifically on appraisals. Consequently, when choosing returns to audit, IRS does not know whether any particular return has a related appraisal. For similar reasons, gift and estate returns also are not targeted for specific appraisal issues. Similarly, IRS does not staff examinations based on appraisals. The examiners who lead these teams are generalists and do not necessarily have specific expertise relating to appraisal techniques. The presence of an appraisal as a potential audit issue does not affect how IRS assigns these generalists to specific cases. Individual noncash contributions, gift, and estate tax returns with appraisals all may be selected for examination indirectly because of characteristics that are correlated with appraisals. For example, SB/SE field audit priorities focus on high wealth individuals, who are more likely to make the kinds of large noncash contributions, give large gifts, or have large estates that would include items requiring appraisals. 
In tax year 2008 individuals with adjusted gross incomes of $200,000 or more accounted for over 75 percent of the noncash contributions of real estate, easements, art, and collectibles reported on Forms 8283, even though they represented less than 15 percent of individuals filing that form. Other SB/SE priorities that may indirectly involve appraisals include abusive transactions and special examination projects. Like SB/SE, LB&I does not select cases based on the inclusion of appraisals. LB&I devotes resources to priorities set in annual examination plans and then allocates the remaining available staff to other work. IRS has targeted noncash contributions for audits, which could include reviews of appraisals, but the targeting is not based on appraisals. IRS selects a portion of its examination inventory using a computerized scoring system called the Discriminant Index Function. Within this system, the presence of unusual, large, or questionable contributions is one of numerous factors that can increase the probability that a return will be selected for audit. IRS also has a matching program that compares Form 8283 with Form 8282, which includes the amounts donee organizations report to have received when they dispose of contributed assets. Mismatches between these returns can lead to an examination. In addition, one of IRS's past special examination projects, in SB/SE's North Atlantic office, specifically targeted deductions relating to façade easements, which could have involved reviews of appraisals on the easements' values. The project, which ran from 2008 to 2010, covered 152 tax returns. As of April 2012, IRS said it had closed 60 cases with an average recommended adjustment per return of $252,067. Although IRS does not select returns for examination based on appraisals, IRS case-review guidance may lead examiners to detect appraisal issues once a return has been selected for review for other reasons. 
Different guidance applies to examiners reviewing individual, estate, and gift returns. The guidance focuses examiner attention on a number of appraisal-related issues, including:
- checking that taxpayers obtained qualified appraisals, if required;
- verifying that the appraised values of noncash contributions exceeding $5,000 are listed in Form 8283, Section B;
- ensuring that taxpayers attach qualified appraisals for certain assets, such as easements registered in historic districts;
- auditing elements of noncash contributions that seem questionable, such as missing, incomplete, or altered forms and documents, and contributions that seem excessively large compared to reported taxpayer income;
- reviewing any large, unusual, or questionable items relating to noncash charitable contributions; and
- reviewing the appraisal supporting donations over a certain amount for completeness and for issues such as questionable authenticity and appraiser judgment.
IRS officials said that it was up to the judgment of the individual examiner to decide whether the potential additional tax to be gained from investigating appraisals in detail warrants the investment of audit resources. The agency does not require documentation of such judgments when the issue was not initially identified for examination. Our review of 80 examination files from tax year 2008 with $5,000 or more in noncash charitable contributions showed that Forms 8283 were incorrectly filled out in 17 cases but the examiner made no change. In 10 of those cases, the examiner did not leave a record explaining why no further action was taken; therefore, we could not determine whether the examiners made a conscious decision not to follow up on the incorrect Forms 8283. In the other seven cases where the Form 8283 was incorrect and the examiner left a record, the taxpayers supplied additional information during the audit that satisfied the examiners. 
This shows that taxpayers can be compliant with the appraisal rules even when they do not fill out Form 8283 correctly. We found no obviously incorrect Forms 8283 in the other 63 cases. Our file review also suggests that, even in cases where examiners do change noncash contribution deductions, few of those changes are due to problems with appraisals. As discussed previously, for tax year 2007, examiners made no changes to such deductions in the majority of cases in which noncash contributions were identified as a potential problem to review. Our file review showed that in only a small percentage of the cases in which noncash contributions were changed was the change made due to a problem with an appraisal. These facts suggest that IRS is not finding widespread noncompliance with appraisals for noncash contributions and the potential revenue yield from auditing appraisals of lower-value items is likely to be small. At the same time, the number of taxpayers who are required to pay for appraisals of items with relatively low values (in real, inflation-adjusted terms) has likely increased because the $5,000 threshold has not been changed since Congress set it in 1984. The threshold would be worth more than $11,000 if adjusted to 2012 dollars. Once IRS selects estate and gift returns for examination, classifiers review the returns to identify issues to be audited closely. IRS guidance instructs classifiers to review returns in their entirety, including a review of any appraisals. IRS estate tax return examiners and managers said that estate tax returns can contain voluminous documentation and examiners do not have enough time to go through each appraisal and audit every possible valuation issue. In cases where valuations are an issue for either estate or gift taxes, examiners review the appraisals attached to the schedules selected for examination and make referrals to IRS appraisal experts in LB&I's Engineering Department, as needed. 
If appraisals are not attached and should be, examiners contact taxpayers to request these. The value of some assets, such as publicly traded stocks, can be determined without complex methodologies, using public market quotations. Examiners also check appraised values using various tools depending on the type of asset. For example, examiners may use “blue books” or other resale guides for personal property, and may use various computer programs that have comparable-sales values for real estate. IRS employs appraisal experts in two areas, Engineering and Art Appraisal Services (AAS), which provide valuation assistance to examination teams in determining an appraisal’s legitimacy. Engineering, as previously mentioned, employs staff appraisers who assist with the examination of complex appraisal issues, and AAS, part of the Appeals Office (Appeals), provides assistance specifically for appraisals of art. Under current IRS guidance, examiners should refer cases with appraisals above certain thresholds to Engineering and AAS appraisers for assistance. Estate and gift tax examiners must at a minimum consult with Engineering for assistance in determining the accuracy of appraised values for examinations where the focus includes appraisal issues. IRS guidance also encourages examiners to request the assistance of Engineering and AAS experts for cases not requiring mandatory referral, if valuation assistance is appropriate. Our review of examination case files found that examiners made referrals in accordance with the guidance. In addition to their internal sources of appraisal expertise, IRS examination teams also may seek outside contracts with professional appraisal experts to assist in reviewing taxpayers’ property valuations. IRS entered into 23 contracts involving cases of noncash contributions, gift or estate taxes from fiscal years 2005 to 2011. 
The total amount awarded for the 23 contracts on noncash contributions, gift, and estate taxes was $1.1 million, an average of $46,000 per contract. An IRS procurement official said that each contract may cover appraisal services for multiple properties. IRS officials said that it is more economical to hire outside appraisal experts who have expertise with certain types of assets, such as easements, than to have many in-house experts in highly specialized areas because the appraisal caseload in such areas would not support full-time staff. IRS policy requires examination teams to consider the availability and expertise of in-house appraisers prior to requesting the assistance of outside experts. In our previous work on human capital management, we listed factors for ensuring high-performance human capital management and high program quality. Standards from our past work that are relevant to our review of IRS's appraiser qualifications include:
- having a process suitable for hiring qualified staff to audit appraisals, including specifically requiring appraisal expertise as a qualification;
- formally training and educating staff to keep up with job duties and individual developmental needs relevant to evaluating or auditing appraisals; and
- ensuring that staff are performing quality work during their examinations of appraisals, including a quality review system that covers appraisal skills and management oversight that evaluates appraisal skills.
Engineering fully followed GAO's three standards for ensuring qualified staff; however, while AAS fully met the hiring standard, it did not meet the other two, creating risk that staff may not be performing quality work. Engineering: The job description for appraisers in Engineering specifically requires applicants to have valuation and appraisal skills as a qualification, meeting the hiring standard. 
For example, the description says appraisers must have a "mastery of appraisal principles and concepts needed to serve as a technical authority." The hiring process then works through a combination of automated scoring and personal review suitable for hiring appraisers. Announced appraiser positions follow the Office of Personnel Management category for appraisers, series GS-1171. An automated scoring system called Career Connector assesses applicants' qualifications. IRS then hires from among the qualified applicants. AAS: The qualifications and hiring process for AAS appraisers are similar to the procedures used by Engineering; thus, AAS meets the standard. Engineering: IRS maintains a formal training program for its Engineering appraisers that starts with new hires and continues with advanced, specialized training, including training on appraisal skills to meet the GAO training standard. The IRM specifies two appraisal organizations—the American Society of Appraisers and the Appraisal Institute—that may provide acceptable continuing education. LB&I has brought in trainers for some courses and maintains a budget for engineers to seek outside training, as well. Internal engineering training documents also state that engineers may develop a learning plan that includes 40 hours of training every year. AAS: Appeals requires 24 to 40 hours of continuing education per year for its employees, including its AAS staff, but it does not explicitly identify appraisal skills as a subject for training, preventing it from meeting the standard. Some AAS staff members have attended conferences on visual arts and the law and the American Society of Appraisers National Conference, which appear relevant to their work. 
However, in contrast to the standard of providing training relevant to specific job duties, the Appeals training guidance does not mention any relevant skills that appraisers must maintain, leaving the possibility that appraisers are not keeping up their skills and not evaluating art appraisals as well as they could. AAS staff have discussed a more specific training program for AAS new hires. Engineering: LB&I meets the GAO standard for monitoring performance quality with respect to its engineering group by subjecting its work to a quality review system and exercising management oversight of appraisal skills. LB&I uses an audit quality assurance system as part of its LB&I Quality Measurement System. Having such a system enables IRS to improve procedures and issue development. In LB&I's quality assurance system, engineers are measured on four technical standards. The four technical standards focus on the following subjects: planning; inspecting and fact finding; development, proposal, and resolution of issues; and workpapers and reports. Each of the technical standards includes a list of specific criteria. The correct auditing of an appraisal is not specifically covered by the standards. However, to the extent that an examination involved an appraisal, an engineer's work on the case would be covered under these four standards. For example, IRM guidance suggests procedures that an engineer should use to gather facts. Such a procedure would apply to gathering facts on an appraisal and would be checked during a quality review. To conduct the quality assurance reviews, LB&I randomly selects coordinated industry cases (CIC) and industry cases (IC). The results are reported in quarterly reports. In the first quarter of fiscal year 2011, five CIC reviews and three IC reviews covered engineers. 
The assessments found problems relating to two of the technical standards on CICs and to three standards for ICs, but none of the problems directly involved reviews of appraisal issues. On a more routine basis, team managers are required to review case performance, including technical aspects of an engineer's work. AAS: Appeals operates a case-review program called the Appeals Quality Measurement System (AQMS); however, most of the cases that AAS works are not Appeals cases and are not covered by this system. Therefore, IRS does not meet the GAO quality review standard with respect to AAS. Given that AAS is involved in only a small percentage of the cases that are appealed, IRS's Director of Tax Policy and Valuation said that she has been considering whether to supplement AQMS's random sample with a periodic, targeted review of AAS cases. She said IRS's goal is to start the reviews in fiscal year 2013. Aside from AQMS, IRS guidance encourages examination offices to provide feedback on AAS's performance that "would be beneficial to the viability of this program." The AAS manager also reviews all cases that AAS completes before they are issued. However, there is no group-wide summation or tracking of these reviews or assurance that AAS staff are performing well specifically in regard to their appraisal work, as stipulated by the standard. Without systematic evaluation, erosion of the quality of AAS's work could occur unobserved. Appraisers play a large role in the amount of tax reported on estate returns, but have less pronounced effects on gift and individual tax returns. Although IRS does not specifically target tax returns that involve appraisals, the policies and procedures that IRS has in place to audit estate, gift, and individual income tax returns ensure some coverage of returns that do involve appraisals. 
For example, IRS already gives priority to higher-income individual returns in the examination selection process, and such returns are more likely to have appraisals supporting noncash contributions than the general population of returns. There are two areas where changes might lead to reduced taxpayer burden or improved agency performance relating to appraisals. First, the fact that the $5,000 threshold at which taxpayers are required to obtain qualified appraisals for noncash contributions has remained unchanged for more than 25 years means that some contributors today must hire appraisers to value property that would not have needed appraisals in the mid-1980s, when the threshold was adopted. The high no-change rate that we found through our data analysis and our file review indicates that IRS examiners find relatively little noncompliance relating to appraisals for noncash contributions. This low rate of detected noncompliance implies that very little revenue is gained by auditing appraisals of assets worth less than $10,000. Consequently, there seems to be little risk in adjusting the threshold for price inflation to better reflect the level Congress initially believed was appropriate to deter noncompliance. This adjustment would reduce the compliance burdens for contributors of such property and, if similar adjustments were made periodically in the future, would serve to maintain consistent treatment of taxpayers over time. Second, the lack of appraisal training requirements for AAS appraisers and the lack of a comprehensive quality control process for AAS cases put the quality of potentially high-value appraisal cases involving art at risk. 
To better ensure the quality of IRS's examination of appraisal issues, the Commissioner of Internal Revenue should take the following two actions: ensure that a more comprehensive quality review system for work performed by AAS staff is implemented and develop more specific and documented appraisal training requirements for AAS staff, as LB&I has done for engineers. To reduce the compliance burden on taxpayers making noncash contributions, Congress should consider raising the threshold at which taxpayers are required to have qualified appraisals for a particular contribution. Raising the threshold and giving IRS the authority to adjust this value for inflation in the future would maintain the consistent treatment of taxpayers over time. We requested written comments from the Commissioner of Internal Revenue and received a letter from the IRS Deputy Commissioner for Services and Enforcement on June 1, 2012 (which is reprinted in app. IV). IRS agreed with our recommendations. First, it agreed that a more comprehensive quality review process is appropriate for AAS, adding that IRS's goal is to supplement AQMS's random sample with a periodic, targeted review of AAS cases starting in fiscal year 2013. Additionally, IRS agreed that more specific appraisal training should be provided, adding that it is finalizing a more specific training curriculum for AAS appraisers. IRS also provided technical comments, which we incorporated into our draft. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix provides further details on the methodologies that we used to estimate (1) the extent to which appraisals are an issue for estate and gift tax returns and for returns of individuals making noncash charitable contributions and (2) the rates at which the Internal Revenue Service (IRS) audits returns with potential appraisal issues. It also explains how we identified cases for our file review and how we obtained data on IRS’s use of contractors. The purposes of this section are to document (1) how we have placed the various assets, exclusions, and deductions reported on Forms 706 into three groups based on the likeliness that a substantial appraisal was needed to value a particular item and (2) how we identified specific estate tax returns as being likely to have involved a substantial appraisal. Data for this analysis came from the Statistics of Income (SOI) estate tax samples for filing years 2007 through 2009 (the latest years available at the time of our analysis). After identifying these various subgroups of taxpayers, we used their taxpayer identification numbers to extract data from the Enforcement Revenue Information System (ERIS) regarding any examinations they underwent for the tax years included in our scope. We converted all dollar amounts into 2012 dollars by multiplying them by the ratio of the 2012 index value for the gross domestic product (GDP) price deflator over the index value for the applicable year of death. 
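This conversion is a straightforward ratio adjustment. A minimal sketch in Python, using illustrative placeholder index values rather than the actual GDP price deflator series:

```python
def to_2012_dollars(amount, year, deflator):
    """Convert a nominal dollar amount from `year` into 2012 dollars by
    scaling with the ratio of GDP price deflator index values."""
    return amount * deflator[2012] / deflator[year]

# Placeholder index values for illustration only (not the real deflator series).
deflator = {2007: 96.0, 2008: 98.0, 2009: 99.0, 2012: 105.0}

# A $1,000,000 asset from a 2007 year of death expressed in 2012 dollars:
print(to_2012_dollars(1_000_000, 2007, deflator))  # → 1093750.0
```

The same conversion applies to the gift tax and noncash-contribution analyses, with the applicable gift or tax year substituted for the year of death.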
We identified “returns with over $50,000 in any asset, deduction, or exclusion category likely to involve an appraiser” as those cases with more than $50,000 (in absolute value) in any category from the list of property likely to involve an appraiser above. We identified “returns with no more than $50,000 in every asset, deduction, or exclusion category likely to involve an appraiser” as those cases with no more than $50,000 (in absolute value) in any category listed in either the first or third lists of property above. We identified taxable estates as those with positive values for net estate tax. We defined the “buffer” before an estate would become taxable as the amount by which total gross estate less exclusion would have to increase or total deductions would have to decrease (holding credits constant) before an estate would become taxable. In other words, if a taxpayer has a buffer of $100,000, it would take some combination of increases in asset valuations or decreases in the value of exclusions and deductions summing to more than $100,000 before any exam adjustments would result in a tax increase. We asked IRS to extract selected data from the ERIS database for the sample of estate taxpayers we identified from the SOI data. We counted any case that had a match in ERIS as having been audited. The methodology that we used for the gift tax is similar to the one that we used above for the estate tax. The principal differences are that, first, we use a lower dollar limit ($25,000 rather than $50,000) in some of our comparisons because the size of the average gift is significantly smaller than the size of the average estate, and, second, we do not distinguish between taxable and nontaxable gift tax returns. (Many gift tax returns are not taxable; however, the amounts reported on these returns can ultimately affect the amounts of tax paid on estate tax returns.) 
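The classification rule and the "buffer" computation described above could be sketched as follows. The function and field names are hypothetical, and the buffer sketch simplifies by taking the taxable threshold as given and holding credits constant:

```python
def likely_used_appraiser(category_amounts, threshold=50_000):
    """Flag a return if any asset, deduction, or exclusion category likely
    to involve an appraiser exceeds the threshold in absolute value."""
    return any(abs(amount) > threshold for amount in category_amounts.values())

def taxable_buffer(gross_estate_less_exclusion, total_deductions, taxable_threshold):
    """Amount by which asset values would have to rise (or deductions fall),
    holding credits constant, before the estate would become taxable."""
    return taxable_threshold - (gross_estate_less_exclusion - total_deductions)

# A return with a $60,000 conservation easement is flagged:
print(likely_used_appraiser({"conservation_easement": 60_000, "stock": 10_000}))  # → True
```

For the gift tax analysis, the same classification logic would apply with the lower $25,000 threshold passed as the `threshold` argument.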
The data used for this analysis come from SOI’s sample of gift tax returns filed in 2007 through 2009, the latest available at the time of our analysis. The property and deduction categories recorded from these returns are slightly different from those recorded from the estate tax returns. We converted all dollar amounts into 2012 dollars by multiplying them by the ratio of the 2012 index value for the GDP price deflator over the index value for the applicable gift year. To estimate the extent to which appraisals are used to support noncash contributions, we first reviewed IRS requirements for recording appraisals on Form 8283, Noncash Charitable Contributions. Next, we used data from SOI’s annual studies of noncash contributions relating to the amounts and types of contributions that taxpayers reported on various parts of Form 8283 from tax years 2005 through 2008 (the latest years available at the time of our analysis) to (1) identify an upper bound for the number of taxpayers who potentially required qualified appraisals to support noncash contribution deductions claimed on Form 1040, Schedule A, Line 17, and (2) determine how many Form 8283 filers we could identify as either being likely to require a qualified appraisal or unlikely to require one. Filtering the SOI data involved the following steps. Qualified appraisals are not required for any donations reported in Section A of Form 8283; therefore, we excluded Form 8283 filers who did not report any contributions in Section B of the form. Furthermore, if a taxpayer reports a donation in Section B but does not carry any amount to Schedule A of the Form 1040, the taxpayer is not actually claiming any deduction for that donation. Consequently, we excluded all filers that did not have a positive value for the amount carried from Section B to Schedule A for any donation. 
Taxpayers should not need a qualified appraisal if either of the following two conditions is met: the total amount moved from Section B to Schedule A for all donations is less than or equal to $5,000, or the only type of donation reported in Section B is intellectual property. We removed all such cases. SOI assigns each donation reported in Form 8283 Section B to one of 19 different property-type categories. Donations in five of these categories (real estate except conservation easements, land, conservation easements, façade easements, and art and collectibles) need qualified appraisals unless they come from a business's stock in trade, inventory, or property held primarily for sale to customers in the ordinary course of its trade or business. We believe that this exception is not likely to apply to properties in the first four of these five categories. To identify donations of art and collectibles that potentially could have qualified for the exception, we did the following:
1. For those filers who made a donation in this category, we used data from SOI's 1040 tax files to determine whether they had a Schedule C business in any of the following listed industries:
- Art dealers
- Independent artists
- Jewelry stores
- Jewelry, watch, etc., wholesalers
- Beverage and tobacco product manufacturing (wineries are included in this category)
- Beer, wine, and liquor stores
- Beer, wine, and distilled spirits wholesalers
2. We assigned any of these filers that had at least one Schedule C in one of these industries a code that indicates that they potentially had an exception.
Donations in two property categories, corporate stock and mutual funds, are excepted from needing qualified appraisals if they have readily available market quotations or are less than $10,000. This exception can also apply to securities, such as bonds, that are reported in the "other securities and investments" category. 
We had no way of reliably identifying which of the securities in these categories had readily available quotations, so we did not attempt to identify individual donations as being excepted or not. However, within the "securities and other investments" category, we did identify bond donations using the taxpayers' descriptions of their donations on line 5(a) and assigned them a code indicating that they were potentially excepted, which distinguished them from other donations in that category that were not excepted. Donations in the categories that do not involve securities may qualify for the inventory exceptions, and vehicle donations can be excepted under additional special conditions. Aside from the "other and unknown" category, donations in the nonsecurities categories, taken individually, account for very small shares of the total value of noncash contribution deductions. The "other and unknown" category accounts for about 6 percent to about 9 percent of the total value of deductions, depending on the year. In some of these cases the type of property donated is truly unknown because the description simply indicates that the donation was made by a partnership owned by the taxpayer. The remaining donations are of such variety that it would be difficult to apply the approach that we have set out above for identifying donations that potentially qualified for the inventory exception. After we completed all of the steps described above, we grouped Form 8283 filers into the following categories:
1. Filers who had a donation in at least one of the following property categories:
- Real estate (except conservation easements)
- Land
- Conservation easements
- Façade easements
- Other securities and investments (excluding donations of bonds)
- Art and collectibles (excluding donations identified as potentially qualifying for the inventory exception)
2. Filers who had donations only in one of the following property categories:
- Corporate stock
- Mutual funds
- Bonds
3. 
All remaining filers with Section B donations that did not meet the criteria for the first two categories. Filers in the first category, on average, are more likely to have appraisals than those in the other two categories; filers in the second category, on average, are less likely to have appraisals than those in the other two categories. We converted all dollar amounts into 2012 dollars by multiplying them by the ratio of the 2012 index value for the GDP price deflator over the index value for the applicable tax year. We asked IRS to extract selected data from the Examination Operational Automation Database (EOAD) for the sample of taxpayers that SOI had identified as having made noncash contributions. We counted any case that had a match in EOAD as having been audited. We identified cases as having had noncash contributions raised as an audit issue based on the form and line codes plus the Standard Accounting Identification Number recorded in that database. Given the limitations of the issue coding in the database, we can report on adjustments relating to noncash contributions but not specifically relating to the appraisals of those contributions. We identified audits in which noncash contributions were an issue as having been no-change cases if the agreed adjustment amount for that issue was zero. We used the results of our data matching for tax year 2008 to identify the cases for which we requested examination files to review. We reviewed the cases using a data collection instrument at IRS’s New Carrollton, Maryland, office. We conducted this performance audit from October 2010 through June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. IRS has established rules and procedures for taxpayers to follow when using appraisals to support noncash charitable contributions, estate, and gift tax claims. Under certain conditions, taxpayers must use qualified appraisals to support noncash-contribution deductions on Form 1040, Schedule A. Taxpayers list the total value of noncash contributions on Schedule A, line 17. Taxpayers claiming noncash charitable contributions over $500 must submit Form 8283, Noncash Charitable Contributions, which has two sections. In Section A of the form, taxpayers report noncash contributions that do not require qualified appraisals. Such contributions include items, or groups of similar items, with a claimed deduction of $5,000 or less, and securities of any value with readily available market quotations. For contributions of more than $5,000, taxpayers must have an appraisal done and fill out Section B of Form 8283. Exceptions to the $5,000 threshold include nonpublicly traded stock of $10,000 or less; vehicles if the deduction is limited to gross proceeds from sale; intellectual property; certain securities considered to have market quotations readily available; inventory and property donated by corporations that are “qualified contributions” for the care of the ill or infants; and stock in trade, inventory, or property held primarily for sale to customers. Figure 2 shows the section of Form 8283 where taxpayers provide descriptions and appraised values of donated property valued at more than $5,000. Taxpayers are required to attach the written appraisals to the return only for contributions of art valued at $20,000 or more, any deduction of more than $500,000, contributions of easements on buildings in historic districts, and deductions of more than $500 for clothing and household items not in good used condition.
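As a rough sketch (not IRS guidance), the Section A versus Section B rule described above can be expressed as a simple decision function. The function name is ours, and the statutory exceptions listed above (vehicles, intellectual property, inventory, nonpublicly traded stock of $10,000 or less, and others) are deliberately not modeled.

```python
def needs_section_b_appraisal(claimed_deduction, has_market_quotes=False):
    """Simplified sketch of the Form 8283 rule described above: securities
    with readily available market quotations are reported in Section A
    regardless of value; other items (or groups of similar items) with a
    claimed deduction over $5,000 generally require a qualified appraisal
    and Section B. The statutory exceptions are not modeled here."""
    if has_market_quotes:
        return False  # Section A, any value
    return claimed_deduction > 5000  # Section B with appraisal
```

For instance, a $6,000 donation of used furniture would fall under Section B in this sketch, while $250,000 of publicly traded stock would not.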
Charitable organizations that receive contributions listed in Section B of Form 8283 generally must report them to IRS on Form 8282. IRS requires that taxpayers support the claimed value of property in estate transfers with an appraisal, which may be, but does not have to be in every situation, a written appraisal prepared by a professional appraiser. The body of law covering qualified appraisals for noncash charitable contributions does not apply to estate or gift taxes. Taxpayers may owe taxes on the property of an estate transferred at death, if the gross value of the estate exceeds annually established exclusion levels. The exclusion levels for the estates of those who died in certain recent years were $1.5 million for 2004 to 2005, $2 million for 2006 to 2008, $3.5 million for 2009, and $5 million for 2010 to 2012. Following taxpayers’ deaths, appointed estate executors file estate returns on Form 706, United States Estate (and Generation-Skipping Transfer) Tax Return, if the estate is worth more than the annual exclusion. Appointed executors must include an explanation or documentation detailing how the value of estate property was determined. Written appraisals prepared by professional appraisers are one of the acceptable valuation methods, and appropriate documentation will vary depending on the type of asset. However, written appraisals are required to support the value of real property claimed in Schedule A-1, artwork or collectibles worth more than $3,000 individually or more than $10,000 collectively claimed in Schedule F, and conservation easement exclusions reported in Schedule U. Taxpayers may be subject to taxes on property transferred as gifts and must provide valuation support for the property’s claimed value. Gifts may be taxable if their value exceeds annually established exclusion values.
The exclusion levels for gift transfers in recent years were $10,000 from 1998 to 2001, $11,000 from 2002 to 2005, $12,000 from 2006 to 2008, and $13,000 from 2009 to the present. Gift donors file gift returns on Form 709, United States Gift (and Generation-Skipping Transfer) Tax Return, if gifts exceed the exclusion value. Donors must list taxable gifts in Schedule A and include one of a number of acceptable valuation documents, among them a written appraisal prepared by a professional appraiser, or an explanation of how the value was determined. For calendar year 2007, IRS recorded 257,485 donors who transferred $45.2 billion in gifts. Less than 4 percent of all gift returns were taxable, accounting for $2.8 billion in gift taxes. Three types of assets—cash, stock, and real estate—accounted for 87 percent of all gifts. For noncash charitable contributions, the Pension Protection Act (PPA) of 2006 lowered the threshold for substantial valuation misstatements from 200 percent of the correct valuation to 150 percent. Substantial valuation misstatements subject the taxpayer to a penalty equal to 20 percent of the underpayment attributable to the misstatement. For estate and gift property, PPA increased the threshold for substantial valuation understatements from 50 percent to 65 percent. Gross valuation misstatements on any return are subject to an increased penalty equal to 40 percent of the portion of the underpayment attributable to the misstatement. For noncash charitable contributions, PPA lowered the threshold for gross valuation misstatement from 400 percent of the correct valuation to 200 percent. For estate or gift property, PPA raised the threshold for gross valuation understatements from 25 percent to 40 percent of the supported value. Tables 2 through 12 contain data on appraisal usage and IRS’s appraisal enforcement. 
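The post-PPA thresholds described above can be summarized as a small classification rule. This is a simplified sketch: the function name is ours, boundary treatment is approximate, and the additional statutory conditions for actually imposing a penalty (such as minimum underpayment amounts) are not modeled.

```python
def classify_valuation_misstatement(claimed, correct, kind):
    """Classify a valuation misstatement under the post-PPA thresholds
    described above. Charitable-contribution misstatements are
    overstatements (claimed above correct); estate and gift misstatements
    are understatements (claimed below correct)."""
    ratio = claimed / correct
    if kind == "charitable":
        if ratio >= 2.00:         # 200 percent of correct value or more
            return "gross"        # 40 percent penalty on the underpayment
        if ratio >= 1.50:         # 150 percent of correct value or more
            return "substantial"  # 20 percent penalty on the underpayment
    elif kind == "estate_gift":
        if ratio <= 0.40:         # 40 percent of correct value or less
            return "gross"
        if ratio <= 0.65:         # 65 percent of correct value or less
            return "substantial"
    return "none"
```

So a charitable deduction claimed at $2,500 for property worth $1,000 would be a gross misstatement in this sketch, while an estate asset reported at $600 against a correct value of $1,000 would be a substantial understatement.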
In addition to the contact named above, James Wozny, Assistant Director; Anthony Bova; Michael Brostek; Sara Daleski; Eric Gorman; Suzanne Heimbach; Karen O’Conor; Melanie Papasian; Albert Sim; Sabrina Streagle; Karen Villafana; and William Woods made key contributions to this report.
Misstated appraisals used to support tax returns have long caused concern. In 2006, Congress adopted the Pension Protection Act, which changed the criterion for when appraisals are considered to be substantially misstated and created a penalty for improper appraiser practices and established qualification requirements for appraisers with respect to noncash charitable deductions. The Tax Technical Corrections Act of 2007 extended the penalty for misstated appraisals to estate and gift taxes. Among its objectives, GAO was asked to (1) describe the extent to which individual, estate, and gift tax returns are likely to involve an appraiser and the extent to which IRS audits them; (2) describe how IRS selects returns likely to involve appraisals for compliance examinations, and assess whether the current appraisal threshold is useful; and (3) assess IRS procedures for ensuring that its appraisal experts are qualified. To accomplish these objectives, GAO analyzed IRS data, reviewed IRS guidance, and interviewed appropriate IRS officials. Appraisers’ most prominent role relative to the three types of tax returns GAO studied is in the valuation of estates. In the most recent years for which GAO had data, appraisers were likely involved in the valuation of property worth from $75 billion to $167 billion reported on estate tax returns in 2009. In contrast, less than $17 billion worth of gifts in 2009 and less than $10 billion in noncash contributions in 2008 likely involved an appraiser. Gift tax returns that likely used appraisers had higher audit rates than gift returns that were unlikely to have appraisers. The use of appraisers was not associated with higher audit rates for estate tax returns and individual returns with noncash contributions. The Internal Revenue Service’s (IRS) procedures for selecting returns to audit do not specifically target noncash contributions or gift or estate tax returns supported by appraisals.
Nevertheless, returns with appraisals do get included in the population of audited returns because certain types of returns on which IRS does focus, such as higher-income ones, are also the most likely ones to have noncash charitable contributions that require appraisals. The current appraisal threshold for certain contributions over $5,000 has existed since 1984. The absence of an inflation adjustment over the past 25 years means that many contributors who pay for appraisals would not have needed to do so when the current threshold was first introduced. IRS seldom takes issue with appraisals for noncash contributions. Consequently, there seems to be little risk in Congress raising the $5,000 threshold. IRS appraisal experts in one division met standards for ensuring that they were qualified. However, art appraisal experts in another division are not subject to either a comprehensive quality review program or continuing education requirements specific to appraising art. The lack of comprehensive quality reviews and mission-specific continuing education requirements could make the art appraisers less effective than they otherwise would be. GAO recommends that IRS develop a comprehensive quality review program for Art Appraisal Services (AAS) and establish appraisal training requirements specifically for AAS staff. Congress also should consider raising the dollar threshold at which qualified appraisals are required for noncash contributions to reflect inflation. IRS agreed with our recommendations.
Under NEPA, federal agencies generally are to evaluate the likely environmental effects of projects they are proposing by preparing either an Environmental Assessment (EA) or a more detailed Environmental Impact Statement (EIS), assuming no Categorical Exclusion (CE) applies. Agencies may prepare an EA to determine whether a proposed project is expected to have a potentially significant impact on the human environment. If prior to or during the development of an EA, the agency determines that the project may cause significant environmental impacts, an EIS should be prepared. However, if the agency, in its EA, determines there are no significant impacts from the proposed project or action, then it is to prepare a document—a Finding of No Significant Impact—that presents the reasons why the agency has concluded that no significant environmental impacts will occur if the project is implemented. An EIS is a more detailed statement than an EA, and NEPA implementing regulations specify requirements and procedures—such as providing the public with an opportunity to comment on the draft document—applicable to the EIS process that are not mandated for EAs. If a proposed project fits within a category of activities that an agency has already determined normally does not have the potential for significant environmental impacts—a CE—and the agency has established that category of activities in its NEPA implementing procedures, then it generally need not prepare an EA or EIS. The agency may instead approve projects that fit within the relevant category by using one of its established CEs. For example, the Bureau of Land Management (BLM) within the Department of the Interior (Interior) has CEs in place for numerous types of activities, such as constructing nesting platforms for wild birds and constructing snow fences for safety. 
For a project to be approved using a CE, the agency must determine whether any extraordinary circumstances exist in which a normally excluded action may have a significant effect. Figure 1 illustrates the general process for implementing NEPA requirements. Private individuals or companies may become involved in the NEPA process when a project they are developing needs a permit or other authorization from a federal agency to proceed, such as when the project involves federal land. For example, a company may apply for such a permit in constructing a pipeline crossing federal lands; in that case, the agency that is being asked to issue the permit must evaluate the potential environmental effects of constructing the pipeline under NEPA. The private company or developer may in some cases provide environmental analyses and documentation or enter into an agreement with an agency to pay a contractor for the preparation of environmental analyses and documents, but the agency remains ultimately responsible for the scope and content of the analyses under NEPA. The Council on Environmental Quality (CEQ) within the Executive Office of the President oversees the implementation of NEPA, reviews and approves federal agency NEPA procedures, and issues regulations and guidance documents that govern and guide federal agencies’ interpretation and implementation of NEPA. The Environmental Protection Agency (EPA) also plays two key roles in other agencies’ NEPA processes. First, EPA reviews and publicly comments on the adequacy of each draft EIS and the environmental impacts of the proposed actions reviewed in the EIS. If EPA determines that the action is environmentally unsatisfactory, it is required by law to refer the matter to CEQ. Second, EPA maintains a national EIS filing system. 
Federal entities must publish in the Federal Register a Notice of Intent to prepare an EIS and file their draft and final EISs with EPA, which publishes weekly notices in the Federal Register listing EISs available for public review and comment. CEQ’s regulations implementing NEPA require federal agencies to solicit public comment on draft EISs. When the public comment period is finished, the agency proposing to carry out or permitting a project is to analyze comments, conduct further analysis as necessary, and prepare the final EIS. In the final EIS, the agency is to respond to the substantive comments received from other government agencies and the public. Sometimes a federal agency must prepare a supplemental analysis to either a draft or final EIS if it makes substantial changes in the proposed action that are relevant to environmental concerns, or if there are significant new circumstances or information relevant to environmental concerns. Further, in certain circumstances, agencies may—through “incorporation by reference,” “adoption,” or “tiering”—use another analysis to meet some or, in the case of adoption, all of the environmental review requirements of NEPA. Unlike other environmental statutes, such as the Clean Water Act or the Clean Air Act, no individual agency has enforcement authority with regard to NEPA’s implementation. This absence of enforcement authority is sometimes cited as the reason that litigation has been chosen as an avenue by individuals and groups that disagree with how an agency meets NEPA requirements for a given project. For example, a group may allege that an EIS is inadequate, or that the environmental impacts of an action will in fact be significant when an agency has determined they are not. Critics of NEPA have stated that those who disapprove of a federal project will use NEPA as the basis for litigation to delay or halt that project. 
Others argue that litigation only results when agencies do not comply with NEPA’s procedural requirements. Governmentwide data on the number and type of most NEPA analyses are not readily available, as data collection efforts vary by agency (see app. II for a summary of federal NEPA data collection efforts). Agencies do not routinely track the number of EAs or CEs, but CEQ estimates that EAs and CEs comprise most NEPA analyses. EPA publishes and maintains governmentwide information on EISs. Many agencies do not routinely track the number of EAs or CEs. However, based on information provided to CEQ by federal agencies, CEQ estimates that about 95 percent of NEPA analyses are CEs, less than 5 percent are EAs, and less than 1 percent are EISs. These estimates were consistent with the information collected on projects funded by the American Recovery and Reinvestment Act of 2009 (Recovery Act). Projects requiring an EIS are a small portion of all projects but are likely to be high-profile, complex, and expensive. As the Congressional Research Service (CRS) noted in its 2011 report on NEPA, determining the total number of federal actions subject to NEPA is difficult, since most agencies track only the number of actions requiring an EIS. The percentages of EISs, EAs, and CEs vary by agency because of differences in project type and agency mission. For example, the Department of Energy (DOE) reported that 95 percent of its 9,060 NEPA analyses from fiscal year 2008 to fiscal year 2012 were CEs, 2.6 percent were EAs, and 2.4 percent were EISs or supplement analyses. Further, in June 2012, we reported that the vast majority of highway projects are processed as CEs, noting that the Federal Highway Administration (FHWA) within the Department of Transportation (DOT) estimated that approximately 96 percent of highway projects were processed as CEs, based on data collected in 2009. 
Representing the lowest proportion of CEs in the data available to us, the Forest Service reported that 78 percent of its 14,574 NEPA analyses from fiscal year 2008 to fiscal year 2012 were CEs, 20 percent were EAs, and 2 percent were EISs. Of the agencies we reviewed, officials at DOE and the Forest Service told us that CEs are likely underrepresented in their totals because agency systems do not track certain categories of CEs considered “routine” activities, such as emergency preparedness planning. For example, DOE officials stated that the department has two types of CEs: those that (1) are routine (e.g., administrative, financial, and personnel actions; information gathering, analysis, and dissemination) and are not tracked, and those that (2) are documented as required by DOE regulations. EPA publishes and maintains governmentwide information on EISs, updated when Notices of Availability for draft and final EISs are published in the Federal Register. CEQ and NAEP publish publicly available reports on EISs using EPA data. As shown in table 1, the three compilations of EIS data produce different totals. According to CEQ and EPA officials, the differences in EIS numbers shown in table 1 are likely due to different assumptions used to count the number of EISs and minor inconsistencies in the EPA data compiled for the CEQ and NAEP reports and for our analysis of EPA’s data. CEQ obtains the EIS data it reports based on summary totals provided by EPA. Occasionally, CEQ also gathers some CE, EA, and EIS data through its “data call” process, by which it aggregates information submitted by agencies that use different data collection mechanisms of varying quality. According to a January 2011 CRS report on NEPA, agencies track the total draft, final, and supplemental EISs filed, not the total number of individual federal actions requiring an EIS. In other words, agency data generally reflect the number of EIS documents associated with a project, not the number of projects.
Four agencies—the Forest Service, BLM, FHWA, and the U.S. Army Corps of Engineers within the Department of Defense (DOD)—are generally the most frequent producers of EISs, accounting for 60 percent of the EISs in 2012, according to data in NAEP’s April 2013 report (NAEP, Annual NEPA Report 2012 of the National Environmental Policy Act (NEPA) Practice (April 2013)). As shown in table 2, these agencies account for over half of total draft and final EISs from 2008 through 2012, according to NAEP data. Little information exists at the agencies we reviewed on the costs and benefits of completing NEPA analyses. We found that, with few exceptions, the agencies did not routinely track data on the cost of completing NEPA analyses, and that the cost associated with conducting an EIS or EA can vary considerably, depending on the complexity and scope of the project. Information on the benefits of completing NEPA analyses is largely qualitative. Complicating matters, agency activities under NEPA are hard to separate from other environmental review tasks under federal laws, such as the Clean Water Act and the Endangered Species Act; executive orders; agency guidance; and state and local laws. Little information exists on the cost of completing NEPA analyses. With few exceptions, the agencies we reviewed do not track the cost of completing NEPA analyses, although some of the agencies tracked information on NEPA time frames, which can be an element of project cost. In general, we found that the agencies we reviewed do not routinely track data on the cost of completing NEPA analyses. According to CEQ officials, CEQ rarely collects data on projected or estimated costs related to complying with NEPA. EPA officials also told us that there is no governmentwide mechanism to track the costs of completing EISs. Similarly, most of the agencies we reviewed do not track NEPA cost data.
For example, Forest Service officials said that tracking the cost of completing NEPA analyses is not currently a feature of their NEPA data collection system. Complicating efforts to record costs, applicants may, in some cases, provide environmental analyses and documentation or enter into an agreement with the agency to pay for the preparation of NEPA analyses and documentation needed for permits issued by federal agencies. Agencies generally do not report costs that are “paid by the applicant” because these costs reflect business transactions between applicants and their contractors and are not available to agency officials. Two NEPA-related studies completed by federal agencies illustrate how it is difficult to extract NEPA cost data from agency accounting systems. An August 2007 Forest Service report on competitive sourcing for NEPA compliance stated that it is “very difficult to track the actual cost of performing NEPA. Positions that perform NEPA-related activities are currently located within nearly every staff group, and are funded by a large number of budget line items. There is no single budget line item or budget object code to follow in attempting to calculate the costs of doing NEPA.” Similarly, a 2003 study funded by FHWA evaluating the performance of environmental “streamlining” noted that NEPA cost data would be difficult to segregate for analysis. However, DOE tracks limited cost data associated with NEPA analyses. DOE officials told us that they track the funds the agency pays to contractors to prepare NEPA analyses and do not track other costs, such as the time spent by DOE employees. According to DOE data, the average payment to a contractor to prepare an EIS from calendar year 2003 through calendar year 2012 was $6.6 million, with the range being a low of $60,000 and a high of $85 million. DOE’s median EIS contractor cost was $1.4 million over that time period.
More recently, DOE’s March 2014 NEPA quarterly report stated that for the 12 months that ended December 31, 2013, the median cost for the preparation of four EISs for which cost data were available was $1.7 million, and the average cost was $2.9 million. For context, a 2003 task force report to CEQ—the only available source of governmentwide cost estimates—estimated that an EIS typically cost from $250,000 to $2 million. In comparison, DOE’s payments to contractors to produce an EA ranged from $3,000 to $1.2 million with a median cost of $65,000 from calendar year 2003 through calendar year 2012, according to DOE data. In its March 2014 NEPA quarterly report, DOE stated that, for the 12 months that ended December 31, 2013, the median cost for the preparation of 8 EAs was $73,000, and the average cost was $301,000. For governmentwide context, the 2003 task force report to CEQ estimated that an EA typically costs from $5,000 to $200,000. DOE had no cost data on CEs but stated that the cost of a CE—which, in many cases, is for a “routine” activity, such as repainting a building—was generally much lower than the cost of an EA. Some governmentwide information is available on time frames for completing EISs—which can be one element of project cost—but few estimates exist for EAs and CEs because most agencies do not collect information on the number and type of NEPA analyses, and few guidelines exist on time frames for completing environmental analyses (see app. III for information on CEQ NEPA time frame guidelines). NAEP annually reports information on EIS time frames by analyzing information published by agencies in the Federal Register, with the Notice of Intent to complete an EIS as the “start” date, and the Notice of Availability for the final EIS as the “end” date. Our review did not identify other governmentwide sources of these data.
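The NAEP time frame measure described above is simply the number of elapsed days between two Federal Register notices. A minimal sketch, with a function name of our own and illustrative dates:

```python
from datetime import date

def eis_preparation_days(notice_of_intent, final_notice_of_availability):
    """Elapsed days between the Notice of Intent (the 'start' date) and
    the Notice of Availability for the final EIS (the 'end' date), the
    measure NAEP uses for EIS preparation time."""
    return (final_notice_of_availability - notice_of_intent).days
```

As the report notes, such a measure omits any up-front work that occurs before the Notice of Intent is published.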
Based on the information published in the Federal Register, NAEP reported in April 2013 that the 197 final EISs in 2012 had an average preparation time of 1,675 days, or 4.6 years—the highest average EIS preparation time the organization had recorded since 1997. From 2000 through 2012, according to NAEP, the total annual average governmentwide EIS preparation time increased at an average rate of 34.2 days per year. In addition, some agency officials told us that time frame measures for EISs may not account for up-front work that occurs before the Notice of Intent to produce an EIS—the “start” date typically used in EIS time frame calculations. DOT officials told us that the “start” date is unclear in some cases because of the large volume of project development and planning work that occurs before a Notice of Intent is issued. DOE officials made a similar point, noting that time frames are difficult to determine for many NEPA analyses because there is a large volume of up-front work that is not captured by standard time frame measures. According to technical comments from CEQ and federal agencies, to ensure consistency in its NEPA metrics, DOE measures EIS completion time from the date of publication of the Notice of Intent to the date of publication of the notice of availability of the final EIS. Further, according to a 2007 CRS report, a project may stop and restart for any number of reasons that are unrelated to NEPA or any other environmental requirement. For example, a multiyear time frame to complete a project may have been associated with funding issues, engineering requirements, changes in agency priorities, delays in obtaining nonfederal approvals, or community opposition to the project, to name a few (CRS, The National Environmental Policy Act: Streamlining NEPA, RL33267 (Washington, D.C.: Dec. 6, 2007)). Some agency officials told us that their EAs are generally completed in about 1 month but that they may take up to 6 months depending on their complexity.
In addition, DOT officials said that determining the start time of EAs and CEs is even more difficult than for EISs. The time for completing these can depend in large part on how much of the up-front work was done already as part of the preliminary engineering process and how many other environmental processes are involved (e.g., consultations under the Endangered Species Act). The little governmentwide information that is available on CEs shows that they generally take less time to complete than EAs. DOE does not track completion times for CEs, but agency officials stated that they usually take 1 or 2 days. Similarly, officials at Interior’s Office of Surface Mining reported that CEs take approximately 2 days to complete. In contrast, Forest Service took an average of 177 days to complete CEs in fiscal year 2012, shorter than its average of 565 days for EAs, according to agency documents. The Forest Service documents its CEs with Decision Memos, which are completed after all necessary consultations, reviews, and other determinations associated with a decision to implement a particular proposed project are completed. According to agency officials, information on the benefits of completing NEPA analyses is largely qualitative. We have previously reported that assessing the benefits of federal environmental requirements, including those associated with NEPA, is difficult because the monetization of environmental benefits often requires making subjective decisions on key assumptions. According to studies and agency officials, some of the qualitative benefits of NEPA include its role as a tool for encouraging transparency and public participation and in discovering and addressing the potential effects of a proposal in the early design stages to avoid problems that could end up taking more time and being more costly in the long run. Encouraging public participation. 
NEPA is intended to help government make informed decisions, encourage the public to participate in those decisions, and make the government accountable for its decisions. Public participation is a central part of the NEPA process, allowing agencies to obtain input directly from those individuals who may be affected by a federal action. DOE officials referred to this public comment component of NEPA as a piece of “good government architecture,” and DOD officials similarly described NEPA as a forum for resolving organizational differences by promoting interaction between interested parties inside and outside the government. Likewise, the National Park Service within Interior uses its Planning, Environment, and Public Comment (PEPC) system as a comprehensive information and public comment site for National Park Service projects, including those requiring NEPA analyses (CRS, The Role of the Environmental Review Process in Federally Funded Highway Projects: Background and Issues for Congress, R42479 (Washington, D.C.: Apr. 11, 2012)). Agencies have also cited examples of environmental outcomes brought about through the NEPA process. DOE has published a document showing its NEPA “success stories.” In one example from this document, DOE cited the November 28, 2008, Final Programmatic EIS for the Designation of Energy Corridors on Federal Lands in 11 Western States (DOE/EIS-0386), which it had developed in cooperation with BLM. In this case, public comments resulted in the consideration of alternative routes and operating procedures for energy transmission corridors to avoid sensitive environmental resources. Agency activities under NEPA are hard to separate from other required environmental analyses, further complicating the determination of costs and benefits. CEQ’s NEPA regulations specify that, to the fullest extent possible, agencies must prepare NEPA analyses concurrently with other environmental requirements.
CEQ’s March 6, 2012, memorandum on Improving the Process for Preparing Efficient and Timely Environmental Reviews under the National Environmental Policy Act states that agencies “must integrate, to the fullest extent possible, their draft EIS with environmental impact analyses and related surveys and studies required by other statutes or executive orders, amplifying the requirement in the CEQ regulations. The goal should be to conduct concurrent rather than sequential processes whenever appropriate.” Different types of environmental analyses may also be conducted in response to other requirements under federal laws such as the Clean Water Act and the Endangered Species Act; executive orders; agency guidance; and state and local laws. As reported in 2011 by CRS, NEPA functions as an “umbrella” statute; any study, review, or consultation required by any other law that is related to the environment should be conducted within the framework of the NEPA process. As a result, the biggest challenge in determining the costs and benefits of NEPA is separating activities under NEPA from activities under other environmental laws. According to DOT officials, the dollar costs for developing a NEPA analysis reported by agencies also includes costs for developing analyses required by a number of other federal laws, executive orders, and state and local laws, which potentially could be a significant part of the cost estimate. Similarly, DOD officials stated that NEPA is one piece of the larger environmental review process involving many environmental requirements associated with a project. As noted by officials from the Bureau of Reclamation within Interior, the NEPA process by design incorporates a multitude of other compliance issues and provides a framework and orderly process—akin to an assembly line— which can help reduce delays. 
In some instances, a delay in NEPA is the result of a delay in an ancillary effort to comply with another law, according to these officials and a wide range of other sources. Some information is available on the frequency and outcome of NEPA litigation. Agency data, interviews with agency officials, and available studies indicate that most NEPA analyses do not result in litigation, although the impact of litigation could be substantial if a lawsuit affects numerous federal decisions or actions in several states. The federal government prevails in most NEPA litigation, according to CEQ and NAEP data and legal studies. While no governmentwide system exists to track NEPA litigation or its associated costs, NEPA litigation data are available from CEQ, the Department of Justice, and NAEP. Appendix IV describes how these sources gather information in different ways for different purposes. The number of lawsuits filed under NEPA has generally remained stable following a decline after the early years of implementation, according to CEQ and other sources. NEPA litigation began to decline in the mid-1970s and has remained relatively constant since the late 1980s, as reported by CRS in 2007. More specifically, 189 cases were filed in 1974, according to the twenty-fifth anniversary report of CEQ. In 1994, 106 NEPA lawsuits were filed. Since that time, according to CEQ data, the number of NEPA lawsuits filed annually has consistently been just above or below 100, with the exception of a period in the early and mid-2000s. In 2011, the most recent data available, CEQ reported 94 NEPA cases, down from the average of 129 cases filed per year from 2001 through 2008. In 2012, U.S. Courts of Appeals issued 28 decisions involving implementation of NEPA by federal agencies, according to NAEP data. 
Although the number of NEPA lawsuits is relatively small when compared with the total number of NEPA analyses, one lawsuit can affect numerous federal decisions or actions in several states, having a far-reaching impact. In addition to CEQ regulations and an agency’s own regulations, according to a 2011 CRS report, preparers of NEPA analyses and documentation may be mindful of previous judicial interpretation in an attempt to prepare a “litigation-proof” EIS. Such an effort may lead to an increase in the cost and time needed to complete NEPA analyses but not necessarily to an improvement in the quality of the documents ultimately produced. The federal government prevails in most NEPA litigation, according to CEQ and NAEP data and other legal studies. CEQ annually publishes survey results on NEPA litigation that identify the number of cases involving a NEPA-based cause of action; federal agencies that were identified as a lead defendant; and general information on plaintiffs (i.e., grouped into categories, such as “public interest groups” and “business groups”); reasons for litigation; and outcomes of the cases decided during the year. In general, according to CEQ data, NEPA case outcomes are about evenly split between those involving challenges to EISs and those involving other challenges to the adequacy of NEPA analyses (e.g., EAs and CEs). The federal government successfully defended its decisions in more than 50 percent of the cases from 2008 through 2011. For example, in 2011, 99 of the 146 total NEPA case dispositions—68 percent—reported by CEQ resulted in a judgment favorable to the federal agency being sued or a dismissal of the case without settlement. In 2011, that rate increased to 80 percent if the 18 settlements reported by CEQ were considered successes. However, the CEQ data do not present enough case-specific details to determine whether the settlements should be considered as favorable dispositions. 
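For reference, the two percentages reported for 2011 follow directly from the disposition counts in CEQ's data; a minimal sketch of the arithmetic, using only the figures cited above:

```python
# Recompute the 2011 NEPA litigation outcome rates from CEQ's reported counts.
total_dispositions = 146   # total NEPA case dispositions reported by CEQ for 2011
favorable = 99             # judgments favorable to the agency, or dismissals without settlement
settlements = 18           # settlements, whose favorability CEQ's data cannot determine

favorable_rate = round(100 * favorable / total_dispositions)
with_settlements = round(100 * (favorable + settlements) / total_dispositions)

print(favorable_rate)    # 68 percent
print(with_settlements)  # 80 percent, if settlements are counted as successes
```

The 80 percent figure thus depends entirely on treating all 18 settlements as favorable outcomes, which, as noted, the CEQ data cannot confirm.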
The plaintiffs, in most cases, were public interest groups. Reporting litigation outcome data similar to CEQ’s, a January 2014 article on Forest Service land management litigation found that the Forest Service won nearly 54 percent of its cases and lost about 23 percent. About 23 percent of the cases were settled, which the article found to be an important dispute resolution tool. Litigants generally challenged logging projects, most frequently under NEPA and the National Forest Management Act. The article found that the Forest Service had a lower success rate in cases where plaintiffs advocated for less resource use (generally initiated by environmental groups) compared with cases where greater resource use was advocated. The article noted that environmental groups suing the Forest Service for less resource use not only have more potential statutory bases for legal challenges available to them than groups seeking more use of national forest resources but also benefit from more statutes that relate directly to enhancing public participation and protecting natural resources. Other sources of information also show that the federal government prevails in most NEPA litigation. For example, NAEP’s 2012 annual NEPA report stated that the government prevailed in 24 of the 28 cases (86 percent) decided by U.S. Courts of Appeals. A NEPA legal treatise similarly reports that “government agencies almost always win their case when the adequacy of an EIS is challenged, if the environmental analysis is reasonably complete. Adequacy cases raise primarily factual issues on which the courts normally defer to the agency. The success record in litigation is more evenly divided when a NEPA case raises threshold questions that determine whether the agency has complied with the statute. An example is a challenge to an agency decision that an EIS was not required. 
Some lower federal courts are especially sensitive to agency attempts to avoid their NEPA responsibilities.” NAEP also provides detailed descriptions of cases decided by U.S. Courts of Appeals in its annual reports. We provided a draft of this product to the Council on Environmental Quality (CEQ) for governmentwide comments in coordination with the Departments of Agriculture, Defense, Energy, Interior, Justice, and Transportation, and the Environmental Protection Agency (EPA). In written comments, reproduced in appendix V, CEQ generally agreed with our findings. CEQ and federal agencies also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees; Chair of the Council on Environmental Quality; Secretaries of Defense, Energy, the Interior, and Transportation; Attorney General; Chief of the Forest Service within the Department of Agriculture; Administrator of EPA; and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact us at (202) 512-3841 or [email protected]; or [email protected]; and (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix provides information on the scope of our work and the methodology we used to describe the (1) number and type of National Environmental Policy Act (NEPA) analyses, (2) costs and benefits of completing those analyses, and (3) frequency and outcomes of related litigation. We included available information on both costs and benefits to be consistent with standard economic principles for evaluating federal programs and generally accepted government auditing standards. 
To respond to these objectives, we reviewed relevant publications, obtained documents and analyses from federal agencies, and interviewed federal officials and individuals from academia and a professional association with expertise in conducting NEPA analyses. Specifically, to describe the number and type of NEPA analyses and what is known about the costs and benefits of NEPA analyses, we reported information identified through the literature review, interviews, and other sources. We selected the Departments of Defense, Energy, the Interior, and Transportation and the Forest Service within the U.S. Department of Agriculture for analysis because they generally complete the most NEPA analyses. Our findings for these agencies are not generalizable to other federal agencies. To assess the availability of information to respond to these objectives, we (1) conducted a literature search and review with the assistance of a technical librarian; (2) reviewed our past work on NEPA and studies from the Congressional Research Service; (3) obtained documents and analyses from federal agencies; and (4) interviewed officials who oversee federal NEPA programs from the Departments of Defense, Energy, the Interior, Justice, and Transportation; the Forest Service within the Department of Agriculture; the Environmental Protection Agency (EPA); the Council on Environmental Quality (CEQ) within the Executive Office of the President; and individuals with expertise from academia and the National Association of Environmental Professionals (NAEP)—a professional association representing private and government NEPA practitioners. Specifically, to describe the number and type of NEPA analyses from calendar year 2008 through calendar year 2012, we analyzed data identified through the literature review and interviews. We focused on data and documents maintained by CEQ, EPA, and NAEP. 
CEQ and NAEP periodically report data on the number of certain types of NEPA analyses, and EPA maintains a database of Environmental Impact Statements, one of its roles in implementing NEPA. To generate information on the number of Environmental Impact Statements from EPA’s database, we sorted the data by calendar year and counted the number of analyses for each year. We did not conduct an extensive evaluation of this database, although a high-level analysis discovered potential inconsistencies. For example, EPA’s database contained entries with the same unique identifier, making it difficult to identify the exact number of NEPA analyses. We discussed these inconsistencies with EPA officials, who told us that they were aware of certain errors due to manual data entry and the use of different analysis methods. These officials said that EPA EIS data provided to others may differ because EPA periodically corrects the manually entered data. We did not count duplicate records in our analysis of EPA’s data. We believe these data are sufficiently reliable for the purposes of this report. To describe what is known about the costs and benefits of NEPA analysis, we reported the available information on the subject identified through the literature review and interviews. To describe the frequency and outcome of NEPA litigation, we (1) reviewed laws, regulations, and agency guidance; (2) reviewed NEPA litigation data generated by CEQ and NAEP; (3) interviewed Department of Justice officials; and (4) reviewed relevant legal studies. Information from these sources is cited in footnotes throughout this report. To answer the various objectives, we relied on data from several sources. To assess the reliability of data collected by agencies and NAEP, we reviewed existing documentation, when available, and interviewed officials knowledgeable about the data. We found all data sufficiently reliable for the purposes of this report. 
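The sort, deduplicate, and count-by-year step described above can be sketched as follows. This is a hypothetical illustration only: the file layout and column names (eis_id, year) are assumptions, since the report does not describe the actual schema of EPA's database.

```python
import csv
from collections import Counter

def count_eis_by_year(path):
    """Count EIS records per calendar year, skipping rows whose
    unique identifier has already been seen (duplicate entries)."""
    seen = set()
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["eis_id"] in seen:  # duplicate unique identifier: do not count
                continue
            seen.add(row["eis_id"])
            counts[row["year"]] += 1
    return dict(counts)
```

Under these assumptions, a record that appears twice with the same identifier contributes only once to its year's total, matching the decision not to count duplicate records.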
We conducted this performance audit from June 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Federal National Environmental Policy Act (NEPA) data collection efforts vary by agency. The Council on Environmental Quality’s (CEQ) NEPA implementing regulations set forth requirements that federal agencies must adhere to, and require federal agencies to adopt their own procedures, as necessary, that conform with NEPA and CEQ’s regulations. Federal agencies decide how to apply CEQ regulations in the NEPA process. According to a 2007 Congressional Research Service (CRS) report, the CEQ regulations were meant to be generic in nature, with individual agencies formulating procedures applicable to their own projects. The report states that this approach was taken because of the diverse nature of projects and environmental impacts managed by federal agencies with unique mandates and missions. Consequently, NEPA procedures vary to some extent from agency to agency, and comprehensive governmentwide data on NEPA analyses are generally not centrally collected. As stated by a CEQ official, “there is no master NEPA spreadsheet, and there are many gaps in NEPA-related data collected across the federal government.” To obtain information on agency NEPA activities, the official said that CEQ works closely with its federal agency NEPA contact group, composed of key officials responsible for implementing NEPA in each agency. 
CEQ meets regularly with these officials and uses this network to collect NEPA-related information through requests for information, whereby CEQ distributes a list of questions to relevant agencies and then collects and reports the answers. According to CEQ officials, NEPA data reported by CEQ are generated through these requests, which have quality assurance limitations because related activities at federal departments are themselves diffused throughout various offices. Of the agencies we reviewed, the Departments of Defense, the Interior, and Transportation do not centrally collect information on NEPA analyses, allowing component agencies to collect the information, whereas the Department of Energy and the Forest Service within the Department of Agriculture aggregate certain data. Department of Defense (DOD). Each of the military services and defense agencies collects data on NEPA analyses, but DOD does not aggregate information that is collected on the number and type of NEPA analyses at the departmentwide level. Data collection within the military services and agencies is decentralized, according to DOD officials. For example, the Army collects Environmental Impact Statement (EIS) data at the Armywide level, and responsibility for Environmental Assessments (EA) and Categorical Exclusions (CE) is delegated to the lowest possible command level. DOD officials said that each of the services and defense agencies works to maintain a balance between the work that needs to be completed and the management effort needed to accomplish that work. While the level of information collected may vary by service or defense agency, each collects the information that it has determined necessary to manage its NEPA workload. According to these officials, every new information system and data call must generally come from existing funding, taking resources from other tasks. Department of the Interior (Interior). 
Data are not collected at the department level, according to Interior officials, and Interior conducts its own departmentwide data calls to component bureaus and entities whenever CEQ asks for NEPA-related information. The data collection efforts of its individual bureaus vary considerably. For example, the National Park Service uses its Planning, Environment, and Public Comment (PEPC) system as a comprehensive information and public comment site for National Park Service projects. Other Interior bureaus are beginning to track information or rely on less formal systems rather than formalized databases. For example, the Bureau of Indian Affairs uses its internal NEPA Tracker system—started in September 2012—which the bureau states collects information on NEPA analyses to create a better administrative record and potentially identify new categories of CEs for future development and use. Prior to the NEPA Tracker system, the Bureau of Indian Affairs tracked NEPA analyses less formally, with varying information quality across the bureau’s different entities, according to agency officials. According to Bureau of Land Management officials, the bureau has developed and is currently implementing its ePlanning system, a comprehensive, bureau-wide, Internet-based tool for writing, reviewing, publishing, and receiving public commentary on land use plans and NEPA documents. The tool is fully operational, and the bureau expects to complete implementation in 2015. At the Bureau of Reclamation, NEPA activities are cataloged and tracked by each region or area office according to local procedures, and the information on the number and type of NEPA analyses resides with these offices. NEPA information at the Fish and Wildlife Service, according to agency officials, is collected at the refuge level. Department of Transportation (DOT). 
According to agency officials, each DOT administration—such as the Federal Highway Administration (FHWA), which funds highway projects; the Federal Motor Carrier Safety Administration, which develops commercial motor vehicle and driver regulations; and the Federal Aviation Administration, which is responsible for, among other things, the nation’s air traffic control system—has its own NEPA operating and data collection procedures that track NEPA-related information to varying degrees because each mode of transportation has different characteristics and needs. Environmental reviews for highway projects funded by FHWA have long been of interest to Congress and federal, state, and local stakeholders. FHWA and its 52 division offices have traditionally used an internal data system to track EIS documents. FHWA officials told us that they are in the process of replacing the agency’s legacy system with the new Project and Program Action Information (PAPAI) system, which went online in March 2013. PAPAI is capable of tracking information on EISs, EAs, and CEs, including project completion time frames, but its use is not mandatory, according to DOT officials. Department of Energy (DOE). The Office of NEPA Policy and Compliance within DOE maintains a website where it posts extensive agencywide NEPA documentation, including information on the number and type of NEPA analyses completed since the mid-1990s and a series of quarterly lessons learned reports documenting certain NEPA performance metrics, including information on time and cost. DOE’s September 2013 quarterly report documents available information on its NEPA analysis workload, completion times, and costs from 2003 through 2012. 
DOE began tracking cost and completion time metrics in the mid-1990s because it was concerned about the timeliness and cost of NEPA reviews. DOE officials told us they collect these data because, in their view, “what gets measured gets done.” Making DOE NEPA analyses easily available allows others to apply the best practices and potentially avoid costly litigation, according to DOE officials. Department of Agriculture’s Forest Service. The Forest Service’s computer system, known as the Planning, Appeals, and Litigation System, provides information for responding to congressional requests for NEPA data, supporting preparation for responding to lawsuits, and documenting overall project objectives and design. As stated by agency officials, data from the system can be used to identify trends in the preparation of NEPA analyses over time. This information can be valuable to managers in managing overall NEPA compliance and can identify innovative ways to deal with recurring environmental issues that affect projects, according to Forest Service officials. The system also provides tools to help the agency meet NEPA requirements, including automatic distribution of the schedule of proposed NEPA actions, a searchable database of draft EISs, and electronic filing of draft and final EISs to EPA. CEQ also identified as a best practice the service’s electronic Management of NEPA (eMNEPA) pilot—a suite of web-based tools and databases to improve the efficiency of environmental reviews by enabling online submission and processing of public comments, among other things. On March 17, 2011, CEQ invited members of the public and federal agencies to nominate projects employing innovative approaches to complete environmental reviews more efficiently and effectively. On August 31, 2011, CEQ announced that eMNEPA was selected as part of the first NEPA pilot project. 
CEQ officials told us that they would prioritize the use of CEQ oversight resources to focus on identifying, disseminating, and encouraging agencies to use their additional resources in improving operational efficiency through tools like eMNEPA rather than focusing on improved data collection and reporting. Specifically, CEQ officials said that information technology tools that enable easy access to relevant technical information across the federal government are also of value in enhancing the ability of agencies to conduct efficient and timely NEPA environmental reviews. “. . . even large complex energy projects would require only about 12 months for the completion of the entire EIS process. For most major actions, this period is well within the planning time that is needed in any event, apart from NEPA. The time required for the preparation of program EISs may be greater. The Council also recognizes that some projects will entail difficult long-term planning and/or the acquisition of certain data which of necessity will require more time for the preparation of the EIS. Indeed, some proposals should be given more time for the thoughtful preparation of an EIS and development of a decision which fulfills NEPA’s substantive goals. For cases in which only an environmental assessment will be prepared, the NEPA process should take no more than 3 months, and in many cases substantially less, as part of the normal analysis and approval process for the action.” CEQ’s National Environmental Policy Act (NEPA) regulations do not specify a required time frame for completing NEPA analyses. The regulations state that CEQ has decided that prescribed universal time limits for the entire NEPA process are too inflexible. 
The regulations also state that federal agencies are encouraged to set time limits appropriate to individual actions and should take into consideration factors such as the potential for environmental harm, size of the proposed action, and degree of public need for the proposed action, including the consequences of delay. CEQ’s March 6, 2012, memorandum on Improving the Process for Preparing Efficient and Timely Environmental Reviews under the National Environmental Policy Act encourages agencies to develop meaningful and expeditious timelines for environmental reviews, and it amplifies the factors an agency should take into account when setting time limits, noting that establishing appropriate and predictable time limits promotes the efficiency of the NEPA process. The CEQ regulations also require agencies to reduce delay by, among other things, integrating the NEPA process into early project planning, emphasizing interagency cooperation, integrating NEPA requirements with other environmental review requirements, and adopting environmental documents prepared by other federal agencies. In general, there is no governmentwide system to track National Environmental Policy Act (NEPA) litigation and its associated costs. The Council on Environmental Quality (CEQ), the Department of Justice, and the National Association of Environmental Professionals (NAEP) gather NEPA litigation information in different ways for different purposes. CEQ collects NEPA litigation data through periodic requests for information, whereby it distributes a list of questions to the general counsel offices of relevant agencies and then collects and reports the information on its website. CEQ’s NEPA litigation survey presents information on NEPA-based claims brought against agencies in court, including aggregated information on types of lawsuits and who brought the suits. 
The survey results do not present information on the cost of NEPA litigation because, according to officials from several of the agencies we reviewed, agencies do not track this information. For example, Forest Service officials told us that they do not centrally track the cost or time associated with the preparation for litigation. As another example, the Department of Energy’s litigation data do not include the cost of litigation or the time spent on litigation-related tasks, although it includes the number of NEPA-related cases over time. The Department of Justice defends nearly all federal agencies when they face NEPA litigation. The department’s Case Management System database tracks limited information on NEPA cases handled by the Environment and Natural Resources Division, and the Executive Office for U.S. Attorneys case management system, called the Legal Information Office Network System, tracks NEPA cases at individual U.S. Attorneys’ Offices to some extent. However, Department of Justice officials told us that these systems do not interface with each other, so it would be impossible to gather comprehensive information on NEPA litigation from the Department of Justice. Such litigation is handled both by the Department of Justice’s Environment and Natural Resources Division and by individual U.S. Attorneys’ Offices depending upon the agency, the type of case, and the expertise of the department’s personnel. Agency personnel provide the Department of Justice with the administrative record that forms the basis of judicial review and provide assistance throughout the litigation process, as needed. Further, Department of Justice officials told us that the department is not able to comprehensively identify all NEPA litigation because a single case could have numerous other environmental claims in addition to a single NEPA claim. In such instances, the Environment and Natural Resources Division’s Case Management System may not capture every claim raised in the case. 
As a result, the Department of Justice does not track trends in NEPA litigation or staff hours spent on NEPA cases. The cost of collecting the information would outweigh the management benefits of doing so, according to these officials. The Department of Justice’s NEPA litigation data are not comparable to CEQ’s because the department’s system is designed to track cases, while CEQ provides information on NEPA events—such as the number of cases filed, number of injunctions or remands, and other decisions. There could be multiple NEPA events or decisions related to a single case. Department of Justice officials stated that they would not be able to reconcile CEQ’s information with information in Department of Justice systems. NEPA litigation data collected by the third source—NAEP—differ from those collected by CEQ or the Department of Justice. NAEP collects information on NEPA cases decided by U.S. Courts of Appeals because these cases are generally the most significant to the NEPA practitioners that are NAEP’s members, according to NAEP officials. The NAEP report contains case study summaries of the latest developments in NEPA litigation to help NEPA practitioners understand how to account for new court-mandated requirements in NEPA analyses and does not attempt to track all NEPA litigation across the government. In addition to the individuals named above, Anne Johnson and Harold Reich (Assistant Directors); Ronnie Bergman; Cindy Gilbert; Richard P. Johnson; Terence Lam; Alison O’Neill; Pepe Thompson; and John Wren made key contributions to this report.
NEPA requires all federal agencies to evaluate the potential environmental effects of proposed projects--such as roads or bridges--on the human environment. Agencies prepare an EIS when a project will have a potentially significant impact on the environment. They may prepare an EA to determine whether a project will have a significant potential impact. If a project fits within a category of activities determined to have no significant impact--a CE--then an EA or an EIS is generally not necessary. The adequacy of these analyses has been a focus of litigation. GAO was asked to review various issues associated with completing NEPA analyses. This report describes information on the (1) number and type of NEPA analyses, (2) costs and benefits of completing those analyses, and (3) frequency and outcomes of related litigation. GAO included available information on both costs and benefits to be consistent with standard economic principles for evaluating federal programs, and selected the Departments of Defense, Energy, the Interior, and Transportation, and the USDA Forest Service for analysis because they generally complete the most NEPA analyses. GAO reviewed documents and interviewed individuals from federal agencies, academia, and professional groups with expertise in NEPA analyses and litigation. GAO's findings are not generalizable to agencies other than those selected. This report has no recommendations. GAO provided a draft to CEQ and agency officials for review and comment, and they generally agreed with GAO's findings. Governmentwide data on the number and type of most National Environmental Policy Act (NEPA) analyses are not readily available, as data collection efforts vary by agency. 
NEPA generally requires federal agencies to evaluate the potential environmental effects of actions they propose to carry out, fund, or approve (e.g., by permit) by preparing analyses of different comprehensiveness depending on the significance of a proposed project's effects on the environment--from the most detailed Environmental Impact Statements (EIS) to the less comprehensive Environmental Assessments (EA) and Categorical Exclusions (CE). Agencies do not routinely track the number of EAs or CEs, but the Council on Environmental Quality (CEQ)--the entity within the Executive Office of the President that oversees NEPA implementation--estimates that about 95 percent of NEPA analyses are CEs, less than 5 percent are EAs, and less than 1 percent are EISs. Projects requiring an EIS are a small portion of all projects but are likely to be high-profile, complex, and expensive. The Environmental Protection Agency (EPA) maintains governmentwide information on EISs. A 2011 Congressional Research Service report noted that determining the total number of federal actions subject to NEPA is difficult, since most agencies track only the number of actions requiring an EIS. Little information exists on the costs and benefits of completing NEPA analyses. Agencies do not routinely track the cost of completing NEPA analyses, and there is no governmentwide mechanism to do so, according to officials from CEQ, EPA, and other agencies GAO reviewed. However, the Department of Energy (DOE) tracks limited cost data associated with NEPA analyses. DOE officials told GAO that they track the money the agency pays to contractors to conduct NEPA analyses. According to DOE data, its median EIS contractor cost for calendar years 2003 through 2012 was $1.4 million. For context, a 2003 task force report to CEQ--the only available source of governmentwide cost estimates--estimated that a typical EIS cost from $250,000 to $2 million. 
EAs and CEs generally cost less than EISs, according to CEQ and federal agencies. Information on the benefits of completing NEPA analyses is largely qualitative. According to studies and agency officials, some of the qualitative benefits of NEPA include its role in encouraging public participation and in discovering and addressing project design problems that could be more costly in the long run. Complicating the determination of costs and benefits, agency activities under NEPA are hard to separate from other required environmental analyses under federal laws such as the Endangered Species Act and the Clean Water Act; executive orders; agency guidance; and state and local laws. Some information is available on the frequency and outcome of NEPA litigation. Agency data, interviews with agency officials, and available studies show that most NEPA analyses do not result in litigation, although the impact of litigation could be substantial if a single lawsuit affects numerous federal decisions or actions in several states. In 2011, the most recent data available, CEQ reported 94 NEPA cases filed, down from the average of 129 cases filed per year from calendar year 2001 through calendar year 2008. The federal government prevails in most NEPA litigation, according to CEQ and legal studies.
Certain characteristics of commercial trucks and buses make them inherently vulnerable to terrorist attacks and therefore difficult to secure. The commercial trucking and bus industries are open by design, with multiple access points and terminals so that vehicles can move large numbers of people and volumes of goods quickly. The openness of this sector and the large numbers of riders and quantities of goods on vehicles with access to metropolitan areas or tourist destinations also make them both difficult to secure and attractive targets for terrorists because of the potential for mass casualties and economic damage and disruption. In addition, the multitude of private commercial truck and bus companies and their diversity in size and cargo complicate efforts to develop security measures and mitigation strategies that are appropriate for the entire industry. Between 1997 and 2008 there were 510 terrorist-related commercial truck and bus bombing attacks worldwide, killing over 6,000 people, with 106 bombings occurring during 2007 alone, killing over 2,500 people. Of the 510 bombings since 1997, 364 have been bus bombings and 146 have been truck bombings; 156 have been in Iraq and 354 have been in countries other than Iraq. In 2007, the use of truck bombs as a terrorist tactic more than tripled and resulted in 2,072 deaths. While trucks were involved in just 29 percent of the bombings since 1997, they accounted for 56 percent of the deaths. Vehicle Borne Improvised Explosive Devices (VBIEDs) are vehicles loaded with a range of explosive materials that are detonated when they reach their target. VBIEDs can also be used to detonate flammable fuel trucks and disperse toxic substances.
Terrorists have used a variety of trucks—rental, refrigerator, cement, dump, sewerage, gasoline tanker, trucks with chlorine and propane tanks, and fire engines—to attack a broad range of critical infrastructure, including police and military facilities, playgrounds, childcare centers, hotels, and bridges. Worldwide, commercial buses have also been attacked numerous times, including in Israel, England, Iraq, the Philippines, Lebanon, Sri Lanka, India, Russia, and Pakistan. In the United States, terrorists used a commercial truck containing fertilizer-based explosives to attack the World Trade Center in 1993, killing 6 people and injuring 1,000. Two years later, a similar attack occurred at the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, killing 168 people and injuring more than 800. Terrorists have also targeted overseas U.S. military personnel with commercial VBIEDs at the Marine barracks in Lebanon (1983), Khobar Towers in Saudi Arabia (1996), and at U.S. embassies in Kuwait (1983), Lebanon (1984), Kenya (1998), and Tanzania (1998). Figure 2 charts the number of worldwide bombings involving commercial trucks or buses since 1997. See appendix II for more information on truck and bus bombing incidents. DHS and DOT share responsibility for securing the commercial vehicle sector. Prior to the terrorist attacks of September 11, 2001, DOT was the primary federal entity involved in regulating commercial vehicles. In response to September 11, 2001, Congress passed the Aviation and Transportation Security Act (ATSA) of 2001, which created TSA and conferred upon it broad responsibility for securing all transportation sectors. In 2002, Congress passed the Homeland Security Act, which established DHS, transferred TSA into DHS, and gave DHS responsibility for protecting the nation from terrorism, including securing the nation's transportation systems.
Although TSA is the lead agency responsible for the security of commercial vehicles, including those carrying hazardous materials, DOT maintains a regulatory role with respect to hazardous materials. Specifically, DOT continues to issue and enforce regulations governing the safe transportation of hazardous materials. In addition, the Homeland Security Act expanded DOT's responsibility to include ensuring the security, as well as the safety, of the transportation of hazardous materials. Accordingly, within DOT, PHMSA is responsible for developing, implementing, and revising security plan requirements for carriers of hazardous materials, while FMCSA inspectors enforce these regulations through reviews of the content and implementation of these security plans. In 2004, based on a recommendation we made, DHS and DOT entered into a memorandum of understanding (MOU) to delineate the agencies' roles and responsibilities with respect to transportation security. In 2006, TSA and PHMSA completed an annex to the MOU related to the transportation of hazardous materials. This annex identifies TSA as the lead federal entity for the security of the transportation of hazardous materials, and PHMSA as responsible for promulgating and enforcing regulations and administering a national program of safety and security related to the transportation of hazardous materials. In addition, the 9/11 Commission Act requires that, by August 2008, DHS and DOT complete an annex to the MOU that would govern the roles of the two agencies regarding the security of commercial motor vehicles. State and local governments also play a key role in securing commercial vehicles. States own, operate, and have law enforcement jurisdiction over significant portions of the infrastructure—including highways, tunnels, and bridges—that commercial vehicles use. Further, state and local governments respond to emergencies involving commercial vehicles, which travel within and through their jurisdictions daily.
Many states also have departments of homeland security with firsthand knowledge of hazardous materials shippers and routing, local smuggling operations, and individuals and groups to be monitored for security reasons. Some states also have fusion centers that collect relevant law enforcement and intelligence information to coordinate the dissemination of alerts and assist in emergency response. State transportation and law enforcement officials also conduct vehicle safety inspections and compliance reviews, sometimes in coordination with FMCSA. Although all levels of government are involved in the security of commercial vehicles, primary responsibility for securing commercial vehicles rests with the individual commercial vehicle companies themselves. Truck and bus companies have responsibility for the security of day-to-day operations. As part of these operations, they ensure that company personnel, vehicles, and terminals—as well as all of the material and passengers they transport—are secured. Some industry officials we interviewed stated that, faced with tight competition, low margins, and, in some sectors, high driver turnover, companies have found devoting resources to security to be a continuing challenge. A variety of national organizations represent commercial trucking and motor coach industry interests. Many of these organizations disseminate pertinent security bulletin information from DHS and DOT to their members. Some have also developed and provided their members with security information and tools—such as security checklists and handbooks—to meet members' security needs. See appendix III for a list of the major industry associations representing the truck and motor coach industries interviewed by GAO. Although ATSA, passed in November 2001, includes numerous requirements for TSA regarding securing commercial aviation, it does not include any specific requirements related to the security of land transportation sectors.
However, with regard to all sectors of transportation, ATSA generally requires TSA to: receive, assess, and distribute intelligence information related to transportation security; assess threats to transportation security and develop policies, strategies, and plans for dealing with those threats, including coordinating countermeasures with other federal organizations; and enforce security-related regulations and requirements. Other legislation, specifically the USA PATRIOT Act and the 9/11 Commission Act, requires TSA to take specific actions to ensure the security of commercial vehicles. The USA PATRIOT Act provides that a state may not issue to any individual a license to transport hazardous materials unless that individual is determined not to pose a security risk. TSA regulations require that drivers who transport hazardous materials undergo a security threat assessment that consists of an evaluation of a driver's criminal history, immigration status, mental capacity, and connections to terrorism to determine if the driver poses a security risk. The 9/11 Commission Act also requires that the Secretary of Homeland Security, by August 2008, submit a report to Congress that includes, among other things, a security risk assessment on the trucking industry, an assessment of industry best practices to enhance security, and an assessment of actions already taken by both public and private entities to address identified security risks. The act also mandates that the Secretary develop a tracking program for motor carrier shipments of hazardous materials by February 2008. With regard to intercity buses, the act requires that the Secretary issue regulations by February 2009 requiring high-risk, over-the-road bus operators to conduct vulnerability assessments and develop and implement security plans.
The act further mandates that the Secretary of Homeland Security issue regulations by February 2008 requiring all over-the-road bus operators to develop and implement security training programs for frontline employees, and that the Secretary establish a security exercise program for over-the-road bus transportation. The act also requires DOT to take specific actions related to the security of commercial vehicles. For example, the act requires that the Secretary of Transportation, by August 2008, analyze the highway routing of hazardous materials and develop guidance to identify and reduce safety and security risks. DOT's PHMSA has issued regulations intended to strengthen the security of the transportation of hazardous materials. The regulations require persons who transport or offer for transportation certain hazardous materials to develop and implement security plans. Security plans must assess the security risks associated with transporting these hazardous materials and include measures to address those risks. At a minimum, the plan must include measures to (1) confirm information provided by job applicants hired for positions that involve access to and handling of hazardous materials covered by the security plan, (2) respond to the assessed risk that unauthorized persons may gain access to hazardous materials, and (3) address the assessed risk associated with the shipment of hazardous materials from origin to destination. The regulations also require that all employees who directly affect hazardous materials transportation safety receive training that provides awareness of security risks associated with hazardous materials transportation and of methods designed to enhance transportation security. Such training is also to instruct employees on how to recognize and respond to possible security threats. Additionally, each employee of a firm required to have a security plan must be trained concerning the plan and its implementation.
DHS funding for commercial vehicle security consists of a general appropriation to TSA for its entire surface transportation security program, which includes commercial vehicles and highway infrastructure, rail and mass transit, and pipeline, as well as appropriations to the Federal Emergency Management Agency (FEMA) for truck and bus security grant programs. Annual appropriations to TSA for surface transportation security for fiscal years 2006 through 2009 are presented in table 1. The number of TSA full-time employees (FTEs) dedicated to highway and motor carrier security—which includes both commercial vehicles and highway infrastructure—has remained at about 19 FTEs annually since fiscal year 2002. TSA estimates that there are approximately 1.2 million commercial trucking companies in the United States. Trucks transport the majority of freight shipped in the United States: by tonnage, 65 percent of total domestic freight; by revenue, 75 percent. According to TSA, 75 percent of U.S. communities depend solely on trucking to transport commodities. Trucks and buses have access to nearly 4 million miles of roadway in the United States. Trucking companies range in size from a single truck to several thousand trucks. According to DOT 2004 data, which are the most current available, 87 percent of trucking companies operated 6 or fewer trucks, while 96 percent operated 20 or fewer. DOT estimates that about 40,000 new commercial trucking companies enter the industry annually. As of August 2008, nearly 11.9 million commercial trucks were registered with DOT. Trucks come in a large variety of configurations and cargo body types to perform a wide range of tasks. Some trucks are used for local tasks such as construction, landscaping, or local package delivery, while others are used for transporting cargo over-the-road or for long hauls. For a more complete summary of DOT data on commercial trucking and bus firms, trucks and buses, and drivers, see appendix V.
The trucking industry is diverse, involving several different sectors and including for-hire and private fleets, truckload and less-than-truckload carriers, bulk transport, hazardous materials, rental and leasing, and others. For-hire firms are those for which trucking is the primary business, while private fleets are generally used to support another business activity, such as grocery chains and construction. According to a 2002 DOT survey, for-hire trucks represented 47 percent of the industry, while private fleets represented 53 percent. While truckload carriers move loads from point to point, less-than-truckload carriers pick up smaller shipments and consolidate them at freight terminals. Bulk transport firms move bulk commodities such as gasoline, cement, and corn syrup in large trailers specifically designed for each type of commodity. Truck rental and leasing companies also are part of the commercial trucking industry. Consumer rental companies rent trucks to walk-in customers for short periods of time and represent 15 percent of the rental and leasing industry. Commercial rental and leasing companies generally lease trucks for a year or longer and account for the remaining 85 percent of the rental and leasing industry. With respect to the transportation of hazardous materials, of an estimated 1.2 million commercial vehicle firms, 60,682 are registered as hazardous materials carriers, or about 5 percent of the commercial vehicle industry, and 1,778,833 drivers are licensed to transport hazardous materials. Hazardous materials are transported by truck almost 800,000 times a day, and 94 percent of hazardous material shipments are by trucks, which transport approximately 54 percent of hazardous materials volume (tons). DOT's PHMSA classifies hazardous materials into 9 different hazard classes.
Most hazardous materials shipments by truck involve flammable liquids such as gasoline (81.8 percent), followed by gases (8.4 percent) and corrosive materials (4.4 percent). Class 6 toxic poisons include Toxic Inhalation Hazards (TIH) but comprise only 0.2 percent of hazardous materials transported by truck. The shipment of security-sensitive hazardous materials such as Toxic Inhalation Hazards is of particular concern to TSA, although the agency estimates that they represent just 0.000058 percent of the commercial vehicle industry. Eighty-one percent of the Toxic Inhalation Hazards transported by truck is anhydrous ammonia and 10 percent is chlorine. Commercial bus companies represent less than 1 percent of the commercial vehicle industry, but according to TSA estimates, carry 775 million passengers annually. Intercity buses, or motor coaches, include buses with regularly scheduled routes, as well as tour and charter bus companies. In August 2008, DOT reported that there were 3,948 motor coach carriers, with 75,285 buses. Of these carriers, fewer than 100 are intercity bus companies, which transport passengers from city to city on scheduled routes, while the remaining carriers operate tour and charter buses. Most bus companies (95 percent) are small operators with fewer than 25 buses. Intercity buses, or motor coaches, serve all large metropolitan areas and travel in close proximity to some of the nation's most visible and populated sites, such as sporting events and arenas, major tourist attractions, and national landmarks. A few intercity bus carriers also travel internationally to Canada and Mexico.
According to a study commissioned by DOT, the accessibility and open nature of the motor coach industry make it difficult to protect these assets, and the level of security afforded to the infrastructure of the motor coach industry is relatively low compared to the commercial aviation sector, despite the fact that the motor coach industry handles more passengers a year than commercial aviation. HSPD-7 directed the Secretary of DHS to establish uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities. Recognizing that each sector possesses its own unique characteristics and risk landscape, HSPD-7 designates Federal Government Sector-Specific Agencies (SSAs) for each of the critical infrastructure sectors to work with DHS to improve critical infrastructure security. On June 30, 2006, DHS released the National Infrastructure Protection Plan (NIPP), which established—in accordance with HSPD-7—a risk-based framework for developing sector-specific strategic plans. The NIPP defines roles and responsibilities for security partners in carrying out critical infrastructure and key resources protection activities through the application of risk management principles. Figure 3 illustrates the several interrelated activities of the risk management framework as defined by the NIPP, including setting security goals and performance targets, identifying key assets and sector information, and assessing risk information including both general and specific threat information, potential vulnerabilities, and the potential consequences of a successful terrorist attack. The NIPP requires that federal agencies use this information to inform the selection of risk-based priorities and continuous improvement of security strategies and programs to protect people and critical infrastructure through the reduction of risks from acts of terrorism.
The NIPP risk management framework consists of the following interrelated activities:

Set security goals: Define specific outcomes, conditions, end points, or performance targets that collectively constitute an effective protective posture.

Identify assets, systems, networks, and functions: Develop an inventory of the assets, systems, and networks that comprise the nation's critical infrastructure, key resources, and critical functions. Collect information pertinent to risk management that takes into account the fundamental characteristics of each sector.

Assess risks: Determine risk by combining potential direct and indirect consequences of a terrorist attack or other hazards (including seasonal changes in consequences, and dependencies and interdependencies associated with each identified asset, system, or network), known vulnerabilities to various potential attack vectors, and general or specific threat information.

Prioritize: Aggregate and analyze risk assessment results to develop a comprehensive picture of asset, system, and network risk; establish priorities based on risk; and determine protection and business continuity initiatives that provide the greatest mitigation of risk.

Implement protective programs: Select sector-appropriate protective actions or programs to reduce or manage the risk identified, and secure the resources needed to address priorities.

Measure effectiveness: Use metrics and other evaluation procedures at the national and sector levels to measure progress and assess the effectiveness of the national Critical Infrastructure and Key Resources protection program in improving protection, managing risk, and increasing resiliency.
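The NIPP does not prescribe a single formula, but risk-based prioritization of this kind is commonly illustrated as a function of threat, vulnerability, and consequence. The following sketch shows how such scores might be combined and ranked; the asset names, scores, and the multiplicative combining rule are hypothetical illustrations, not drawn from TSA or DHS data.

```python
# Illustrative sketch of NIPP-style risk-based prioritization.
# All assets and scores are hypothetical; a common simplification
# treats risk as the product of threat, vulnerability, and consequence.

def risk_score(threat, vulnerability, consequence):
    """Combine the three factors, each scored on a 0-1 scale."""
    return threat * vulnerability * consequence

# Hypothetical assets: (threat likelihood, vulnerability, consequence)
assets = {
    "highway bridge": (0.3, 0.6, 0.9),
    "bus terminal": (0.5, 0.7, 0.6),
    "hazmat route": (0.4, 0.5, 0.8),
}

# Prioritize: rank assets by combined risk, highest first, so that
# protective programs target the greatest mitigation of risk.
ranked = sorted(assets, key=lambda name: risk_score(*assets[name]), reverse=True)
for name in ranked:
    print(f"{name}: {risk_score(*assets[name]):.3f}")
```

A real assessment would of course rest on intelligence-derived threat estimates and field-verified vulnerability and consequence data rather than point scores.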
TSA has taken actions to assess the security risks associated with the commercial vehicle sector, including assessing threats, initiating vulnerability assessments, and developing best security practices, but more work remains to fully assess the security risks of commercial trucks and buses, and to ensure that this information is used to inform TSA's security strategy. Although TSA has completed a variety of threat assessments and is in the process of developing several threat scenarios with likelihood estimates, its key annual threat assessments do not include information about the likelihood of a terrorist attack method on a particular asset, system, or network, as required by the NIPP. However, in September 2008, TSA reported that in response to the 9/11 Commission Act mandate that it submit a risk assessment report on commercial trucking security, it was planning to use threat scenarios with likelihood assessments for highway and motor carriers. TSA has also cosponsored a large number of vulnerability assessments through a pilot initiative in the state of Missouri. However, TSA has made limited progress and has not established a plan or time frame for conducting a vulnerability assessment of the commercial vehicle sector nationwide. Moreover, TSA has not determined how it will address the June 2007 recommendations of the Missouri pilot program evaluation report regarding the ways in which future vulnerability assessments can be strengthened. As a result, the agency cannot ensure that its CSR efforts will fully identify the vulnerabilities of the sector. Standards for internal controls in the federal government require that findings and deficiencies reported in audits and other reviews be promptly reviewed, resolved, and corrected within established time frames. In addition, TSA has not conducted assessments of the consequences of a terrorist attack on the commercial vehicle sector, or developed a plan to conduct sectorwide consequence assessments.
The TSSP calls for a sectorwide approach and strategies for managing security risks, and TSA has identified one of its strategic goals as conducting an inventory of the security status of the nation's highway and motor carrier systems. In addition, standard practices in program and project management call for developing a road map, or a program plan, to achieve programmatic results within a specified time frame or milestones. TSA has not completed a sectorwide risk assessment of the commercial vehicle sector or determined the extent to which additional risk assessment efforts are needed, nor has it developed a plan or a time frame for doing so, including an assessment of the resources required to support these efforts. In addition, TSA has not fully used available information from its ongoing risk assessments to develop and implement its security strategy. As a result, TSA cannot be assured that its approach for securing the commercial vehicle sector is aligned with the highest priority security needs. Moreover, TSA has not completed a report as required by the 9/11 Commission Act on various aspects of commercial vehicle security. TSA has conducted, and continues to conduct, threat assessments of the commercial vehicle sector by reviewing known terrorist goals and capabilities, and is in the process of strengthening its efforts by developing more specific threat likelihood information to inform agency risk assessment efforts. TSA's Office of Intelligence (OI) develops a variety of products identifying the threats from terrorism, from annual threat assessments on each transportation sector to weekly field intelligence summaries and daily briefings. OI also disseminates additional threat and suspicious incident information related to the commercial vehicle sector to key federal and nonfederal stakeholders as needed. To date, these threat assessments have found an increase in truck and bus terrorist incidents abroad and that VBIEDs were the most likely tactic.
TSA OI officials stated that they continue to regard common VBIEDs as a greater threat than attacks using hazardous materials such as chlorine. OI further reported that the July 2005 bus bombing in London demonstrated the capability and intent of terrorists to bomb passenger buses in Western nations. While TSA's threat assessments provide detailed summaries of recent attacks and incidents of interest, and are useful to TSA in informing its strategy for securing commercial vehicles, they do not include information on the likelihood of various types of threats. The NIPP requires that in the context of terrorist risk assessments, the threat component of the analysis be calculated based on the estimated likelihood of a terrorist attack method on a particular asset, system, or network. The estimate of this likelihood is to be based on an analysis of the intent and capability of a defined adversary, such as a terrorist group. However, TSA has not included likelihood estimates in its annual threat assessments for the highway and motor carrier sector. In 2006, TSA developed rankings of the likelihood of various tactics—such as attacks using VBIEDs, VBIED-assisted hazardous materials, and other threats—for highway and commercial vehicles. However, TSA subsequently excluded these likelihood assessments from its 2008 annual threat assessment for the highway sector and did not provide us with the rationale for this decision. OI told us that it developed likelihood estimates for specific threat scenarios used in the draft National Transportation Sector Risk Assessment (NTSRA). The NTSRA is being conducted by TSA to assess risks across the entire U.S. transportation system and contains nine high-level scenarios and threat likelihood estimates related to commercial vehicles. Of these high-level scenarios, eight involve VBIEDs, and one involves hazardous materials.
OI rated the intent and capability of terrorists to perform each threat scenario to provide its estimate of the relative likelihood of each scenario. However, TSA officials could not identify when the NTSRA will be finalized. In addition, in June 2008, OI reported that it would provide likelihood assessments for threat scenarios that were to be conducted in response to a mandate in the 9/11 Commission Act that DHS submit a risk assessment report on the commercial trucking sector. While more extensive threat scenarios are being developed for the commercial vehicle sector, including likelihood estimates, TSA's annual threat assessments do not include information on the likelihood of threats. TSA Highway and Motor Carrier (HMC) officials stated that this lack of specific threat information continues to challenge agency risk managers. Without more information on the likelihood of the various threats, there is limited assurance that TSA is focusing its efforts on the activities that pose the greatest threat. Officials stated that they may incorporate likelihood estimates in the annual highway and motor carrier threat assessments in the future, but did not have specific plans to do so. TSA has begun conducting vulnerability assessments of the commercial vehicle sector, but its efforts are in the early stages. In addition, the agency has not determined the extent to which additional vulnerability assessments are needed, and does not have a strategy or time frame for assessing sectorwide vulnerabilities. HSPD-7 requires each Sector-Specific Agency to conduct or facilitate vulnerability assessments of its sector. In addition, the NIPP states that DHS is responsible for ensuring that comprehensive vulnerability assessments are performed for critical infrastructure and key resources that are deemed nationally critical, and the TSSP further emphasizes a sectorwide, system-based approach to risk management.
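OI's scenario approach of rating terrorist intent and capability to derive a relative likelihood can be sketched as follows. The scenarios, the 1-5 rating scales, and the multiplicative combining rule below are hypothetical assumptions for illustration, not OI's actual methodology or data.

```python
# Hypothetical illustration of deriving a relative likelihood estimate
# from intent and capability ratings. Scenario names, scales, and the
# combining rule are illustrative assumptions.

scenarios = {
    # scenario: (intent rating, capability rating), each 1 (low) to 5 (high)
    "VBIED at bus terminal": (5, 4),
    "VBIED on highway bridge": (4, 4),
    "hazmat release via truck": (3, 2),
}

def relative_likelihood(intent, capability):
    # One simple rule: product of the two ratings, normalized to 0-1.
    return (intent * capability) / 25.0

# Rank scenarios from most to least likely under this rule.
for name, (i, c) in sorted(scenarios.items(),
                           key=lambda kv: relative_likelihood(*kv[1]),
                           reverse=True):
    print(f"{name}: {relative_likelihood(i, c):.2f}")
```

Even a simple ordinal ranking of this kind would give risk managers the relative-likelihood information that, as HMC officials noted, the annual threat assessments currently lack.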
To determine the vulnerability of commercial vehicles as targets or as weapons to attack critical infrastructure in the United States, TSA has begun conducting vulnerability assessments known as Corporate Security Reviews (CSRs). TSA initiated the CSR program in November 2005 to: (1) develop best practices for securing the commercial vehicle industry through discussions with carrier representatives and site visits to carrier facilities; (2) collect and maintain data that will allow TSA HMC to assess various aspects of security across the trucking and motor coach industries through statistical analysis of survey data; (3) identify security gaps and opportunities for improvement; (4) promote security awareness and collaboration with the commercial vehicle industry; (5) provide guidance to motor carriers on their relative level of risk exposure; and (6) determine the costs and benefits of risk mitigation activities. As of September 2008, TSA had conducted 100 CSRs of motor carriers, including 15 motor coach companies, 20 school bus companies/districts, and 65 trucking companies. These CSRs were of large firms that were identified by industry stakeholders as having the best security practices in the industry and that agreed to participate in the CSRs on a voluntary basis. TSA conducts these reviews by sending teams of two to four people from TSA headquarters to a trucking or bus company, for one or two days, to analyze the company's security plan and mitigation procedures, and make informal recommendations to strengthen security based on a draft of best security practices TSA developed. At the conclusion of the CSRs, TSA prepares summary reports of its findings and informal recommendations.
TSA also developed draft best security practices in February 2006 for trucking firms based on the results of early CSRs, as well as on TSA staff expertise, industry stakeholder input, and best security practices from other transportation sectors such as rail and pipeline, according to officials. These draft best practices include measures companies can take to conduct threat, vulnerability, and consequence assessments. They also provide guidance on developing a security plan and strengthening personnel security, training, hazardous materials storage, physical security countermeasures, cyber security, and emergency response exercises. However, according to TSA officials, the agency has delayed issuing these draft best practices in final form until it can complete and incorporate public and industry comments on draft security guidance specifically for carriers of hazardous materials. The 9/11 Commission Act requires that DHS, by August 2008, submit a report to Congress that includes, among other things, an assessment of trucking industry best practices to enhance security. TSA reported that as of September 2008, it had not finalized these best practices, but officials hoped to complete a template within 4 months. Officials stated that they plan to develop a flexible list of best practices that firms can adapt based on their line of work, size, and circumstances. TSA began a second CSR effort in April 2006 through a pilot project with the state of Missouri, which greatly expanded the number of firms reviewed and extended the reviews to smaller, more diverse firms. Objectives of the pilot were to promote security awareness, collect information on the security status of participating firms, and promote public and private collaboration among federal, state, and private sector stakeholders.
TSA partnered with the State of Missouri, FMCSA's Motor Carrier Safety Assistance Program, and the Commercial Vehicle Safety Alliance (CVSA) to train Missouri state safety inspectors to conduct these CSRs. DOT funded the CSRs and assisted Missouri in the selection of firms to be reviewed and interviewed. The CSRs performed by TSA headquarters staff were of large companies known to have more robust security measures in place, while the Missouri CSRs were generally conducted on the small firms that are most common in the industry. Reviewing the security practices of these small firms can require inspectors to travel to remote locations all over the state. For example, one Missouri CSR we attended assessed a small landscaping company with 12 trucks, while another CSR assessed an owner-operator with a single truck in front of his house (see fig. 4). Although these reviews remained voluntary, they were conducted in conjunction with mandatory safety reviews that Missouri inspectors routinely conduct on commercial vehicle trucking and motor coach firms. Motor carriers were selected for Missouri CSRs based on their safety records as evaluated by FMCSA or on their status as newly registered firms. TSA officials stated that partnering with the state's safety inspections enabled TSA to review a more diverse group of firms than it did during the original CSRs. Typically, the Missouri pilot CSRs involved site visits with structured interviews using a questionnaire based on TSA's draft best security practices, and generally lasted less than an hour, compared to the one or two days of the original CSRs. The Missouri CSR pilot concluded in February 2007; however, TSA has continued to partner with Missouri and FMCSA to implement a permanent CSR program in the state. TSA told us that as of September 2008, 3,420 CSRs had been completed in Missouri.
In September 2006, TSA awarded a contract to evaluate the extent to which the Missouri CSR pilot program met its objectives, and whether the firms reviewed had implemented effective security measures. The report reviewed the 1,251 CSRs conducted by Missouri inspectors from April 2006 through February 2007, including 1,231 trucking companies (98.4 percent), 18 motor coach companies (1.4 percent), and 2 school bus operators (0.2 percent). The evaluation reviewed each firm's responses to the CSR questionnaire and assigned it an overall security score based on the security measures the firm reported having in place that were consistent with TSA's draft best security practices. The contractor reported on the results of the study in June 2007 and concluded, among other things, that the interviewed carriers did not have extensive security procedures in place; small carriers and owner-operators had implemented fewer security measures than larger carriers; and hazardous materials carriers identified by the contractor had implemented most of the security measures on the TSA CSR questionnaire. The evaluation report also found that while both motor coaches and nonpassenger motor carriers had low scores, motor coaches scored somewhat higher than nonpassenger motor carriers. The report concluded that the program had achieved its objectives of promoting security awareness, collecting information on the security status of participating commercial vehicle firms, and promoting collaboration among federal, state, and private sector stakeholders. However, the report also concluded that the Missouri sample was not representative of the commercial vehicle industry in Missouri or of the industry nationwide. The report further concluded that since the CSRs were based on best practices developed for much larger firms, the CSR data did not completely reflect overall security practices and capabilities for small carriers.
Missouri officials we interviewed concurred that the CSR sample was not representative of Missouri firms since the majority of carriers that do not encounter safety problems would not be included in their CSR reviews. The evaluation report of the Missouri CSR pilot made a number of recommendations to TSA to expand and improve the CSR program. These recommendations included that TSA:
- review and address CSR pilot program deficiencies;
- develop a set of best practices and baseline security standards that are risk-based and appropriate for different sizes and types of firms;
- improve the CSR questionnaire to make it more effective in capturing security practices and vulnerabilities of both small and large carriers;
- develop a deployment strategy to expand the Missouri pilot program to other carriers and other states;
- develop a statistically sound methodology for selecting companies for CSRs as it evaluates the commercial vehicle industry nationwide by conducting a random sample of motor carriers;
- work with FMCSA to leverage each other's resources and possibly merge security inspection programs; and
- develop a CSR Web portal to provide a more tailored CSR questionnaire to address different industry sector security needs.
Two years after these recommendations were made, TSA has taken limited steps to implement them, although officials stated that they were continuing to review the recommendations. As a result, the agency cannot ensure that its CSR efforts will fully identify sector vulnerabilities. Standards for internal control in the federal government require that findings and deficiencies reported in audits and other reviews be promptly reviewed, resolved, and corrected within established time frames. The Missouri evaluation report's recommendation that TSA develop a statistically sound methodology for selecting companies to review was consistent with TSA's original goal that CSRs collect data that enable statistical analysis.
In September 2008, TSA officials stated that they had worked out agreements with Michigan and Colorado to begin conducting CSRs in these states, beginning with training officers in October 2008. However, TSA did not have a plan in place or time frame for assessing industry-wide vulnerabilities. The lead official for risk assessment with TSA HMC stated that the agency would like to conduct a vulnerability assessment of a valid nationwide sample of the commercial vehicle industry, but that it lacked the resources to do so. Officials also stated that to further expand its CSR efforts, TSA has initiated a program to train Federal Security Director (FSD) personnel at 3 airports to conduct CSRs on commercial vehicles in the airports' surrounding areas. Officials told us that FSDs had completed 5 CSRs during fiscal year 2008. Without completing industry vulnerability assessments as required by HSPD-7 and the NIPP, TSA cannot complete an overall assessment of the industry's security risks. For example, instead of assessing the vulnerabilities of the entire commercial vehicle sector, at the direction of TSA management, TSA HMC is currently focusing all of its CSR efforts on the hazardous materials transportation sector. However, TSA's pilot study of Missouri firms found that the hazardous materials transportation companies reviewed by the contractor performed much better than other companies in terms of implementing security measures to mitigate potential vulnerabilities. TSA has collected some relevant information necessary for estimating the impact of potential attacks involving the commercial vehicle sector, but has not conducted consequence assessments of potential terrorist attacks or leveraged the consequence assessment efforts of others. The DHS NIPP defines consequence as the worst reasonable adverse impact of a successful terrorist attack.
According to the NIPP, risk assessments should include consequence assessments to measure the negative effects on public health and safety, the economy, public confidence in institutions, and the functioning of government that can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack. TSA's TSSP also requires that risk analysis include a consideration of consequences. Terrorism involving commercial vehicles can affect a broad range of targets, including not only trucks and buses, but also freight and passengers, terminals, truck stops, and rest areas. In addition to the commercial vehicle system being attacked, commercial vehicles can be used to attack other assets. When used as VBIEDs with explosives or fuel, for example, commercial vehicles can be used to target highways, buildings, and other critical infrastructure. A powerful truck bomb can cause destruction from a considerable distance. For example, Khobar Towers was attacked from 80 feet away (fig. 5). Truck VBIED attacks can also target large numbers of people, as was the case with the coordinated attack of several truck bombs in Northern Iraq on August 14, 2007, that killed approximately 500 people, or can be used to assassinate individuals, such as the former Lebanese Prime Minister Rafik Hariri. Worldwide, buses have been the target of bombings, some involving suicide bombers, on numerous occasions, such as the attack on former Prime Minister Benazir Bhutto at a mass rally in Pakistan. TSA officials stated that they cannot conduct consequence assessments of the commercial vehicle sector because truck bombs can be used to attack most of the nation's critical infrastructure. Accordingly, officials stated that the number of potential consequences of terrorist attacks is too great to practically assess.
Although TSA has not conducted consequence assessments of the commercial vehicle sector, the agency has acquired data that could be applied to future consequence assessments from the Bureau of Alcohol, Tobacco and Firearms (ATF) and the U.S. Army on evacuation distances for various-sized shipments of explosives and flammable substances, as well as from PHMSA's Emergency Response Guidebook for first responders to hazardous materials incidents. TSA officials acknowledged that obtaining data on evacuation distances is only a first step in conducting consequence assessments. Evacuation distance provides one measure of the potential consequences of a terrorist attack by defining the danger zone surrounding an attack by a particular type and size of explosive or flammable material. For example, according to U.S. Army data, the minimum building evacuation distance for a worst-case truck bomb would be 1,570 feet, and the minimum outdoor evacuation distance for people would be 7,000 feet. Using another example, a fireball from a fuel truck can threaten both structures and people; accordingly, ATF guidance suggests a minimum evacuation distance of 6,500 feet. In comparison, a tank truck of anhydrous ammonia, which represents 81 percent of Toxic Inhalation Hazard (TIH) shipments, has a smaller recommended standoff distance of 2,112 feet, and the recommended standoff distance for chlorine, which is the next most common Toxic Inhalation Hazard, is 3,168 feet. However, other guidance, such as PHMSA's Emergency Response Guidebook, provides different data based on initial isolation distances and much larger maximum nighttime protective action distances. TSA reported that it is working with various federal partners and industry stakeholders to establish a uniform and scientific assessment of the potential consequences of VBIEDs and the discharge of TIH materials.
Although TSA has not conducted consequence assessments of the commercial vehicle sector, OI officials stated that, in their judgment, the likely consequences of common VBIED attacks were greater than those of VBIED attacks using TIH materials because attempts to date to use VBIEDs to vaporize chlorine into a gaseous inhalation hazard have been largely unsuccessful, have caused little damage, and have resulted in few casualties. On the other hand, according to officials, VBIEDs using a number of different explosives and incendiary materials have repeatedly been used successfully to kill people. TSA officials stated that the agency also has not leveraged DHS's ongoing nationwide risk assessment efforts to obtain consequence information. For example, recognizing that each sector of our country's critical infrastructure possesses its own unique characteristics, operating models, and risk landscape, pursuant to HSPD-7, the NIPP designates 18 critical infrastructure sectors and the agencies responsible for working with DHS to implement a risk management framework and develop protective programs for each sector. Each of the 18 sectors has issued Sector Annual Reports (SARs) on its risk management activities, including consequence assessments, which HMC could draw upon to support the assessment of VBIED and hazardous materials consequences for other critical infrastructure sectors. For example, the 2007 sector annual reports identified the following for select sectors:
- Commercial Nuclear Power Sector: The Department of Energy employs a Comprehensive Review Program to analyze facilities that it considers potential terrorist targets. The Nuclear Sector Annual Report indicated that as of May 2007, reviews had been completed of the vulnerabilities and potential consequences of an attack on 52 of 65 commercial nuclear reactors.
- Dams Sector: The 2007 Dams Sector Annual Report identified that all security measures were in place at 152 of 254 Army Corps of Engineers dams, and the Federal Energy Regulatory Commission reported having completed risk assessments on its 1,200 most security-sensitive dams. The report also called for improved blast-damage estimates for VBIEDs on certain dams and levees that are potential targets for terrorist attacks.
- Chemical Sector: The 2007 Chemical Sector Annual Report, which was based in part on industry risk assessments, identified that VBIEDs are a particular concern because of their portability, size, and potential to cause grave damage.
In addition, DHS's 2007 Strategic Homeland Infrastructure Risk Assessment (SHIRA) assessed the highest risk scenarios targeting the nation's 18 critical infrastructure/key resources sectors, and highlighted attack methods with cross-sector implications. The SHIRA used threat assessments from the intelligence community and vulnerability and consequence assessments from the SSAs to identify the attack methods that pose the highest risk to the respective sectors. TSA HMC could use the SHIRA data to identify which sectors are most at risk from VBIEDs and hazardous materials and then coordinate with those SSAs on their vulnerability and consequence assessment efforts. TSA HMC could also use a variety of other relevant assessments to obtain consequence information. These include the agency's Aviation Domain Risk Assessment, which also considers consequences for a wide range of attack scenarios including VBIEDs; the Department of Energy's risk assessments of nuclear weapons facilities; and the Nuclear Regulatory Commission's assessments of commercial nuclear power plants. Similar information is also available from the Federal Risk Assessment Working Group, a federal risk assessment information clearinghouse that shares information about completed and ongoing risk assessments through regular meetings and a Web portal.
TSA did not comment on why it has not developed a plan for completing consequence assessments, or why it was not leveraging the analysis of potential consequences included in these risk assessments. As discussed earlier in this report, TSA has identified one of its strategic goals as taking an inventory of the security status of the nation's highway and motor carrier systems, but it has not developed a plan or a time frame for completing a risk assessment of the commercial vehicle sector. Based on general guidance in the NIPP, the TSSP states that TSA's plan for risk assessment should use a combination of both expert and field-level risk assessment techniques to guide its risk management efforts. Expert risk assessments are based on national risk priorities and strategic risk objectives, scenario analyses and the expert judgment of agency officials, national assessments, and annual threat assessments. Field-level risk assessments include state and local assessments, and field inspections such as TSA's CSRs and DOT Security Contact Reviews (SCRs). Expert assessments and field assessments share the same goal of identifying where risk mitigation measures are most needed. As previously discussed, TSA is conducting nine high-level scenario analyses related to commercial vehicles, and has contracted to have more threat scenarios conducted to assess commercial trucking security risks in response to a mandate in the 9/11 Commission Act. While these expert assessments, if implemented effectively, should give TSA insights into the security risks of the industry, they will likely provide limited information on what sectors or companies are most at risk and what mitigation practices are currently in place, unless they are further supported by field-level risk assessments consistent with the TSSP. As stated previously, TSA is in the early stages of conducting CSRs, and the majority of CSRs to date have been conducted in a single state, Missouri.
Although TSA is working to expand both its threat scenarios and CSRs, progress to date has been limited. TSA also has not reported on the scope and method of risk assessments required for the commercial vehicle sector. Specifically, it has not reported what mix of expert and field-level risk assessments it intends to use and how it plans to integrate the two. Standard practices in program and project management include developing a road map, or program plan, to achieve programmatic results within a specified time frame or milestones. TSA officials recognize that the agency needs more complete and accurate risk assessment information to inform its security strategy. However, TSA has not developed a plan or a time frame for completing a risk assessment of the commercial vehicle sector, including the level of resources required to complete the assessment and the appropriate scope of the assessment, including the combination of threat scenarios and field-level vulnerability assessments it intends to use. The NIPP requires that it and the TSSP undergo periodic interim updates as required and be reviewed and reissued every 3 years, or more frequently as needed and directed by the Secretary of Homeland Security. Accordingly, the TSSP states that it will undergo periodic updates and eventually align with the NIPP triennial update cycle. The Highway Infrastructure and Motor Carrier Modal Annex also states that the Government Coordinating Council (GCC) and SCC are to submit revisions to the annex on an annual basis, and that the GCC and SCC are to conduct a complete revision of the annex every 3 years. HMC began its revision process by updating the annex in 2008, to allow time for the revised strategy to be reviewed by the GCC, SCC, and various working groups, and plans to submit it for review by the third quarter of 2009.
The quality of this and future revisions of the annex will depend in large measure on the progress of risk assessments of the commercial vehicle sector and their use by TSA managers to inform risk mitigation efforts. HMC officials stated that without complete risk assessments, they were directed by TSA and DHS leadership to base their strategy for securing the commercial vehicle sector on an examination of the security risks posed by the shipment of hazardous materials. However, agency officials could not explain why TSA and DHS leadership made this distinction, and the rationale for this directive is unclear. HMC officials also cited several additional reasons for focusing their security efforts on commercial vehicles transporting hazardous materials, including the professional judgment of staff with experience in the motor carrier industry; risk assessments TSA conducted for other transportation sectors, particularly rail; and legislative requirements, in particular the USA PATRIOT Act. However, the applicability of rail risk assessments to highways is unclear because trucks used as VBIEDs can directly access and attack most buildings in the United States, whereas rail cannot. Rail shipments also typically move freight, including Toxic Inhalation Hazards, in far larger quantities than can be carried on a truck. Regarding congressional direction, the USA PATRIOT Act required TSA to perform a background check on all applicants for an endorsement of their commercial driver's licenses to allow them to carry hazardous materials, but did not direct TSA to focus its commercial vehicle security efforts on hazardous materials. Moreover, available risk assessment information suggests alternatives or additions to the agency's current focus on commercial vehicle transport of hazardous materials. TSA OI officials have consistently reported that VBIEDs are a greater threat to the United States than hazardous materials, including Toxic Inhalation Hazards.
In addition, the evaluation of the Missouri CSR pilot found that truck companies that transport hazardous materials stood out from other truck companies as having implemented most of TSA's security procedures, and concluded that companies transporting hazardous materials were the security leaders of the commercial vehicle sector. Further, in October 2007, DHS Secretary Chertoff stated that IEDs remained a terrorist weapon of choice since they were easy to make, difficult to defend against, and could cause untold destruction. TSA OI officials stated that they continue to regard common VBIEDs as a greater threat than attacks using hazardous materials such as chlorine. Evacuation data also suggest that VBIEDs can have a potentially broader impact than trucks carrying many forms of Toxic Inhalation Hazards. Without a strategy based on available risk assessment information, TSA cannot be assured that its current approach, which is focused on hazardous materials, is aligned with the highest priority security needs of the commercial vehicle sector. Key government and industry stakeholders have taken actions to strengthen the security of the commercial vehicle sector, but TSA has not assessed the effectiveness of its actions. At the federal level, DHS and DOT have implemented a number of programs designed to strengthen commercial vehicle security, particularly programs for the protection of hazardous materials. States, individually and collectively through their state transportation and law enforcement associations, have also worked to strengthen the security of commercial vehicles. In addition, most of the private truck and motor coach industry associations we contacted stated that they were assisting their members in strengthening security by providing those members with guidance on best practices.
TSA also contracted for an evaluation of the Missouri pilot CSRs, which found that industry security practices were not extensive, but noted that the sample of firms in the pilot was not representative of the entire industry. Our site visits to 26 commercial truck and bus companies found that most had implemented basic security measures, including some form of personnel security and background checks, terminal security, locks and access controls, trailer seals, and communications and tracking equipment. TSA has begun developing output-based performance measures to gauge progress on achieving milestones and other program activities for its security programs, but the agency has not developed measures and data to monitor outcomes, that is, the extent to which these programs have mitigated security risks and strengthened commercial vehicle security. The TSSP states that performance measures of strategic goals and objectives should be outcome-based, but notes that interim output measures may be used during the early years of a program while baseline data on the program's performance are being acquired. Without more complete performance measures, TSA will be limited in assessing the effectiveness of federal commercial vehicle security programs. TSA officials agreed that opportunities exist to develop outcome-based performance measures for its commercial vehicle security programs, and stated that they would like to do so in the future.
Overall, these programs are designed to assess commercial vehicle industry security risks, develop guidance on how to prevent and deter attacks, improve security planning for an effective response to a potential terrorist attack, enhance cost-effective risk mitigation efforts, and support research on commercial vehicle security technology. States, both individually and as members of transportation alliances with other states, have expanded their activities to secure the commercial vehicle sector as part of broader homeland security activities. In addition, many commercial vehicle companies receive guidance on security awareness and best practices from industry associations. According to TSA's pilot study of CSRs in Missouri, except for firms transporting hazardous materials, most commercial vehicle companies have implemented a limited number of security measures. In addition to CSRs, TSA and other DHS components have a number of programs underway designed to strengthen the security of commercial vehicles: the Truck Security Grant Program (TSP), the Intercity Bus Security Grant Program, Security Action Items (SAIs), and the Hazardous Materials Driver Background Check Program. The TSP provides grants that fund programs to train and support drivers, commercial vehicle firms, and other members of the commercial vehicle industry in how to detect and report security threats and how to avoid becoming a target of terrorist activity. The TSP is administered by the Grant Programs Directorate of DHS's Federal Emergency Management Agency. From fiscal years 2004 through 2008, the principal activity funded by the TSP was the American Trucking Associations' Highway Watch Program, which provided drivers with security awareness training and support. In May 2008, however, a new grantee was selected. DHS also established an Intercity Bus Security Grant Program to distribute grant money to eligible stakeholders for protecting intercity bus systems and the traveling public from terrorism.
Current priorities focus on enhanced planning, passenger and baggage screening programs, facility security enhancements, vehicle and driver protection, and training and exercises. In addition, TSA is consulting with industry stakeholders and PHMSA to develop SAIs, or voluntary security practices and standards, intended to improve security for trucks carrying security-sensitive hazardous materials. The SAIs are intended to allow TSA to communicate the key elements of effective transportation security to the industry as voluntary practices, and TSA will use CSRs to gauge whether voluntary practices are sufficient or whether regulation is needed. TSA released its voluntary SAIs for hazardous materials carriers in June 2008. For example, it recommended using team drivers for shipments of the most security-sensitive explosives, toxic inhalation hazards, poisons, and radioactive materials. The USA PATRIOT Act, passed in October 2001, prohibited states from issuing Hazardous Materials Endorsements (HME) for a commercial driver's license to anyone not successfully completing a background check. In response, DHS developed rules regarding how the background checks would be conducted and implemented a hazardous materials driver background check assessment program to determine whether a driver poses a security risk. We have previously reported on the problem of drivers, including hazardous materials drivers, who have job-hopped to circumvent the drug testing results associated with background checks. As of October 2008, TSA had completed background checks for 990,961 of approximately 2.7 million hazardous materials drivers, and 8,699 applicants had been denied HMEs since the beginning of the program. In addition to DHS, at the federal level, DOT has several commercial vehicle security programs underway: Security Contact Reviews (SCRs), Security Sensitivity Visits (SSVs), and the Hazardous Materials Safety Permit Program.
FMCSA conducts SCRs, or compliance reviews, of commercial vehicle firms carrying hazardous materials. PHMSA regulations require shippers and carriers of certain hazardous materials to develop and implement security plans. At a minimum, these plans must address personnel, access, and en route security. FMCSA SCRs review company security plans as part of ongoing safety inspections. FMCSA also conducts SSVs, or educational security discussions, with carriers of small amounts of hazardous materials that do not require posting hazardous materials placards on their trucks. As of September 2008, FMCSA had conducted 7,802 SCRs and 13,411 SSVs since the inception of the programs. Federal law also directed DOT to implement the Hazardous Materials Safety Permit Program to provide a safe and secure environment for the transport of certain types of hazardous materials. The Hazardous Materials Safety Permit Program requires certain motor carriers to maintain a security program and establish a system of en route communication. In addition to CSRs, TSA and DOT also work collaboratively on several projects involving the security of commercial vehicles, including FMCSA and TSA research and development efforts for commercial vehicle security technologies. Both FMCSA and TSA have also completed pilot studies of tracking systems for commercial trucks carrying hazardous materials. For example, FMCSA completed a study of existing technologies in December 2004 evaluating wireless communications systems, including global positioning satellite tracking and other technologies that allow companies to monitor the location of their trucks and buses. TSA is testing tracking and identification systems, theft detection and alert systems, motor vehicle disabling systems, and systems to prevent unauthorized operation of trucks and unauthorized access to their cargoes.
The 9/11 Commission Act requires that DHS provide a report to Congress by August 2008 that includes, among other things, assessments of (1) the economic impact that security upgrades of trucks, truck equipment, or truck facilities may have on the trucking industry, including independent owner-operators; (2) ongoing research by public and private entities and the need for additional research on truck security; and (3) the current status of secure truck parking. TSA officials stated that they are working on developing this report but have not completed it. The 9/11 Commission Act also required that DHS develop a tracking program for motor carrier shipments of hazardous materials by February 2008. TSA officials reported that they worked with DOT and implemented a program to facilitate truck tracking in January 2008. However, TSA stated that while the 9/11 Commission Act mandated the tracking program and authorized $21 million over 3 years for its activities, the mandated program itself was never implemented because no funds were appropriated for it. The 9/11 Commission Act also had a number of mandates regarding the security of over-the-road buses, including that DHS issue regulations by February 2008 requiring all over-the-road bus operators to develop and implement security training programs for frontline employees, and that DHS establish a security exercise program for over-the-road bus transportation. The 9/11 Commission Act further requires that DHS issue regulations by February 2009 requiring high-risk over-the-road bus operators to conduct vulnerability assessments and develop and implement security plans. TSA officials stated that they were preparing a Notice of Proposed Rulemaking that, if finalized, would require high-risk over-the-road bus operators to conduct vulnerability assessments and develop security plans and training plans.
States are responsible for securing highway infrastructure, including highways, bridges, and tunnels, and for ensuring the security and safety of these roadways. State officials work on security issues within their individual states and with other states through several national associations. State transportation officials, through the American Association of State Highway and Transportation Officials (AASHTO), and state law enforcement officials, through the Commercial Vehicle Safety Alliance (CVSA), have worked collectively to strengthen the security of commercial vehicles and highway infrastructure through various expert committees and the implementation of joint initiatives with TSA and DOT. AASHTO formed a Special Committee on Transportation Security that has sponsored highway and commercial vehicle security research at the National Academies of Science. AASHTO also conducts surveys of state DOT security efforts, priorities, and identified needs. AASHTO's August 2007 survey found that many state departments of transportation still needed basic training on integrating homeland security considerations into the planning process; detecting, deterring, and mitigating homeland security threats; and assessing transportation network homeland security vulnerabilities and risks. CVSA's state law enforcement members have also organized committees on Transportation Security, Information Systems, Intelligent Transportation Systems, Hazardous Materials, Passenger Carrier, and Training to pool expertise and promote best practices, new programs, and the consistent application of regulations. For example, the purpose of CVSA's Transportation Security Committee is to enhance homeland security by providing a forum to identify, develop, implement, and evaluate education, enforcement, and information-sharing strategies for enhancing commercial motor vehicle security. CVSA's Program Initiatives committee originated the idea of conducting a CSR pilot in Missouri.
We interviewed transportation, law enforcement, and homeland security officials responsible for commercial vehicle security from eight states to determine the nature and extent of their security efforts. These officials stated that they generally focused on law enforcement, protection of highway infrastructure, conducting inspections of commercial vehicles, and monitoring threats of all kinds. Officials in each state stated that they understood the major transportation security risks in their state. For example, officials from one state that has numerous chemical plants expressed particular concern about the shipment of these chemicals, while officials from another state with extensive military bases expressed concern about shipments of nuclear weapons and waste. Officials from yet another state with numerous explosives plants were more concerned about the transportation of explosives. State and local authorities have also created 58 fusion centers around the country to blend relevant law enforcement and intelligence information analysis and coordinate federal, state, and local security measures in order to reduce threats in local communities. DHS analysts work with state and local authorities at these fusion centers, and DHS has provided staff and more than $254 million to state and local governments to support the centers and facilitate the two-way flow of information on all types of hazards between DHS and the states. Although states have a number of security efforts involving the commercial vehicle sector, none of the state officials whom we interviewed (with the exception of those from Missouri) reported conducting formal vulnerability assessments of the commercial vehicle sector in their states. Industry associations we interviewed were actively assisting their members in strengthening the security of the commercial vehicle sector.
We met with 12 of the industry associations representing the commercial vehicle industry, including trucking, motor coaches, shipping, and unions, 9 of which were members of TSA's SCC. TSA relies on the SCC and its industry association members to facilitate communications between the agency and the commercial vehicle industry, and to assist in the development of sector strategies, plans, and policies. Eight of these industry associations reported that they regularly provided federal officials with their industry's perspective on proposed regulations and legislation. Additionally, 8 of the 12 associations reported that they were proactively providing security guidance to their members, which included guidance on security best practices, security awareness, and security self-assessments. In addition, about a third of the associations we reviewed reported providing training, security bulletins, and 24-hour hotlines for their members. TSA supports several of these industry initiatives, including working with trade associations to develop and distribute security brochures for their members. As discussed earlier in this report, the Missouri CSR Pilot evaluation showed that firms carrying hazardous materials were complying with regulations and implementing more security measures to mitigate their risks than other commercial vehicle firms. In contrast, the study further found that truck companies not transporting hazardous materials were implementing few of TSA's best security practices. During our site visits to 20 truck and 6 bus companies, ranging in size from the nation's largest commercial vehicle company with 27,453 trucks to an owner-operator with a single truck, we found that most had some form of personnel security procedures and background checks in place, as well as terminal security, communications systems, and truck tracking systems.
Overall, the types of security practices among the commercial trucking companies we visited were similar, but the prevalence and sophistication of these practices varied. The range of security practices that companies were using included requiring drivers to lock doors and inspect cargo; using cargo seals; conducting driver background checks; deploying vehicle tracking technology; installing terminal fencing, cameras, and gates; implementing access controls, such as employee identification badges, sign-in and sign-out sheets, or electronic key cards; taking en route security measures; and providing driver training. Large corporations and small one-truck owner-operators generally used differently scaled security approaches to the same problem. For example, while a cell phone can suffice for the communications needs of a small operator, a large company may invest in integrated communications and tracking technologies. Conversely, where a large company may have a well-lit, gated terminal monitored by security cameras and guards, a small operator may lock the door of the vehicle and have a watchdog on the premises. In another example, small, independent owner-operator firms may rely solely on emergency responders such as 911 and state patrol hotlines, while larger firms may have dispatchers and in-house security specialists on duty 24 hours a day. TSA has begun developing measures that gauge the completion of its program activities, but could improve its efforts by collecting data that would measure the effectiveness of its programs in strengthening commercial vehicle security. Performance measures are indicators, statistics, or metrics used to gauge program performance. Output measures summarize the direct products and services delivered by a program, while outcome measures try to gauge the results of the products and services delivered by a program.
TSA has begun developing and using performance measures to assess the progress of commercial vehicle security programs, but does not have outcome data to monitor how effectively its programs are achieving their intended purpose, as suggested by GPRA. The TSSP also states that performance measures of strategic goals and objectives should be outcome-based, but notes that interim output measures may be used during the early years of the program while baseline data on the program's performance are being acquired. The TSSP also requires that TSA form a Performance Measurement Joint Working Group to recommend the appropriate mix of output and outcome measures for agency programs and outcome monitoring techniques, and to standardize measures across transportation sectors. As of August 2008, TSA had formed the transportation sectorwide working group, and according to officials the group was instrumental in developing and reporting on the transportation sector's core, programmatic, and partnership metrics required by the NIPP. However, the joint measurement group for the highway and motor carrier sector had not been formed to develop outcome measures for commercial vehicle security programs. Currently, TSA's Highway and Motor Carrier (HMC) division collects performance data on its own programs, while other commercial vehicle security programs are monitored by other DHS or DOT components. At our suggestion, TSA officials stated that they plan to work out an agreement with DOT to receive performance measurement data for DOT security programs, noting that performance data for these programs are important and necessary for an overall view of the impact of federal security programs. TSA officials stated they would request that TSA and DOT share performance measurement data for commercial security programs as the DHS and DOT MOU is updated. The annex to improve coordination and data sharing between TSA and FMCSA was signed in October 2008.
Table 2 summarizes the various federal commercial vehicle security programs and the agency responsible for administering the program and measuring its progress. TSA's HMC established output measures for all five of its commercial vehicle security programs to assist the agency in gauging the performance of these programs. As of September 30, 2008, TSA reported that it had completed:

- 100 percent of the target goal of 24 CSRs per year;
- 100 percent of the SAI goal of developing voluntary guidelines to reduce risk and enhance the security of high-risk hazardous materials;
- 52 percent of hazardous materials driver's license endorsement security threat assessment background checks; and
- 100 percent of the work in developing a pilot Truck Tracking Center.

Output-based measures can be useful to TSA for program management purposes, as they can identify whether programs are producing a desired level of output and meeting established milestones. However, they do not measure TSA's success in achieving the ultimate goal of enhancing the security of the commercial vehicle sector. For example, while TSA tracks the number of CSRs completed by its staff or as part of the Missouri CSR program, it has not attempted to measure the effect these programs are having. Missouri officials have suggested that a sample of firms that participated in the CSR program should be revisited to determine the extent to which their security-related practices improved after completing a CSR. Such information could provide TSA with a measure of the effectiveness of its key commercial vehicle security program. In January 2009, TSA stated that it was planning to conduct baseline and follow-on CSRs on hazardous material transporters to measure changes in preparedness. We recognize that TSA faces challenges in developing outcome measures to monitor and evaluate the effectiveness of its security programs that rely on the participation of many public and private entities.
In addition, it can be difficult to develop performance measures to gauge the impact of a program in deterring terrorism. Nonetheless, outcome measures of programs designed to mitigate vulnerabilities and consequences are possible. For example, the domain awareness of drivers could be measured both before and after they participate in the Trucking Security Grant program. Furthermore, as we have previously reported, a focus on results as envisioned by GPRA means that federal agencies are to look beyond their organizational boundaries and coordinate with other agencies to ensure that their efforts are aligned. The planning processes under GPRA provide a means for agencies to ensure that their goals for crosscutting programs complement those of other agencies; program strategies are mutually reinforcing; and, as appropriate, common or complementary performance measures are used. High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on fostering the necessary collaboration both within and across organizational boundaries to achieve results. TSA officials agreed that opportunities exist to develop outcome performance measures for the agency's commercial vehicle security programs, and stated that they would like to do so in the future. We previously reported that DHS often lacked the performance information to determine where to target program resources to improve performance, but was taking steps to strengthen its performance measures. GAO is currently working with DHS, including TSA, to provide input on the department's performance measurement efforts based on our work at the department. While TSA has taken actions to improve coordination with federal, state, and industry stakeholders to strengthen commercial vehicle security, more can be done to ensure that these coordination efforts enhance security for the sector.
Leading practices for collaborating agencies that we have previously identified offer suggestions for strengthening coordination with other public and private sector stakeholders. These key practices include, for example, defining common outcomes and complementary strategies; agreeing on roles and responsibilities; leveraging stakeholder resources; and developing mechanisms to monitor, evaluate, and report on the results of the collaborative effort. DHS and DOT signed an agreement that established broad areas of responsibility regarding the security of the transportation network, as we previously recommended. TSA supported the creation of an intergovernmental and industry council to gather feedback and input about security planning, among other efforts. TSA has made limited progress in leveraging FMCSA resources and resolving potentially duplicative security inspections, but in October 2008 signed an agreement to enhance coordination with FMCSA. Although TSA has successfully leveraged resources in the State of Missouri to conduct CSR vulnerability assessments, it has made limited progress in coordinating the expansion of CSRs to other states. Some state and industry officials we interviewed expressed concerns about TSA's coordination and communication with the sector on developing a security strategy, and about fully defining roles and responsibilities for the industry. Since many owner-operators are hard to contact, some suggested that TSA enhance its Web site to better communicate directly with the industry's many small operators. Moreover, the Missouri CSR pilot evaluation similarly suggested that TSA consider developing a two-way Web portal to allow firms to fill out CSR questionnaires. TSA officials stated that they have taken steps to interact with industry regarding the security of the sector, and have also leveraged industry expertise to strengthen security.
However, TSA has not developed a means to monitor the effectiveness of its coordination actions with this very large and diverse sector. Without enhanced coordination, TSA will have difficulty expanding its vulnerability assessments to other states. DHS and DOT have taken actions toward coordinating their efforts to strengthen commercial vehicle security. In September 2004, DHS and DOT signed a MOU that established broad areas of responsibility for each department related to the security of the transportation sector, and specified roles and responsibilities to strengthen their cooperation and coordination. For instance, under the MOU, DOT recognized that DHS has primary responsibility for transportation security while it plays a supporting role, providing technical assistance and supporting DHS in the implementation of its security policies as allowed by DOT statutory authorities. Furthermore, the MOU states that DHS is to establish national transportation security performance goals and, to the extent practicable, appropriate security measures for each transportation sector to achieve an integrated national transportation security system. The MOU responds to our previous work which emphasized the need for greater coordination between DOT and DHS on transportation security efforts and recommended that the two departments establish an MOU to, among other things, delineate the roles, responsibilities, and funding authorities of each department. In August 2006, TSA and PHMSA signed an annex to the DHS and DOT MOU, identifying their respective roles and responsibilities related to research and development, training, outreach, risk assessments, and technical assistance involving hazardous materials transportation security.
According to this agreement, the parties commit themselves to seeking consensus on measures to reduce risk and minimize consequences of emergencies, sharing information that may concern the interests of the other party, and coordinating the development of transportation security-related guidelines. The annex further specified that TSA and PHMSA will, among other things: base security planning on risk, seek consensus concerning measures to reduce risk, and coordinate in the development of standards, regulations, guidelines, and directives; coordinate on observations and recommended security measures; explore opportunities for collaboration in inspection and enforcement; and share information during an emergency. Consistent with this agreement, PHMSA and TSA worked together to develop recommended security measures for hazardous materials carriers. As we have previously identified, an effectively implemented leveraging of stakeholder resources is a key practice for enhancing collaboration. According to leading practices for collaborating agencies, such parties bring different levels of resources and capacities to the collective effort; therefore, the parties should identify the types of resources necessary to initiate or sustain their collective effort, as well as assess each party's relative strengths and limitations. In 2003, working with TSA, PHMSA established a set of security plan requirements for hazardous materials carriers that addressed the elements of en route security, unauthorized access, and personnel security.
TSA later expanded upon PHMSA's requirements and, in consultation with PHMSA, drafted a set of voluntary security standards, called Security Action Items (SAIs), specifying the level of security suggested for each type of security-sensitive hazardous materials, or hazardous materials transported by motor vehicles whose potential consequences from an act of terrorism may result in detrimental effects to the economy, communities, critical infrastructure, or individuals of the United States. TSA reported that these SAIs were finalized in June 2008 and distributed to stakeholders. TSA further worked with PHMSA to develop guidance on security-sensitive hazardous materials. TSA also established a Government Coordinating Council (GCC) in April 2006 to monitor and evaluate the results of federal highway and motor carrier security programs, as required by the NIPP. We previously identified the need for collaborating agencies to create a mechanism to monitor and evaluate their efforts and to assist them in identifying areas for improvement. If implemented effectively, reporting on these collaborative activities can help key decision makers obtain feedback for improving both policy and operational effectiveness. The GCC consists of federal agencies and associations representing state and local transportation and law enforcement officials, and motor vehicle administrators with responsibilities directly related to commercial vehicle security. (For a complete list of GCC members, see app. VI). The GCC is intended to coordinate strategies, activities, and communications among its member entities, and establish policies, guidelines, metrics, and performance criteria. The highway sector GCC meets approximately once monthly, and both FMCSA and PHMSA officials expressed general satisfaction with the GCC.
Although DHS and DOT have established agreements and developed complementary strategies to strengthen security of the commercial vehicles sector, gaps remain that hamper their ability to more effectively coordinate their efforts. Specifically, the two departments have not fully agreed on a strategy to leverage resources and eliminate potential duplication of effort and to share inspection information for monitoring security programs. TSA and FMCSA have shared roles and responsibilities regarding the enhancement of commercial vehicle security, but have different capabilities and resources. TSA's HMC has a staff allocation of 19 full-time equivalents (FTEs). These staff are responsible for all aspects of commercial vehicle and highway infrastructure security, including developing best practices, conducting risk assessments, and establishing policy. HMC is also responsible for school bus security. FMCSA has 650 to 700 staff deployed in the field nationwide to conduct inspections, enforce Federal Motor Carrier safety regulations and hazardous materials transportation safety and security regulations, and coordinate with state safety inspectors. Moreover, TSA and FMCSA have similar inspection programs, both of which are currently focused on hazardous materials transportation. As discussed earlier in this report, TSA operates the CSR program designed to review the security efforts and vulnerabilities of all types of commercial vehicle firms, and FMCSA conducts security compliance reviews (SCRs) of hazardous materials carriers. The 9/11 Commission Act requires that DOT consult with DHS to limit, to the extent practicable, duplicative reviews of the hazardous materials security plans. TSA and FMCSA officials stated that they have discussed how best to leverage FMCSA's ongoing inspections programs and the feasibility of merging the two inspection programs.
Officials reported that their interactions to date have focused on how best to take advantage of the similarities between these programs to more efficiently and effectively use agency resources, reduce potentially duplicative efforts, and minimize the burden on the industry. TSA officials stated that one obstacle to merging the two programs is that hazardous materials transportation companies are required to participate in FMCSA's SCRs because they are subject to DOT's hazardous materials regulations, while TSA's CSRs are a voluntary effort. However, both agencies' programs share voluntary and mandatory aspects. For example, along with SCRs, FMCSA also conducts Security Sensitivity Visits, which, as discussed earlier in this report, are voluntary, educational security reviews of firms carrying small amounts of hazardous materials. Moreover, TSA's Missouri pilot successfully demonstrated that voluntary security reviews could be appended to mandatory safety reviews, and that state safety inspectors could be trained to conduct CSR security reviews. TSA officials further stated that the agency's CSR reviews include a detailed assessment of the adequacy of security plans, whereas FMCSA reviews are intended to ensure a firm's compliance with its written security plan, but are not an assessment of its adequacy. Another obstacle, according to TSA officials, is associated with how the two agencies view their missions and resource sharing. TSA believes utilizing FMCSA resources, infrastructure, and databases may be cost effective. However, DOT officials told us that the primary role of FMCSA's inspectors is safety rather than security. One industry association we interviewed stated that it was working with FMCSA and TSA to merge their commercial vehicle security programs because association officials believed doing so would reduce duplication and be more efficient for both government and industry.
By leveraging resources with FMCSA, TSA may be able to address other priorities, such as conducting additional vulnerability assessments, improving security mitigation programs beyond the hazardous materials sector, and addressing highway infrastructure protection. TSA and FMCSA also do not have a process in place to share information important to monitoring the results of security programs, consistent with leading practices for collaborating agencies. For example, the agencies are not comparing and contrasting their findings from commercial vehicle security inspections. Both TSA and FMCSA concurred that they could benefit from better sharing of information and have discussed developing a unified database for storing and sharing information on CSRs and SCRs. Without a process in place to share information on the results of their security programs, TSA will not have a complete picture of the effectiveness of federal programs to secure the sector. FMCSA also maintains other data and information that could potentially be useful to TSA in its effort to understand and analyze the commercial trucking and motor coach industries. For example, the Missouri CSR program selected carriers with particularly bad safety records for review, but TSA does not have general, direct access to these data. FMCSA also maintains the Motor Carrier Management Information System (MCMIS), a database of all interstate and some intrastate companies, as well as all carriers of hazardous materials. Access to MCMIS data could assist TSA in addressing the NIPP requirement that the agency develop an inventory of assets as a basis for conducting vulnerability and consequence assessments. In addition, as TSA expands its CSRs of hazardous materials transporters, DOT may benefit from knowing which firms TSA has reviewed to avoid duplication of effort.
Although TSA and PHMSA have signed an annex detailing how they will collaborate, TSA and FMCSA officials stated that they did not establish a similar agreement because the agencies coordinated with each other well, and an annex was not necessary. However, with enactment of the 9/11 Commission Act, TSA and FMCSA were required to complete an annex by August 2008 that defined the processes that will be used to promote communications and efficiency, and avoid duplication of effort. An annex to the MOU between TSA and FMCSA might help reduce possible duplication of effort in inspection programs, as well as facilitate the development of a process for sharing data to monitor program results. TSA and FMCSA officials signed an annex to the MOU in October 2008. The TSSP also requires that the GCC and the SCC create several joint working groups for research and development, performance measurement, intelligence, and risk. These groups are to improve coordination and prioritization of TSA’s research and development efforts, address the inherent difficulties in measuring and assessing the performance of security mitigation programs, develop sector-specific metrics, and coordinate and integrate intelligence efforts. However, the creation of these committees has been delayed, according to TSA officials. Without promptly developing joint working groups, TSA increases the risk that collaborative work and progress in these areas will be delayed. TSA officials stated that as of September 2008, the Joint Working Groups for Highway and Motor Carrier had not been officially approved. TSA has leveraged resources to enhance its capabilities to perform CSR vulnerability assessments through collaboration with the state of Missouri, and recently reached agreements with Michigan and Colorado to conduct CSRs, but has faced challenges in expanding this collaborative effort to other states. 
These state coordination challenges have the potential to significantly delay progress in expanding vulnerability assessments to other states. TSA officials stated that the agency was continuing to explore opportunities to expand the CSR program from Missouri to other states, and to leverage state field inspector and law enforcement resources. TSA also does not have a direct mechanism for coordinating its strategy with the states related to commercial vehicle security planning, and some state officials we spoke to expressed dissatisfaction with TSA's coordination efforts. The agency relies on several GCC-member associations that represent state and local transportation and law enforcement officials to coordinate with states. However, all of these state GCC stakeholders identified concerns about the adequacy of TSA coordination efforts. For example, CVSA, which represents state law enforcement officials at the GCC, stated that the GCC is not an effective means of communication and coordination, and that direct communication with the states was minimal. As a result, CVSA transportation security officials stated that they were not fully informed about TSA's risk management strategy. CVSA officials further stated, in September 2008, that while coordination with TSA had improved after TSA's staffing stabilized, they continued to be concerned that the federal government was more engaged in helping states ensure safety rather than security. They also questioned whether TSA had dedicated sufficient resources to commercial vehicle security, or had the expertise to lead federal efforts to expand vulnerability assessments nationwide. CVSA officials stated that since DOT had the resources but not the authority to oversee commercial vehicle security, it was difficult for either agency to assist the states.
Another key association, AASHTO, which represents state transportation officials at the GCC, stated that state security planners are given insufficient attention and information by TSA and other DHS components relating to security. Specifically, AASHTO officials stated that TSA had not communicated its strategy or initiatives to secure commercial vehicles, and that while AASHTO has tried to discuss what role the states play in transportation security with DHS and TSA, neither has been responsive in providing fully defined roles. Several officials we spoke with during our interviews with state DOTs also expressed concerns regarding whether the GCC is a sufficient mechanism for TSA to coordinate with the 50 states, and were critical of TSA's leadership and communication related to commercial vehicle security. For example, one state noted that TSA's slow pace in providing guidance was causing it to delay the implementation of its programs for fear such programs would conflict with TSA initiatives. TSA officials stated that the agency had coordinated with states to the extent possible with available resources, having only one staff member responsible for federal, state, and industry coordination. TSA has made progress in involving industry in its strategy for strengthening commercial vehicle security by supporting the formation of an industry stakeholder council and through ongoing outreach efforts and meetings with industry officials. However, as discussed earlier in this report, industry officials we interviewed stated that they generally desired greater communication with TSA. More specifically, the officials noted that they did not fully understand TSA's strategy for securing the commercial vehicle sector, or what roles and responsibilities the agency expected from industry.
Additionally, TSA does not have any measures of the effectiveness of its efforts to coordinate with its many stakeholders, which limits its ability to determine whether its ongoing efforts to collaborate are appropriate and adequate for this very large and diverse transportation sector. Without strengthening communication and coordination with industry, TSA will not be able to fully leverage the resources of its stakeholders. Four of the leading practices for collaborating agencies we previously identified to help improve coordination among federal agencies could also be applied to improve federal collaboration with industry stakeholders: defining a common outcome and complementary strategies, agreeing on roles and responsibilities, leveraging stakeholder resources, and monitoring results. TSA coordinates with the commercial vehicles sector through an industry council and industry associations. To (1) overcome the challenge of working in partnership with such a large and diverse group of stakeholders, (2) understand the current security practices of these industries, and (3) gather industry input and feedback, TSA supported the creation of the Highway and Motor Carrier Sector Coordinating Council (SCC) in June 2006. The SCC represents three private industry groups: highway passenger and school bus carriers, highway freight carriers, and highway infrastructure owners and builders. It facilitates communications within the industry and between the industry and TSA. According to members, its purpose is to represent a broad cross-section of the industry, and there is no limit on the number of organizations that can participate. As of September 2008, the SCC had convened eight times since its first meeting in August 2006, and holds separate meetings to address issues requiring a quick response. Apart from the SCC, TSA has also collaborated with several industry trade associations to develop and distribute security brochures and guides for their membership.
For example, TSA assisted the Truck Rental and Leasing Association in developing its Security Awareness and Self-Assessment Guide. Although TSA has made progress in coordinating with industry stakeholders, challenges remain. Specifically, SCC officials stated that the council was dissatisfied with TSA's level of coordination with the SCC on the development of a strategy for enhancing commercial vehicle security. For example, the SCC leadership stated that the SCC was excluded from key stages of drafting revisions to the initial TSSP annex. The TSSP states that its initial goals and objectives would be developed by TSA, and be informed by comments and suggestions from the SCC, and going forward the TSSP annex states that the GCC and SCC are to prepare future revisions of the TSA strategy in the TSSP annex. SCC officials said that TSA did not consult with them regarding the development of key strategic objectives, known as Strategic Risk Objectives, or the Highway and Motor Carrier Annual Report regarding progress made and goals for the next year. These officials stated that overall coordination was better on trucking issues than for motor coach. Furthermore, industry and company officials we interviewed also expressed concerns about TSA's coordination efforts regarding its strategy. Specifically, officials from 9 of the 12 industry associations and 20 of the 26 truck and bus companies we interviewed, some of whom were also members of the SCC, stated that they were not familiar with TSA's strategy and/or ongoing efforts to secure the commercial vehicle sector, and that TSA could strengthen its coordination with industry. Officials stated that in some cases, a lack of information led industry associations to hesitate to implement security actions and dedicate resources to additional security measures, for fear that TSA might later determine those measures were unnecessary or identify other required measures that must be implemented instead.
Finally, SCC officials stated that TSA had not explicitly defined roles and responsibilities for the committee, its members, or the industry. Several industry association representatives also expressed similar confusion over their responsibilities and roles in securing the commercial vehicle sector. TSA officials stated that the SCC was not consulted in the development of the Highway and Motor Carrier Annex because TSA did not have enough time to include them. However, the SCC disagreed, stating that TSA had received an extension on when the annex was due. TSA officials also said that they were not surprised by the uncertainty about their strategy for securing the sector because TSA’s focus has been largely on developing security programs rather than communicating its security strategy to industry. TSA officials stated that going forward, they will work with the SCC as it revises the Highway and Motor Carrier Annex to the TSSP. The SCC leadership stated that during the revision of the latest HMC annual report, TSA was much more open to the SCC’s input. Our previous work on effective interagency collaboration has demonstrated that to achieve a common outcome, collaborating agencies need to establish strategies that work in concert with those of their partners or are joint in nature. Our prior work has further shown that collaboration can be enhanced when parties work together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. Responsibility for securing the commercial vehicle sector involves collaboration between governmental and nongovernmental entities that typically have not worked together before on these issues. A fully defined outcome and strategy facilitates overcoming significant differences in organizational missions, cultures, and established ways of doing business.
Without a commonly defined outcome and strategy, individual organizations risk developing strategies for securing the commercial vehicle industry that differ or conflict, rather than aligning their activities and resources to accomplish a common outcome. Fully defining and clarifying respective roles and responsibilities will be important to ensure that TSA and industry understand who will do what regarding securing the commercial vehicle sector, and will help to reconcile differing perceptions of leadership that exist among stakeholders. SCC representatives stated that TSA has not maintained active communication with the committee, resulting in missed opportunities to take advantage of their potential contributions, including leveraging of their expertise and resources. TSA officials stated that given the SCC’s recent establishment, it may be too soon to fairly assess the effectiveness of the agency’s interactions with the council. Most companies we spoke with stated that they rarely, if ever, heard from TSA, although they were generally much more familiar with FMCSA, with whom they have worked for years. Some company officials suggested that TSA develop a direct means of communicating with the industry, such as through e-mail or a robust Web page. The Missouri Pilot Program Evaluation Report also recommended that TSA develop a Web portal to improve coordination and communication with the industry. The lack of communication and coordination could limit the effectiveness of standards and measures meant to enhance the security of commercial vehicles. TSA officials stated that the agency has conducted outreach with private industry to, among other things, coordinate its overall strategy and roles and responsibilities. According to officials, TSA has made numerous resources available to private industry stakeholders through the Homeland Security Information Network and more recently through TSA’s Highway and Motor Carrier Web site link.
Additionally, TSA reported that officials from the HMC continually attend association conferences and workshops to educate stakeholders about and share TSA’s strategy, goals, and policies. To further improve communications, TSA reported that it has conducted 14 monthly conference calls since 2007, with 10 to 20 stakeholder participants attending each call. TSA officials stated that, while minor issues regarding specific lines of communication may have existed, in their opinion the general level of coordination with the industry has been successful, and that they were unaware of any significant private sector stakeholder misunderstandings of the agency’s security strategy, efforts, or their own roles and responsibilities. While TSA’s actions should help strengthen coordination with the commercial vehicle industry, the extent of any effect of these efforts is unknown because, according to TSA officials, the agency has not developed an approach to evaluate the effectiveness of its coordination efforts. Specifically, TSA does not have measures of whether coordination efforts, such as its current Web site, its participation in conferences, and its efforts to coordinate with states, the GCC, and the SCC, result in a better understanding of TSA’s strategy and of roles and responsibilities within the commercial vehicle sector. We have previously reported that collaborative efforts can be enhanced and sustained when they include mechanisms for monitoring and evaluation to assist stakeholders in identifying areas for improvement. Without such an evaluation, TSA will be hindered in determining whether its ongoing efforts to collaborate with the commercial vehicle industry are appropriate and effective for enhancing the security of this very large and diverse transportation sector. The nature, size, and complexity of the nation’s commercial vehicle sector highlight the need for federal and state governments and the private sector to work together to secure this transportation sector.
The importance of the nation’s commercial trucking and motor coach industries and concerns about their security, coupled with finite homeland security resources, underscore the need for TSA to employ a risk management approach to prioritize its security efforts so that an appropriate balance between costs and security is obtained. TSA has taken steps toward implementing a risk management approach by assessing threats to and from the commercial vehicle sector, conducting some vulnerability assessments, and initiating the development of best practices to secure the sector. Despite these achievements, much work remains to fully address the security risks of commercial trucks and motor coaches, and to ensure that this information is used to inform TSA’s security strategy. TSA has not yet completed annual threat assessments with estimations of the likelihood of various threats or tactics, nor has it established a plan and a time frame for completing vulnerability assessments of the commercial vehicle industry and its diverse sectors and firms, to include considering the recommendations of the Missouri Pilot Program Evaluation. TSA also has not developed a plan to conduct consequence assessments, or leveraged the consequence assessments of other sectors. Further, TSA has not determined the extent to which additional risk assessments are needed, or the resources needed to support these efforts. Although TSA has commissioned threat scenario analyses to inform a preliminary risk assessment of the industry, these assessments will likely provide limited information on which sectors or companies are most at risk, and on what mitigation practices are currently in place, unless they are further supported by field-level risk assessments, such as CSRs, consistent with the TSSP.
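The risk management approach discussed above reflects the NIPP formulation, in which risk is assessed as a function of threat, vulnerability, and consequence. A minimal sketch of how such scores might be combined and ranked follows; the scenario names and numeric values are hypothetical illustrations, not TSA assessments:

```python
# Minimal sketch of the NIPP-style risk formulation, in which risk is a
# function of threat (T), vulnerability (V), and consequence (C). All
# scenario names and scores below are hypothetical illustrations.
def risk_score(threat, vulnerability, consequence):
    """Multiplicative combination, commonly written as R = T x V x C."""
    return threat * vulnerability * consequence

# Hypothetical scenario scores: T and V on a 0-1 scale, C on a relative scale.
scenarios = {
    "hazmat tanker hijacking":  risk_score(0.3, 0.6, 9.0),
    "motor coach attack":       risk_score(0.4, 0.7, 5.0),
    "VBIED using rented truck": risk_score(0.5, 0.8, 8.0),
}

# Rank scenarios by score to prioritize mitigation resources.
for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

The point of such a ranking is only to make relative priorities explicit; incomplete threat, vulnerability, or consequence inputs, as described above, leave any one factor unscored and the ranking unreliable.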
As a result of not having specific threat assessments or complete vulnerability and consequence assessments, the agency is limited in its ability to determine the most pressing security needs, and to use this information to guide its security strategy. While working to develop complete risk assessments, it is important that TSA assess and use available information as the basis for its interim decisions. For example, information currently available from existing threat, vulnerability, and consequence assessments suggests alternatives or additions to the agency’s current focus on commercial vehicle transport of hazardous materials. TSA has recently begun the process of revising its strategy for 2009 and beyond; however, without completed risk assessments, its revised strategy may not be appropriately targeted. Until TSA completes assessments of this very large and highly diverse transportation sector, and uses this information to inform its security strategy, it will be limited in its ability to assure Congress that existing funds are being spent in the most efficient and effective manner. TSA has developed a range of programs to strengthen truck and bus security, but has not developed outcome measures to assess how effectively the programs have improved security. Without such performance measures, TSA cannot monitor and evaluate whether these programs are achieving results in enhancing commercial vehicle security, nor communicate this progress to industry stakeholders, Congress, policymakers, and taxpayers. With 50 states and over a million diverse industry stakeholders, securing commercial vehicles can pose considerable communication challenges and lead to confusion about roles and responsibilities. Ultimately, the security of the industry is maintained by the companies themselves, and if TSA is to secure the sector it must do so by working with the industry.
Coordination and communications techniques that might work well in other transportation sectors may be insufficient for the larger, more complex commercial vehicle industry. TSA has taken steps to coordinate with government and industry stakeholders, and has had some noteworthy successes such as the Missouri CSR program. However, both industry and state officials we interviewed stated that more needed to be done to enhance federal leadership and better ensure that federal, state, and industry actions and investments designed to enhance security are properly focused and prioritized. TSA communicates with states primarily through associations of state law enforcement and transportation officials who participate in the GCC. However, opportunities exist for more effective coordination with states to expand the Missouri CSR to other states, and for TSA to leverage FMCSA’s resources in conducting field inspections. TSA could address industry concerns about communication of its strategy, roles, and responsibilities, as well as better leverage industry expertise, by working more collaboratively with industry representatives and improving communication with the nation’s many small owner-operators and midsized firms. In addition, because TSA does not monitor and measure the effectiveness of its coordination and communications efforts, it cannot be sure that it is addressing stakeholder concerns. By improving coordination with DOT, the states, and the industry, TSA could build a solid foundation for strengthening the security of the commercial vehicle sector. 
To assist the Transportation Security Administration in more fully evaluating, selecting, and implementing commercial vehicle security risk mitigation activities, and to help strengthen the security of commercial vehicles in the United States and leverage the knowledge and practices employed by key federal and nonfederal stakeholders, we recommend that the Assistant Secretary for the Transportation Security Administration take the following four actions:

1. Establish a plan and a time frame for completing risk assessments of the commercial vehicle sector, and use this information to support future updates to the Transportation Sector Strategic Plan, to include conducting:
   - to the extent feasible, assessments that include information about the likelihood of a terrorist attack method on a particular asset, system, or network, as required by the National Infrastructure Protection Plan;
   - a vulnerability assessment of the commercial vehicle sector, including assessing the scope and method of assessments required to gauge the sector’s vulnerabilities, considering the findings and recommendations of the Missouri pilot evaluation report to strengthen future Corporate Security Reviews, and enhancing direct coordination with state governments to expand the Transportation Security Administration’s field inspection Corporate Security Review capacities; and
   - consequence assessments of the commercial vehicle sector, or developing alternative strategies to assess potential consequences of attacks, such as coordinating with other Sector-Specific Agencies to leverage their consequence assessment efforts.

2.
In future updates to the Highway Infrastructure and Motor Carrier Annex to the Transportation Sector Security Plan, clarify the basis for the agency’s security strategy of focusing on the transportation of hazardous materials, the relative risk of vehicle-borne improvised explosive devices to the sector, and, based on the relative risk of these threats, any risk mitigation activities to be implemented to address them.

3. Develop outcome-based performance measures, to the extent possible, to assess the effectiveness of federal programs to enhance the security of the commercial vehicle sector.

4. Establish a process to strengthen coordination with the commercial vehicle industry, including ensuring that the roles and responsibilities of industry and government are fully defined and clearly communicated, that new approaches to enhance communication are considered, and that the effectiveness of its coordination efforts is monitored and assessed.

We provided a draft of this report to DHS and DOT for review and comment. On January 15, 2009, DOT provided technical oral comments, which we incorporated as appropriate. On February 6, 2009, we received written comments on the draft report from DHS, which are reproduced in full in appendix II. DHS concurred with our findings and recommendations and discussed efforts underway to address them. Regarding our recommendation that TSA establish a plan and a time frame for completing risk assessments of the commercial vehicle sector, and use this information to support future updates to the Transportation Sector Strategic Plan, DHS concurred and stated that TSA is actively conducting risk assessments of the major components of the commercial vehicle sector as required by the Implementing Recommendations of the 9/11 Commission Act of 2007, and provided a timetable for completing these scenario-based risk assessments.
According to TSA, these assessments will examine specific scenarios involving the commercial vehicle sector and will include information on the likelihood of a terrorist attack. We are pleased that TSA is beginning to conduct risk scenario assessments on various parts of the industry. However, we continue to believe that TSA needs to expand its use of threat likelihood estimates to the extent feasible. For example, we believe that TSA should assess the feasibility of producing annual sector threat assessments that include likelihood estimates. TSA also stated that it is planning to conduct annual field-level vulnerability assessment CSRs on a statistically valid sample of hazardous materials carriers. While we support these efforts, as we noted in the report, carriers transporting hazardous materials represent only a small fraction of the industry. Therefore, we believe that TSA should also assess the scope and method of its vulnerability assessments for the entire sector, beginning with establishing the mix of expert scenarios and field assessments it deems most appropriate. In response to our recommendation that TSA conduct consequence assessments of the commercial vehicle sector or develop alternative strategies to assess potential consequences of attacks, such as coordinating with other sector-specific agencies to leverage their consequence assessment efforts, TSA concurred and stated that it will examine consequence information based on the scenarios that have been developed, consult with public and private sector subject matter experts, and, when appropriate, consult with sector-specific agencies.
DHS concurred with our recommendation that, in future updates to the Highway Infrastructure and Motor Carrier Annex to the Transportation Sector Security Plan, TSA clarify the basis for the agency’s security strategy of focusing on the transportation of hazardous materials, the relative risk of vehicle-borne improvised explosive devices to the sector, and, based on the relative risk of these threats, any risk mitigation activities that should be implemented to address them. TSA stated that it intends to include risk-based clarification of the security strategies in future updates to the plan. According to TSA, for the past 2 years it has focused primarily on the transportation of hazardous materials; however, ongoing industry risk assessments and regulatory efforts may shift the current strategies, and communicating these strategies in the annex to all stakeholders will be critical to successful implementation of the plan. We believe that these efforts will help strengthen TSA’s strategy for securing the sector. We further believe that it will be important for TSA to clarify the basis for its strategy and any shift in that strategy based on assessments of the relative risks. DHS concurred with our recommendation that TSA develop, to the extent possible, outcome-based performance measures to assess the effectiveness of federal programs to enhance the security of the commercial vehicle sector. DHS stated that TSA recognizes the importance of establishing outcome-based performance measures and described ongoing efforts. TSA stated that it intends to conduct annual CSRs on hazardous materials motor carriers to measure changes in industry security. While these activities will help TSA strengthen its ability to assess the effectiveness of ongoing security measures, we believe that the impact of TSA’s programs on the rest of the commercial vehicle sector should be measured as well.
DHS also concurred with our recommendation that TSA establish a process to strengthen coordination with the commercial vehicle industry, including ensuring that the roles and responsibilities of industry and government are fully defined and clearly communicated, that new approaches to enhance communication are considered, and that the effectiveness of its coordination efforts is monitored and assessed. DHS noted that TSA recognizes the importance of strong working relationships with both industry and other government agencies, and that through its work with coordination councils TSA has established a coordination process that continues to mature and develop. Finally, DHS noted that because these coordination efforts are only 17 months old, performance measurement processes continue to be refined. We believe that given the size and complexity of the commercial vehicle sector, and the concerns expressed by various stakeholders, new approaches to enhance communication are important. As such, TSA should develop a process to monitor and assess the effectiveness of its coordination efforts. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Secretary of Homeland Security, the Secretary of the Department of Transportation, and other interested parties. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII.
The objectives of our review were to answer the following questions: (1) To what extent has TSA assessed the security risks associated with commercial vehicles and used this information to develop and implement a security strategy? (2) What security actions have key government and private sector stakeholders taken to mitigate identified risks to commercial vehicle security, and to what extent has TSA measured the effectiveness of its actions? (3) To what extent has TSA coordinated its strategy and efforts for securing commercial vehicles with other federal entities, states, and private sector stakeholders? To review the extent to which the federal government has assessed security risks associated with commercial vehicles and used this information to develop and implement its security strategy, we analyzed DHS and DOT strategic and security planning documents, such as the NIPP, the TSSP, and its Highway and Motor Carrier Annex; performance documents, including annual reports such as DHS’s 2008 Performance Budget Overview and TSA HMC’s Annual Reports and quarterly risk reduction reports; and risk assessment documentation, including assessments of threat, vulnerability, and standoff and evacuation distances. We interviewed officials from DHS’s National Protection and Programs Directorate; TSA’s Office of Highway and Motor Carriers, Office of Risk Management and Strategic Planning, Office of Intelligence, and Office of Cargo Policy; DOT’s Office of Intelligence and Security; PHMSA’s Office of Hazardous Materials Safety; FMCSA’s Office of Emergency Preparedness and Security; and DOT’s Bureau of Transportation Statistics. To assess TSA’s threat assessments, we analyzed its annual threat assessments and other intelligence products, and met with officials of TSA’s Office of Intelligence. We also assessed documentation and interviewed TSA’s HMC officials regarding the agency’s use of the threat assessments for planning its vulnerability and consequence assessments.
We also met with TSA’s Risk Management Division and reviewed its use of estimates regarding the likelihood of certain types of specific threats for high-level NTSRA scenarios, and its more systematic use of threat scenarios and likelihood estimates for the Aviation Domain Risk Assessment. To evaluate TSA’s vulnerability assessments, we reviewed TSA’s draft best practices, its vulnerability assessments known as Corporate Security Reviews (CSRs), and CSR questionnaires and reports. We also met with TSA HMC officials and interviewed officials from truck and bus companies that had undergone CSRs. To assess TSA’s CSR pilot program, we attended two Missouri Pilot CSRs and analyzed the TSA-sponsored evaluation report of the CSR pilot. At the conclusion of the two CSRs we observed, we interviewed company officials about what they learned from the CSR, how germane it was to their security needs, and how appropriate TSA’s suggested security measures were for their operating and business environment. We also met with Missouri state department of transportation and law enforcement officials and FMCSA field officers in Missouri to discuss their experiences with implementing the pilot and conducting CSRs. We also discussed the usefulness of the CSRs with officials from 12 leading industry trade associations representing the different sectors of the industry, including trucking companies, owner-operators, private truck companies, the bus industry, tank truck operators, hazardous materials shippers, rental and leasing firms, and unions. To review DOT’s SCR inspections of hazardous material security plan implementation, we reviewed the SCR questionnaire, gathered data from agency Performance and Accountability Reports regarding annual progress, and met with DOT FMCSA’s Office of Emergency Preparedness and Security. We also analyzed an FMCSA-sponsored vulnerability assessment of the U.S. motor coach industry.
We also reviewed the completeness of DOT MCMIS and BTS data on the population, or national inventory, of commercial vehicle firms, trucks, and drivers, because determining industry vulnerabilities requires a well-defined inventory, or population, of industry firms and assets. For more information, see appendix V. To evaluate TSA’s consequence assessments, we analyzed DHS, DOD, and ATF data about standoff distances for VBIED explosions, tanker fuel truck fireballs, and TIH evacuation distances. We also interviewed officials from TSA’s HMC and DHS’s National Protection and Programs Directorate about their consequence assessment efforts. To explore the feasibility of TSA leveraging the consequence efforts of other sectors, we also reviewed the 17 Critical Infrastructure Sector Annual Reports for 2006 and 2007, and the Strategic Homeland Infrastructure Risk Assessment report, which identifies the sectors most at risk from VBIEDs. To determine how, if at all, TSA used its risk assessments to inform its strategy for securing commercial vehicles, we reviewed its strategic plan, the TSSP annex, annual reports, and other related documents. We also interviewed HMC officials, and compared their actions to DHS risk management guidance in the NIPP and TSSP. The quality of TSA’s CSR inspection data was previously assessed by the Missouri Pilot Evaluation. We reviewed the pilot evaluation and concurred with its conclusion that the Missouri sample was not representative of the commercial vehicle industry in Missouri or of the industry nationwide. To evaluate the extent to which TSA had a plan or a time frame to complete a comprehensive risk assessment of the commercial vehicle sector, we used standard practices in program and project management, which include developing a road map or a program plan to achieve programmatic results within a specified time frame or milestones.
To evaluate TSA’s progress in addressing the Missouri CSR Pilot evaluation, we used GAO’s Standards for Internal Control in the Federal Government, which require that findings and deficiencies reported in audits and other reviews be promptly reviewed, resolved, and corrected within established time frames. To determine the actions the federal government and state and local governments have taken to mitigate commercial vehicle security risks, and the extent to which these actions are consistent with TSA’s security strategy, we reviewed documentation and interviewed officials from TSA’s Office of Highway and Motor Carrier and the Office of Cargo Policy; DOT PHMSA’s Office of Hazardous Materials Safety; FMCSA’s Office of Emergency Preparedness and Security; FHWA’s Transportation Security Office; and the FTA Office of Safety and Security. We also interviewed officials from eight states and conducted site visits to five. We selected the states in a nonprobability sample based on their characteristics, proximity to critical infrastructure and potential terrorist targets, such as large population centers, and the amount of hazardous materials (in tons) originating in the state. As a result, we cannot generalize the results to all states. However, we believe that observations obtained from these visits provided us with a greater understanding of the states’ operations and perspectives. We gathered information from each regarding their actions to mitigate security risks, and any challenges they face in strengthening security. To identify industry actions taken to secure the commercial vehicle sector, we analyzed TSA’s draft best practices and Security Action Items, and reviewed TSA CSR and FMCSA SCR and SSV inspection data.
We also interviewed officials from 12 industry associations that represent trucking firms and truck drivers, truck manufacturers, truck rental and leasing companies, hazardous materials shippers, and intercity and tour bus companies to see what actions, if any, the associations and their members were taking. We also reviewed security guidance that industry trade associations had developed and provided to their members. To supplement what federal and industry associations told us and to observe industry operations firsthand, we also conducted site visits to 26 commercial truck and bus owner-operators. These companies were selected through a nonprobability sample based on (1) size, using the number of vehicles (tractors, or power units, for trucking companies and buses for motor coach companies) as an indicator; (2) geographic location, noting the region’s characteristics, proximity to critical infrastructure and potential terrorist targets such as large population centers, and the amount of hazardous materials (in tons) originating in the state; and (3) type of operations, using the quantity of hazardous materials transported as an indicator for trucking companies. Because we used a nonprobability sample of owner-operators and states, the information we obtained from these interviews and visits cannot be generalized to all commercial vehicle companies. However, we believe that observations obtained from these visits provided us with a greater understanding of the industry’s operations and perspectives. The 20 trucking companies we visited included hazardous materials carriers, nonhazardous materials carriers, and carriers that transported both. The 6 motor coach companies we visited included companies that offer intercity services, companies that offer tour and charter services, and companies that do both. During our site visits to bus and trucking companies, we interviewed officials and inspected a range of security measures.
To assess how the effectiveness of federal programs to reduce risk was being monitored, we analyzed DHS and DOT strategic planning and budgeting documents and performance data and interviewed officials from TSA’s HMC, the Transportation Sector Network Management Business Management Office, and the DHS Federal Emergency Management Agency’s (FEMA) Grants Program Directorate. To determine what performance measurement data DOT had developed that TSA could potentially use to monitor the progress of these commercial vehicle security programs, we interviewed officials from FMCSA’s Analysis Division and Strategic Planning and Program Evaluation Division. We also compared TSA’s efforts to evaluate its programs with guidance on performance measurement contained in the GPRA and the TSSP. To review the extent to which the federal government has coordinated its strategy for securing commercial vehicles internally and with private sector stakeholders, we analyzed DHS’s memorandum of understanding with DOT and subsequent annex with PHMSA that identifies the roles and responsibilities of DHS and DOT related to commercial vehicle and hazardous materials transportation security. In addition, we reviewed statutes related to DHS and DOT roles and responsibilities, as well as regulations and associated comments provided during rulemaking procedures for commercial vehicle security programs and requirements. We also interviewed officials from TSA’s Office of Intelligence, Risk Management Division, the Office of Highway and Motor Carrier, and the Office of Cargo Policy; and DOT’s PHMSA and FMCSA to obtain information on their current and planned efforts to secure commercial vehicles, as well as their collaborative efforts across agencies and with the private sector. We also interviewed members of the SCC and the private firms we visited to obtain their views regarding the effectiveness of TSA’s coordination efforts, and discussed their views with TSA officials. 
Finally, we compared TSA’s efforts to collaborate and coordinate with stakeholders to key practices that we had previously identified as leading practices for collaborating agencies. We conducted this performance audit from September 2006 through February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. (The methodology used to gather our data on the incidents of truck and bus bombings is summarized in app. II.) This appendix provides information on the analysis we conducted to determine the incidents of truck and bus bombings presented in this report. It provides information on the methodology used to identify incidents worldwide and the detailed results of our analysis. We used open sources, such as press and wire service reports, to determine the extent of bus and truck bombings. We first reviewed the general strengths and weaknesses of different open-source databases and consulted open-source search experts. We reviewed eight databases and chose to use four based on the breadth and completeness of their media sources, years, and geographic coverage; whether they contained sufficient detail to verify that an event was a truck or bus bombing; and whether they allowed for independent verification of source information. We also wanted databases that had, or enabled, control methods to ensure minimization of false positives and duplicates, and standardized criteria for incident inclusion. We narrowed our selection of databases to the Open Source Center (OSC), Nexis, the Global Terrorism Database (GTD), and Dialog. OSC is the official open-source clearinghouse for the U.S.
government that monitors, translates, and disseminates within the U.S. government openly available news and information from non-U.S. media sources. It has state-of-the-art language translation capabilities, so articles are usually translated into English by native-speaker linguists. Nexis’ Major World Newspapers provides access to 5 billion searchable documents from more than 40,000 legal, news, and business sources. GTD is an open-source database gathering information on terrorist incidents around the world since 1970. We made limited use of the earlier, first version, GTD1, and only for 1997, when we could corroborate the incidents it identified with additional sources found in Nexis. Our primary database was the more rigorous GTD2, which currently covers terrorism events from 1998 to 2004. GTD2 is based on the OSC and Nexis databases, which the GTD developers evaluated as the best general databases. GTD2 entries have to be based on multiple independent open-source reports or a single “highly credible” source. GTD2 has a configurable definition of terrorism that includes more than one definition of the phenomenon; control methods in place to minimize false positives; standardized criteria for incident inclusion, documented in a formal and publicly available codebook; and a ranking system for media sources. Dialog is an online database that allows for an extensive search of a variety of databases and collections using a powerful search language. Dialog’s ability to identify very specific information made it an ideal second source for additional documentation on known but not fully documented events. We then explored the capabilities of these databases over time with a small pilot, conducting searches on truck and bus bombings in one year in each of three decades (1987, 1996, and 2002) and exploring which search terms and strategies produced the best results for each database. 
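A search of this kind, which combines synonym lists and wildcard terms into a single Boolean query, can be sketched as follows. This is a minimal illustration under our own assumptions; the terms, wildcard patterns, and function name shown are examples, not the actual GTD, Nexis, or Dialog search strings, which use hundreds of terms.

```python
# Illustrative synonym lists with trailing wildcards (e.g., "truck*"
# matches truck, trucks, trucking). These are example terms only.
VEHICLE_TERMS = ["truck*", "lorr*", "bus", "buses", "motorcoach*"]
BOMB_TERMS = ["bomb*", "IED*", "VBIED*", "explosi*", "suicide bomb*"]

def build_query(vehicle_terms, bomb_terms):
    """Combine two synonym lists into one Boolean query string:
    any vehicle term AND any bomb term must both appear."""
    vehicles = " OR ".join(vehicle_terms)
    bombs = " OR ".join(bomb_terms)
    return f"({vehicles}) AND ({bombs})"

query = build_query(VEHICLE_TERMS, BOMB_TERMS)
print(query)
```

A query built this way can be reused across sources that support wildcard syntax, which is why the same strings could be applied to Nexis and Dialog as well.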
We assessed the possible threats to validity and confirmed with an open-source terrorism data expert that these were the pertinent issues. Our analysis plan addressed a variety of threats to validity and their mitigation:

False positives: Unclassified data on terrorist events are largely gathered through open-source data, typically press reports. Because press reports may not be fully reliable, we used several databases that rely on reputable sources and decision rules for the inclusion of their entries. Entries we accepted had to be based on a highly reliable source or on multiple sources. Supporting articles had to directly confirm whether the incident was a truck or bus bombing, as well as the incident date, location, and the number killed.

History: Electronic search engines and archives have improved over time, so data across the 25 years since the 1983 Marine barracks bombing may not be comparable. Based on our pilot data, we included only incidents from 1997 onward, by which time both Nexis and GTD were well developed and reliable.

Language: All languages may not be equally covered. GTD uses the Open Source Center, which is based entirely on foreign sources and has strong translation capabilities among its staff.

Synonyms: Multiple English terms may be used for bus, truck, and bomb (e.g., bus vs. lorry). GTD uses extensive Boolean search terms, with search strings using hundreds of terms and synonyms. Nexis and Dialog enable similar searches with wildcard strings. We applied GTD search strings to Nexis and Dialog to cover more current events not yet included in GTD.

Geography: Some areas (e.g., Africa) may not be covered as well. However, we looked for a very particular type of incident that was highly likely to be the lead story where it occurred and to be picked up by the wires.

Dates: Reporting dates vs. actual dates. Reporting dates across global time zones can lead to confusion. GTD and OSC have date protocols to minimize date error. 
Since our unit of analysis is years, this error posed little risk.

Breaking reports vs. “final” reports: Initial reports usually have less confirmation of the number killed. When conflicting reports could not be reconciled, we used the lower number of reported deaths; GTD also uses the lowest number.

Incidents in a military area may not be terrorism: The GTD makes a distinction between combatants and noncombatants. We screened out events involving active combatants. However, we included incidents directed at civilians or other targets in active war zones such as Iraq and Afghanistan.

Incident duplication: Using multiple sources could inadvertently lead to incident duplication. GTD has a protocol to eliminate duplicates, and Nexis also enables electronic duplication vetting. In addition, duplications were screened manually, and the entire dataset was verified by independent staff.

We originally hoped to list incidents since the Beirut bombings of 1983, but given the less rigorous methodology of GTD1, the limited archival coverage of Nexis prior to 1996, and the limitations of other databases, we decided to drop 1983 through 1996. Because the coverage of these databases has evolved, we had to employ three different search strategies to cover the years from 1997 to 2007.

Time period: 1997. Primary search database: Global Terrorism Database (GTD1). Secondary search database: Nexis’ Major World Newspapers. By 1997, Nexis sources were sufficiently developed and available online to augment GTD1, which did not list supporting sources.

Time period: 1998-2004. Primary search database: Global Terrorism Database (GTD2). GTD2 incorporates OSC and Nexis in a systematic manner, and additional searches of these sources were not necessary. 
Time period: 2005-present. Primary search database: Nexis’ Major World Newspapers. Secondary search database: individual newswires databases in Dialog. Third search database: Open Source Center.

For our study, we searched GTD2 for attacks utilizing or directed against a commercial vehicle (truck, bus, or bus station or stand), specifically with explosives (VBIEDs, IEDs, suicide bombers, bombs, grenades, roadside bombs, landmines, and rockets). When searching Nexis, we used the same search factors but with a Boolean search string. For years in our study outside the GTD year range, we duplicated GTD’s search and inclusion methodology. As a final check, we compared our results with Department of State and Department of Defense terrorism lists and timelines. We believe that these various steps successfully mitigated the threats to validity and enabled us to compile information on the incidents of truck and bus bombings since 1997 with confidence. The results of our search are summarized in figure 2 and detailed in table 3 below. Some additional trends are summarized in the figures below. Truck and bus bombings are compared in figure 6, which shows that while bus bombings have historically been more common, the incidence of truck bombings sharply increased after 2004 and peaked in 2007. Figure 7 shows that the sharp increase in bombing deaths in 2007 was due to the increase in truck bombings. We counted only incidents involving noncombatants; most of the sharp rise in deaths from truck and bus bombings in 2007 was due to bombings in Iraq. 
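The screening and aggregation rules described above (eliminating duplicate reports of one incident, using the lower reported death toll when reports conflict, and tallying incidents and deaths by year and vehicle type) can be sketched in a few lines. The record layout and values below are hypothetical, not the actual GTD schema or data:

```python
from collections import defaultdict

# Hypothetical incident reports: (date, location, vehicle type, reported
# deaths). The layout and figures are illustrative only.
reports = [
    ("2007-02-03", "Baghdad", "truck", 135),
    ("2007-02-03", "Baghdad", "truck", 130),  # conflicting death toll
    ("2007-06-17", "Kabul", "bus", 35),
]

# Screen out duplicates: keep one record per (date, location, vehicle)
# key, taking the lower reported death toll when reports conflict.
incidents = {}
for date, loc, vehicle, killed in reports:
    key = (date, loc, vehicle)
    incidents[key] = min(killed, incidents.get(key, killed))

# Aggregate incident counts and deaths by year and vehicle type, as in
# the yearly truck-vs.-bus trend comparisons.
counts = defaultdict(int)
deaths = defaultdict(int)
for (date, loc, vehicle), killed in incidents.items():
    year = date[:4]
    counts[(year, vehicle)] += 1
    deaths[(year, vehicle)] += killed

print(counts[("2007", "truck")], deaths[("2007", "truck")])  # 1 130
```

In this sketch the two Baghdad reports collapse into a single incident with the lower death toll, mirroring the decision rule used in the analysis.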
American Chemistry Council (ACC) American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) American Trucking Associations (ATA) Chlorine Institute (CI) International Brotherhood of Teamsters (IBT) National Private Truck Council (NPTC) National Tank Truck Carriers (NTTC) Owner-Operator Independent Drivers Association (OOIDA) Truck Manufacturers Association (TMA) Truck Rental and Leasing Association (TRALA) United Motorcoach Association (UMA) In addition to Corporate Security Reviews, TSA and DHS have four key programs designed to strengthen the security of the commercial vehicle industry. DOT also has four programs underway to strengthen commercial vehicle security, and TSA and DOT are working collaboratively on several projects for securing commercial vehicles. Each of these programs and projects is discussed below. Trucking Security Program: The Trucking Security Program (TSP) provides grants that fund programs to train and support members of the commercial vehicle industry in how to detect and report security threats and how to avoid becoming a target for terrorist activity. TSP is administered by the Federal Emergency Management Agency’s Grant Programs Directorate within DHS. As of May 2008, DHS had provided nearly $78 million in TSP grants since 2003. Congress appropriated $16 million to fund this trucking security grant program for fiscal year 2008 and $8 million for fiscal year 2009. For fiscal years 2004-2008, the principal activity funded by the TSP was the American Trucking Associations’ Highway Watch program to improve security awareness in the commercial vehicle industry. In May 2008, however, a new grantee, the HMS Company of Alexandria, Virginia, was selected. Security Action Items (SAIs): TSA consulted with DOT and industry stakeholders to develop SAIs, or voluntary security practices, intended to improve security for trucks carrying security-sensitive hazardous materials. 
TSA eventually plans to develop SAIs for motor coaches and school buses as well. According to TSA officials, the SAIs will allow TSA to communicate the key elements of effective transportation security as voluntary practices; TSA officials will use CSRs to gauge whether voluntary practices are sufficient or whether regulation is needed. Hazardous Materials Driver Background Check Program: A Hazardous Materials Endorsement (HME) authorizes an individual to transport hazardous materials for commerce. The USA PATRIOT Act, enacted in October 2001, prohibits states from issuing HMEs for a commercial driver’s license to applicants who have not successfully completed background checks. In response, TSA implemented the hazardous materials driver security threat assessment program, which evaluates a hazardous materials driver’s criminal history, immigration status, mental capacity, and connection with terrorism to determine whether that driver poses a security risk. Intercity Bus Security Grant Program: This DHS program distributes grant money to eligible stakeholders to protect intercity bus systems and the traveling public from terrorism. Current priorities focus on enhanced security planning, passenger and baggage screening programs, facility security enhancements, vehicle and driver protection, and training and exercises. A total of $11.5 million was appropriated for fiscal year 2008 and $12 million for fiscal year 2009. Security Plans and Training: DOT regulations require shippers and carriers of certain hazardous materials to develop and implement security plans. The regulations permit a company to implement a security plan tailored to its specific circumstances and operations. At a minimum, a security plan must address personnel, access, and en route security. All shippers and carriers must also ensure that employee training includes a security awareness component. 
In response to an industry petition that certain hazardous materials posing little or no security risk be removed from the list of hazardous materials for which security plans are required, DOT is reevaluating the security plan regulations. Security Contact Reviews (SCRs): Through its SCRs, FMCSA conducts compliance reviews of the security plans for hazardous materials transport required by DOT hazardous materials regulations. FMCSA conducts SCRs on all hazardous materials motor carriers that transport placardable amounts of hazardous materials. As of September 2008, FMCSA had conducted 7,802 SCRs since the program’s inception. Hazardous Materials Safety Permit Program: Federal law directed FMCSA to implement the hazardous materials permit program to produce a safe and secure environment for transporting certain types of hazardous materials. The program requires certain motor carriers to maintain a security program and establish a system of en route communication. This program uses the SCRs to collect data on motor carriers’ ability to secure hazardous materials. Sensitive Security Visits (SSVs): FMCSA conducts SSVs as educational security discussions with motor carriers that carry small amounts of hazardous materials that do not require posting hazardous materials placards on their trucks. These visits cover best practices for hazardous materials transportation and provide informal suggestions for improvement. As of September 2008, FMCSA had conducted 13,411 SSVs since the program’s inception. TSA Missouri CSR Pilot: This pilot program conducts abbreviated CSRs of trucking and motor coach companies using state inspectors. For more details of the Missouri CSR program, see pages 26-31. FMCSA and TSA Truck Tracking Security Pilots: FMCSA and TSA have concluded hazardous materials truck-tracking pilots. 
FMCSA completed a study of existing technologies in December 2004, evaluating wireless communications systems, including global positioning system (GPS) tracking and other technologies that allow companies to monitor the location of their trucks and buses. TSA also tested tracking and identification systems, theft detection and alert systems, motor vehicle disabling systems, and systems to prevent unauthorized operation of trucks and unauthorized access to their cargoes. The 9/11 Commission Act mandated that the Secretary develop a tracking program for motor carrier shipments of hazardous materials by February 2008. TSA officials reported that they worked with DOT to meet this mandate and completed a program to facilitate truck tracking on January 10, 2008. Hazardous Materials Research Involving Security Initiatives: DOT and DHS sponsor research on emerging technology that could potentially be used to enhance the safety and security of hazardous materials transportation. This research involves evaluation of potential truck-disabling technologies, radiation detection devices, hazardous materials routing, and software to assist in hazardous materials incident response. Additional Programs: DHS and TSA also have a number of smaller programs to augment motor carrier security, as well as programs in the planning stages. TSA has several projects on screening applicants for Commercial Drivers Licenses (CDLs) and Hazardous Materials Endorsements on CDLs. These include the Universal CDL Vetting Project, which will assess the feasibility of implementing watch list checks of 9 million commercial driver records. Through the Rental Truck Vetting Operational Study and Analysis, TSA is assessing technologies to screen rental truck customers against the DHS and FBI Watch List. 
To address the lack of security-related domain awareness, TSA and DHS have also developed several projects: the Federal Law Enforcement Training Center (FLETC) Roadside Law Enforcement Transportation Security Awareness project and the Hazmat Motor Carrier Security Self-Assessment Training Project, which distributed security self-assessment training on CDs to approximately 75,000 hazardous materials motor carriers and shippers. Through the Commercial Truck Insurance Initiative, TSA is coordinating with insurance companies to develop methods and measures to give companies incentives to improve security. DOT maintains data on carriers and commercial vehicles registered with DOT. However, the data on intrastate operations are incomplete and unreliable. First, FMCSA does not have authority to regulate intrastate operations that are not involved in the transport of hazardous materials; firms that operate exclusively within a single state do not have to register with DOT unless they transport hazardous materials or are in the 25 states that require all commercial vehicles to register with DOT. This means that DOT does not have data on approximately half the nation’s intrastate carriers. Second, firms frequently do not keep their registrations current; as a result, the currency and accuracy of DOT’s records are not assured, and many of its registrations are inactive. “Inactive” means that a carrier had no inspections, crashes, enforcement actions, compliance reviews, safety audits, or registration applications with DOT for 3 years. DOT does not know which firms have gone out of business and which have simply failed to maintain their registrations. These incomplete data on the population of commercial vehicle firms will present additional challenges to TSA in drawing a truly representative sample for industry assessments. 
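The 3-year inactivity rule can be expressed as a rough sketch. The record structure and field names below are hypothetical, for illustration only; DOT's actual systems are not described in this report.

```python
from datetime import date

# Activity categories corresponding to the events DOT tracks; the
# field names here are hypothetical, for illustration only.
ACTIVITY_FIELDS = [
    "last_inspection", "last_crash", "last_enforcement_action",
    "last_compliance_review", "last_safety_audit",
    "last_registration_application",
]

def is_inactive(carrier: dict, as_of: date, years: int = 3) -> bool:
    """A registration is inactive if no recorded activity falls within
    the last `years` years (a missing field counts as no activity)."""
    cutoff = date(as_of.year - years, as_of.month, as_of.day)
    return all(
        carrier.get(field) is None or carrier[field] < cutoff
        for field in ACTIVITY_FIELDS
    )

# Example: a carrier last inspected in 2004 is inactive as of late 2008.
print(is_inactive({"last_inspection": date(2004, 5, 1)}, date(2008, 9, 1)))  # True
```

A test of this form can only flag registrations as inactive; as the report notes, it cannot distinguish firms that have gone out of business from firms that simply stopped updating their records.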
Glenn Davis and Robert White, Assistant Directors, and Dan Rodriguez and Jason Schwartz, Analysts-in-Charge, worked with Cathleen Berrick to manage this assignment. Gary Malavenda made significant contributions to many aspects of the work. Tracey King provided legal and regulatory support. Shamia Woods analyzed federal, state, and industry actions. Jennifer Cooper analyzed TSA’s cooperation efforts. Elizabeth Curda provided assistance on performance measurement and collaboration. Anish Bhatt and Joanna Berry helped in the design, methodology, and pilot test of the incidents of bus and truck bombings. Colleen Candrl helped in the design and conducted the searches on the incidents of bus and truck bombings. Evan Gilman, Virginia Chanley, and Anna Maria Ortiz provided additional design and methodological support.
Numerous incidents around the world have highlighted the vulnerability of commercial vehicles to terrorist acts. Commercial vehicles include over 1 million highly diverse truck and intercity bus firms. Within the Department of Homeland Security (DHS), the Transportation Security Administration (TSA) has primary federal responsibility for ensuring the security of the commercial vehicle sector, while vehicle operators are responsible for implementing security measures for their firms. GAO was asked to examine: (1) the extent to which TSA has assessed security risks for commercial vehicles; (2) actions taken by key stakeholders to mitigate identified risks; and (3) TSA efforts to coordinate its security strategy with other federal, state, and private sector stakeholders. GAO reviewed TSA plans, assessments, and other documents; visited a nonrandom sample of 26 commercial truck and bus companies of varying sizes, locations, and types of operations; and interviewed TSA and other federal and state officials and industry representatives. TSA has taken actions to evaluate the security risks associated with the commercial vehicle sector, including assessing threats and initiating vulnerability assessments, but more work remains to fully gauge security risks. Risk assessment uses a combined analysis of threat, vulnerability, and consequence to estimate the likelihood of terrorist attacks and the severity of their impact. TSA conducted threat assessments of the commercial vehicle sector and has also cosponsored a vulnerability assessment pilot program in Missouri. However, TSA's threat assessments generally have not identified the likelihood of specific threats, as required by DHS policy. TSA has also not determined the scope, method, and time frame for completing vulnerability assessments of the commercial vehicle sector. In addition, TSA has not conducted consequence assessments, or leveraged the consequence assessments of other sectors. 
As a result of limitations with its threat, vulnerability, and consequence assessments, TSA cannot be sure that its approach for securing the commercial vehicle sector addresses the highest priority security needs. Moreover, TSA has not developed a plan or time frame to complete a risk assessment of the sector. Nor has TSA completed a report on commercial trucking security as required by the Implementing Recommendations of the 9/11 Commission Act (9/11 Commission Act). Key government and industry stakeholders have taken actions to strengthen the security of commercial vehicles, but TSA has not assessed the effectiveness of federal programs. TSA and the Department of Transportation (DOT) have implemented programs to strengthen security, particularly those emphasizing the protection of hazardous materials. States have also worked collaboratively to strengthen commercial vehicle security through their transportation and law enforcement officials' associations, and the establishment of fusion centers. TSA also has begun developing and using performance measures to monitor the progress of its program activities to secure the commercial vehicle sector, but has not developed measures to assess the effectiveness of these actions in mitigating security risks. Without such information, TSA will be limited in its ability to measure its success in enhancing commercial vehicle security. While TSA has also taken actions to improve coordination with federal, state, and industry stakeholders, more can be done to ensure that these coordination efforts enhance security for the sector. TSA signed joint agreements with DOT and supported the establishment of intergovernmental and industry councils to strengthen collaboration. TSA and DOT completed an agreement to avoid duplication of effort as required by the 9/11 Commission Act. 
However, some state and industry officials GAO interviewed reported that TSA had not clearly defined stakeholder roles and responsibilities consistent with leading practices for collaborating agencies. TSA has not developed a means to monitor and assess the effectiveness of its coordination efforts. Without enhanced coordination with the states, TSA will have difficulty expanding its vulnerability assessments.
Programs providing financial assistance to entrepreneurs are fragmented—which occurs when more than one agency or program is involved in the same broad area of national interest. Of the 30 financial assistance programs we reviewed, 16 can provide or guarantee loans that can be used for a broad range of purposes by existing businesses and nascent entrepreneurs in any industry. Examples of programs in this category include SBA’s 7(a) Loan Program and USDA’s Business and Industry Loans. Other programs can support loans for a narrower range of purposes or industries or can support only other types of financial assistance, such as grants, equity investments, and surety guarantees. In addition, a number of programs overlap based on the characteristics of the targeted beneficiary. Entrepreneurs may fall into more than one beneficiary category—for example, an entrepreneur may be in an area that is both rural and economically distressed. Such entrepreneurs may be eligible for multiple subsets of financial assistance programs based on their specific characteristics. For example, a small business in a rural, economically distressed area, such as Bourbon County, Kansas, could, in terms of authority, receive financial assistance in the form of guaranteed or direct loans for a broad range of uses through multiple programs at the four agencies, including Commerce’s Economic Adjustment Assistance; HUD’s Community Development Block Grant (CDBG)/States; SBA’s 7(a) Loan Program and Small Business Investment Companies; and USDA’s Business and Industry Loans and Rural Business Enterprise Grants. While many programs overlap in terms of statutory authority, entrepreneurs may in reality have fewer options to access assistance from multiple programs. 
Agencies often rely on intermediaries (that is, third-party entities such as nonprofit organizations, higher education institutions, or local governments that use federal grants to provide eligible assistance directly to entrepreneurs) to provide specific support to entrepreneurs, and these intermediaries vary in terms of their location and the types of assistance they provide. Some programs distribute funding through multiple layers of intermediaries before it reaches entrepreneurs or may competitively award grants to multiple intermediaries working jointly in the same community to serve entrepreneurs. For example, Commerce’s Economic Adjustment Assistance program can provide grants to intermediaries, such as consortiums of local governments and nonprofits, which in turn provide technical or financial assistance to entrepreneurs. Although we identified a number of examples of statutory overlap, we did not find evidence of duplication among these programs (that is, instances when two or more agencies or programs are engaged in the same activities to provide the same services to the same beneficiaries) based on available data. However, as discussed later, most agencies were not able to provide the programmatic information, such as data on users of the program, that is necessary to determine whether or not duplication actually exists among the programs. In our 2012 report, we examined entrepreneurs’ experiences with the four agencies’ technical assistance programs—which provide services such as helping with development of business plans or a loan package to obtain financing—and found that some struggle to navigate the fragmented programs. For example, some entrepreneurs and various technical assistance providers with whom we spoke—including agency field offices, intermediaries, and other local service providers—told us that the system can be confusing and that some entrepreneurs do not know what services are available or where to go for assistance. 
Technical assistance providers sometimes attempt to help entrepreneurs navigate the system by referring them to other programs, but these efforts are not consistently successful. In addition, programs’ Internet resources can also be difficult to navigate. Each agency has its own separate website that provides information to entrepreneurs, but they often direct entrepreneurs to other websites for additional information. SBA, Commerce, USDA, and other agencies have collaborated to develop a joint website, called BusinessUSA, with the goal of making it easier for businesses to access services. Some technical assistance providers and entrepreneurs we spoke with suggested that a single source to help entrepreneurs quickly find information instead of sorting through different websites would be helpful. Given the fragmented nature of the federal programs that provide financial assistance to entrepreneurs, enhanced collaboration between agencies could help improve program efficiency. In prior work we identified practices that can help to enhance and sustain collaboration among federal agencies, which can help to maximize performance and results, and we have recommended that the agencies follow them. These collaborative practices include identifying common outcomes, establishing joint strategies, leveraging resources, determining roles and responsibilities, and developing compatible policies and procedures. In addition, GPRAMA’s crosscutting framework requires that agencies collaborate in order to address issues, such as economic development, that transcend more than one agency, and GPRAMA directs agencies to describe how they are working with each other to achieve their program goals. While most of the agencies at the headquarters level have agreed to work together by signing formal agreements to administer some of their similar programs, they have not implemented a number of other good collaborative practices we have previously identified. 
For example, SBA and USDA entered into a formal agreement in April 2010 to coordinate their efforts aimed at supporting businesses in rural areas. USDA’s most recent survey of state directors indicates strong collaboration in several areas, including field offices advising borrowers of SBA’s programs, referring borrowers to SBA and its resource partners, and exploring ways to make USDA and SBA programs more complementary. However, the agencies have not implemented other good collaborative practices, such as establishing compatible policies and procedures to better support rural businesses. While the four agencies collect at least some information on program activities, either in an electronic records system or in paper files, most were unable to summarize the information in a way that could be used to help administer the programs. Similarly, the agencies typically do not track detailed information on the characteristics of the entrepreneurs they serve, such as whether they are located in rural or economically distressed areas or the entrepreneurs’ type of industry. According to OMB, being able to track and measure specific program data can help agencies diagnose problems, identify drivers of future performance, evaluate risk, support collaboration, and inform follow-up actions. Analyses of patterns and anomalies in program information can also help agencies discover ways to achieve more value for the taxpayers’ money. In addition, agencies can use this information to assess whether their specific program activities are contributing as planned to the agency goals. Promising practices of program administration include a strong capacity to collect and analyze accurate, useful, and timely data. Table 1 summarizes the type of information that agencies maintain in a readily available format that could be tracked to help administer the financial assistance programs we reviewed. 
For example, USDA collects detailed information (19 categories) on how entrepreneurs use proceeds, such as for working capital, provided through five of its financial assistance programs. USDA maintains this information in an electronic database, and officials stated that they can provide this type of detailed information upon request. We also found that for fiscal year 2011, a number of programs that support entrepreneurs failed to meet some or all of their performance goals. GPRAMA requires agencies to develop annual performance plans that include performance goals for an agency’s program activities and accompanying performance measures. According to GPRAMA, these performance goals should be in a quantifiable and measurable form to define the level of performance to be achieved for program activities each year. Leading organizations recognize that performance measures can create powerful incentives to influence organizational and individual behavior. Some of their good practices include setting and measuring performance goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers crucial information on which to base their organizational and management decisions. Further, from 2000 through 2012, the agencies conducted program evaluations of 13 of the 30 financial assistance programs we reviewed that support entrepreneurs. Based on our review, we found that SBA has conducted program evaluation studies on 5 of its 10 programs. We also found that USDA has evaluated 1 of its 8 financial assistance programs, but the study did not address the extent to which the program was achieving its mission. Although GPRAMA does not require agencies to conduct formal program evaluations, it does require agencies to describe program evaluations that were used to establish or revise strategic goals, as well as program evaluations they plan to conduct in the future. 
Additionally, while not required to do so, agencies can use periodic program evaluations to complement ongoing performance measurement. Program evaluations that systematically study the benefits of programs may help identify the extent to which overlapping and fragmented programs are achieving their objectives. In addition, program evaluations can help agencies determine reasons why a performance goal was not met and give an agency direction on how to improve program performance. Since our August 2012 report we have also evaluated certain SBA financial assistance programs. For example, in September 2013 we reported on a pilot initiative within SBA’s 7(a) loan guarantee program, the Patriot Express Pilot Loan Program, which provided small businesses owned and operated by veterans and other eligible members of the military community access to capital. We found that SBA did not establish measurable goals for the pilot and did not evaluate the effects of this pilot; such an evaluation would have allowed SBA to assess whether program operations had resulted in the desired benefits and, for pilots, to determine whether to make the programs permanent. In this report, we made two additional recommendations pertaining to program evaluation. SBA said it would consider the findings as it reviewed extending the pilot program. Subsequently, SBA discontinued the Patriot Express Pilot Program as of December 31, 2013, but announced a temporary program, the SBA Veterans Advantage Program, to serve veteran-owned small businesses. 
To address the issues identified in our August 2012 report and to help improve the efficiency and effectiveness of federal efforts to support entrepreneurs, we made the following recommendations: The Director of the Office of Management and Budget; the Secretaries of the Departments of Agriculture, Commerce, and Housing and Urban Development; and the Administrator of the Small Business Administration should work together to identify opportunities to enhance collaboration among programs, both within and across agencies. The Secretaries of the Departments of Agriculture, Commerce, and Housing and Urban Development and the Administrator of the Small Business Administration should consistently collect information that would enable them to track the specific types of assistance their programs provide and the entrepreneurs they serve and use this information to help administer their programs. The Secretaries of the Departments of Agriculture, Commerce, and Housing and Urban Development and the Administrator of the Small Business Administration should conduct more program evaluations to better understand why programs have not met performance goals and the programs' overall effectiveness. The agencies, together with the administration, have taken some steps to address our recommendations. For example, the administration has initiated steps that provide the agencies with a mechanism to work together to identify opportunities to enhance collaboration among programs. In particular, it introduced a Cross-Agency Priority goal to increase services to entrepreneurs and small businesses in the President's fiscal year 2013 budget submission. One of the objectives under this goal is to use programs and resources across the federal government to improve and expand the reach of training, counseling, and mentoring services to entrepreneurs and small business owners. 
In 2012, the administration established an interagency group (including Commerce, SBA, USDA, and others) that aims to streamline existing programs, improve cooperation among and within agencies, ease entrepreneurs’ access to the programs, and increase data-based evaluation of program performance. According to the third quarter fiscal year 2013 status update on the administration’s Cross-Agency Priority goal for small business and entrepreneurship, the working group was to create an interagency evaluation framework in the fourth quarter of fiscal year 2013 to measure the impacts of coordinating funding streams through cluster initiatives. It will be important for the interagency group to follow through on developing an evaluation framework, including metrics, to ensure that the programs are delivering assistance to entrepreneurs efficiently and effectively. In addition, in November 2013, OMB noted that an interagency group meets monthly to discuss individual agency efforts and identify key areas for improved interagency coordination for the BusinessUSA website. It will be important for the interagency group to follow through on any key areas identified to improve coordination among agencies. In addition, the four agencies have completed actions or have actions underway that are intended to improve data collected on program performance. In November 2013, USDA noted that the department’s Rural Business Services completed three initiatives in fiscal year 2013 to improve the quality of performance measurement, including a project to improve the integrity of data the agency uses to compile program performance measures. In November 2013, HUD noted that the department had undertaken a series of actions to improve the quality of data on the department’s Community Development Block Grant (CDBG) funded activities, including economic development activities. 
HUD’s efforts include an extensive clean-up of CDBG data, which the department expects to complete by the end of the second quarter of fiscal year 2014. In February 2013, SBA noted that the agency had undertaken a modernization project for its resource partner data collection system to enhance current data fields, improve budget and performance integration capabilities, and expand reporting capabilities. In October 2012, Commerce’s Economic Development Administration (EDA) noted that it had recently partnered with two universities to develop a comprehensive set of performance measures that can be used to evaluate the effectiveness of its programs. Going forward, we will continue to obtain updates on the agencies’ progress. We will report on the actions taken by the agencies as we do for other areas included in our mandated work addressing federal programs with fragmentation, overlap, and duplication. We look forward to continuing to work with the agencies as well as this and other congressional committees in addressing ways to assist entrepreneurs in the most effective and efficient manner. Chairman Tipton and Ranking Member Murphy, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact me at (202) 512- 8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Marshall Hamlett, Assistant Director; Catherine Gelb; John McGrail; and Jennifer Schwartz. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Economic development programs that effectively provide assistance to entrepreneurs may help businesses develop and expand. In August 2012, GAO reported information on 52 programs at Commerce, HUD, SBA, and USDA that provided $2.0 billion in support to entrepreneurs in fiscal year 2011 (GAO-12-819). Of these 52 programs, 30, distributed across the four agencies, can provide financial assistance in the form of grants and loans. Inefficiencies in the administration of these programs could compromise the government's ability to effectively provide services and meet the shared goals of the programs. This testimony discusses (1) the extent of overlap, fragmentation, and duplication among these programs and the extent to which programs collaborate and (2) the extent to which agencies collect information necessary to track program activities and whether these programs have met their performance goals and have been evaluated. This testimony is based on GAO's August 2012 report and provides information on the agencies' actions to address recommendations GAO made in that report. Federal programs GAO reviewed that offer financial support to entrepreneurs, such as grants and loans, are fragmented and overlap based on the type of support they are authorized to offer and the type of entrepreneur they are authorized to serve. The Departments of Commerce (Commerce), Housing and Urban Development (HUD), and Agriculture (USDA); the Small Business Administration (SBA); and the Office of Management and Budget (OMB) have taken steps to collaborate more in administering these programs in response to a recommendation in GAO's August 2012 report. For example, OMB has established a Cross-Agency Priority goal for entrepreneurship and small business and an associated interagency working group. 
However, the four agencies have not implemented a number of good collaborative practices GAO has identified, such as establishing compatible policies and procedures to better support rural businesses. The Government Performance and Results Act Modernization Act of 2010 (GPRAMA) crosscutting framework requires that agencies collaborate in order to address issues such as economic development that transcend more than one agency, and GPRAMA directs agencies to describe how they are working with each other to achieve their program goals. Some entrepreneurs struggle to navigate the fragmented programs that provide technical assistance in the form of training and counseling. This difficulty can in turn affect referrals to other programs, including financial assistance programs. For example, some entrepreneurs and technical assistance providers GAO spoke with said the system can be confusing and that some entrepreneurs do not know where to go for technical assistance. Collaboration could reduce some negative effects of overlap and fragmentation, but field staff GAO spoke with did not consistently collaborate to provide training and counseling services to entrepreneurs. Without enhanced collaboration and coordination, agencies may not be able to use limited federal resources in the most effective and efficient manner and entrepreneurs may struggle to navigate these fragmented programs. While the four agencies collect at least some information on entrepreneurial assistance program activities, they do not track such information for many programs, a practice that is not consistent with government standards for internal controls. They typically do not track detailed information on the characteristics of entrepreneurs that they serve, such as whether they are located in rural or economically distressed areas or the entrepreneurs' type of industry. 
In addition, GAO found that from 2000 through 2012, the four agencies conducted program evaluations of 13 of the 30 financial assistance programs reviewed. GPRAMA requires agencies to set and measure annual performance goals and recognizes the value of program evaluations because they can help agencies assess programs' effectiveness and improve program performance. Without more robust program information, agencies may not be able to administer programs in the most effective and efficient manner, and scarce resources may be going toward programs that are less effective. In August 2012, GAO recommended that the four agencies and OMB explore opportunities to enhance collaboration among programs and that the four agencies track program information and conduct more program evaluations. The agencies neither agreed nor disagreed with the recommendations but did provide information on their plans to address them.
The International Space Station program began in 1993 with several partner countries: Canada, the 11 member nations of the European Space Agency (ESA), Japan, and Russia. The ISS has served and is intended to expand its service as a laboratory for exploring basic questions in a variety of fields, including commercial, scientific, and engineering research. The first assembly flight of the station, in which the space shuttle Endeavour attached the U.S.-built Unity node to the Russian-built Zarya module, occurred in early December of 1998. However, since the program's inception, NASA has struggled with cost growth, schedule delays, and redesigns of the station. As we reported in the past, these challenges were largely due to poorly defined requirements, changes in program content, and inadequate program oversight. Due to these challenges, the configuration of the station has been scaled back over time. In the spring of 2001, NASA announced that it would make major changes in the final configuration of the ISS to address cost overruns. In 2003, the National Academies reported that this reconfiguration greatly affected the overall ability of the ISS to support science. NASA estimates that assembly and operating costs of the ISS will be between $2.1 billion and $2.4 billion annually for fiscal years 2009 through 2012. As of February 19, 2008, the ISS is approximately 65 percent complete. The shuttle program and the ISS program are inherently intertwined. The shuttle has unique capabilities in that it can lift and return more cargo to and from orbit than any other current or planned space vehicle. Figure 1 shows the capabilities of the shuttle in various configurations. Most segments of the ISS cannot be delivered by any other vehicle. For example, the Columbia disaster in 2003 put ISS assembly on hiatus as NASA ceased shuttle launches for 2½ years while it investigated the safety of the fleet. 
During this period, the Russian Soyuz became the means of transportation for crewmembers traveling to and returning from the ISS. In a major space policy address on January 14, 2004, President Bush announced his "Vision for U.S. Space Exploration" (Vision) and directed NASA to focus its future human space exploration activities on a return to the Moon as a prelude to future human missions to Mars and beyond. As part of the Vision, NASA is developing new crew and cargo vehicles, with the first crew vehicle currently scheduled to be available in 2015. The President also directed NASA to retire the space shuttle after completion of the ISS, which is planned for the end of the decade. Based on that directive, NASA officials told us that they developed a manifest consisting of 17 shuttle launches to support ISS assembly and supply between 2005 and 2010. Nine of these have taken place. In response to the President's Vision, NASA formally set September 30, 2010, as the date that the shuttle program will cease because agency officials believe that continuing the program beyond that date will slow development of the agency's new vehicles—specifically, the agency's budget cannot support both programs, which would cost $2.5 billion to $4 billion above current budget levels. As shown in Table 1, the shuttle program costs NASA several billion dollars annually, and projected funding is phased out in fiscal year 2011. NASA officials stated that the majority of shuttle program cost is fixed at roughly $3 billion a year whether the shuttle flies or not. NASA officials stated that the average cost per flight is $150 million to $200 million. The 2005 NASA Authorization Act designated the U.S. segment of the ISS as a national laboratory and directed NASA to develop a plan to increase the utilization of the ISS by other federal entities and the private sector. In response, NASA has been pursuing relationships with these entities. 
NASA expects that as the nation's newest national laboratory, the ISS will strengthen NASA's relationships with other federal entities and private sector leaders in the pursuit of national priorities for the advancement of science, technology, engineering, and mathematics. The ISS National Laboratory is also intended to open new paths for the exploration and economic development of space. It will be a challenge for NASA to complete the space station by 2010 given the compressed nature of the schedule, maintenance and safety concerns, as well as events beyond its control such as weather. Any of these factors can cause delays that may require NASA to re-evaluate and reconstitute the assembly sequence. NASA remains confident that the current manifest can be accomplished within the given time, and there are tradeoffs NASA can make in terms of what it can take up to support and sustain the station should unanticipated delays occur. However, failure to complete assembly as currently planned would further reduce the station's ability to fulfill its research objectives and leave the station short of critical spare parts that only the shuttle can currently deliver. In our July 2007 testimony, we reported that NASA planned to launch a shuttle once every 2.7 months. The plan for launches remains aggressive, partly because NASA plans on completing the ISS with the last assembly mission in April 2010, with two contingency flights in February and July 2010 to deliver key replacement units. The 5 months between the last assembly launch and shuttle retirement in September 2010 act as a schedule reserve, which can be used to address delays. There are eight shuttle flights left to complete the station and two contingency flights left to deliver key components necessary to sustain the ISS after the retirement of the shuttle. There is an average of 2½ months between each shuttle launch. Table 2 shows the current shuttle manifest. NASA has launched shuttles at this rate in the past. 
In fact, the agency launched a shuttle, on average, every two months from 1992 through the Columbia disaster in 2003. However, at that time the agency was launching a fleet of four shuttles. The shuttles require maintenance and refurbishing that can last four to five months before they can be re-launched. Launching at such a rate means that the rotation schedule can handle few significant delays, such as those previously experienced due to weather and fuel sensor difficulties. Lastly, NASA officials said that the shuttle Atlantis, which was to go out of service after the Hubble mission, will return to servicing the ISS for two more flights, which NASA believes will add more schedule flexibility. NASA officials stated repeatedly that NASA is committed to safely flying the shuttle until its retirement and will not succumb to schedule pressure. However, the compressed nature of the manifest will continue to test that commitment. Fuel sensor challenges continue to surface in the shuttle fleet. For example, the recent shuttle Atlantis launch was delayed two months while NASA addressed a fuel sensor problem associated with the shuttle's liquid hydrogen tank. This is the same system that caused a 2-week delay in the launch of the shuttle Discovery in 2005. There are also challenges associated with the shuttle launch window. NASA officials told us that the duration of that window is dependent on a number of factors, which include changes in the position of the earth and spacecraft traffic restrictions. NASA must consider its traffic model constraints for vehicles docking at the space station. According to the traffic model for the ISS, no other vehicle can dock while the shuttle is docked, and each vehicle has constraints on how long it can stay docked. For example, the shuttle can dock for a maximum of 10 days, while the Soyuz can dock for a maximum of 200 days. The docking of these two vehicles must be coordinated and meet other technical restrictions. 
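As a rough check on how tight this schedule is, the flight counts and launch spacing quoted above can be run through some simple arithmetic. The sketch below is ours, not NASA's; only the figures come from this statement.

```python
# Back-of-the-envelope check on the remaining shuttle manifest.
# Figures are taken from the testimony; the arithmetic is illustrative only.

flights_remaining = 8 + 2            # 8 assembly flights + 2 contingency flights
avg_gap_months = 2.5                 # average spacing between launches

# Flying 10 missions requires 9 gaps between consecutive launches.
months_of_launch_activity = (flights_remaining - 1) * avg_gap_months
print(months_of_launch_activity)     # 22.5 months of launch activity

# The last assembly flight is planned for April 2010 and retirement for
# September 30, 2010, leaving roughly 5 months of schedule reserve.
# A single severe-weather delay, like the roughly 3-month 2007 hailstorm
# delay, would consume most of that reserve.
schedule_reserve_months = 5
hailstorm_delay_months = 3
print(schedule_reserve_months - hailstorm_delay_months)  # 2 months would remain
```

In other words, at the quoted cadence the remaining manifest fills nearly two years of launch activity, and the reserve absorbs at most one delay on the scale NASA has already experienced.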
In addition, the shuttle has experienced delays due to severe weather, such as when Atlantis's external tank was damaged by a hailstorm in 2007. In this case the delay was about three months. Figure 3 shows the delays in recent shuttle launches related to weather and other causes. The ISS is scheduled to support a six-person crew as early as 2009 and maintain that capability through 2016. NASA officials said that equipment essential to support a six-person crew, such as systems for recycling oxygen, removing carbon dioxide, and transforming urine into water, as well as an exercise machine, will be delivered to the station this fall. In addition, there are two components that have been planned to hold this and other equipment needed for the six-person crew, which are scheduled to go up in April 2010. If unanticipated delays occur, NASA may need to hold back these two components—known as Node 3 and the Cupola—which could constrain the ability to conduct research and the quality of life on the station for the crew. NASA officials emphasized that NASA's intent was to have most science conducted on the ISS only after assembly of the ISS was completed. The ISS currently supports three crewmembers. NASA stated that the majority of the crew's time is spent maintaining the station, rather than conducting scientific study. According to NASA, the crew spends no more than 3 hours per week on science. Completion of the ISS would allow NASA to expand to a six-person crew who could conduct more research. Since the ISS is designated as a national laboratory, the expectation is that it will support scientific experimentation. NASA is in the process of negotiating agreements with scientific organizations to support scientific research on the ISS. NASA officials told us that they are negotiating a Memorandum of Understanding with the National Institutes of Health to explore the possibility of scientific experimentation onboard the ISS. 
These officials also told us that NASA is in the process of negotiating with at least two other agencies. NASA’s efforts to complete the ISS are further complicated by the need to put replacement units—the spare parts that are essential to sustaining the ISS—into position before the shuttle retires. The two contingency flights of the shuttle have been designated to deliver these key replacement units, which only the shuttle is capable of carrying. According to NASA, the original approach to deal with these key components (also known as orbital replacement units—ORU) was to take the ones that failed or reached the end of their lifetime back to Earth on the shuttle, refurbish them and launch them back to ISS for use. As a result of the shuttle retirement, NASA will no longer be bringing down ORUs to fix. Instead, NASA officials stated they have adopted a “build and burn” philosophy, which means that after the shuttle retires, instead of being brought down to be refurbished, ORUs will be discarded and disintegrate upon re-entry into the atmosphere. To determine how many replacement units need to be positioned at the station, NASA officials told us they are using data modeling that has been very effective in determining how long ORUs will last. Table 3 illustrates the shuttle manifest. This includes elements needed for the planned configuration to complete the station and delivery of critical spares. NASA currently plans to use two contingency flights for these replacements because all other flights are planned with assembly cargo. Recently, the NASA Administrator publicly stated that these flights are considered necessary to sustain the ISS and have been scheduled to carry key spare units. In the event that NASA completes assembly of the ISS on schedule and prepositions an adequate number of critical spares, the agency still faces a myriad of challenges in sustaining the research facility until its retirement, currently planned for fiscal year 2016. 
Without the shuttle, NASA officials told us that they face a significant cargo supply shortfall and very limited crew rotation capabilities. NASA will rely on an assortment of vehicles in order to provide the necessary logistical support and crew rotation capabilities required by the station. Some of these vehicles are already supporting the station. Others are being developed by international partners, the commercial sector, and NASA. (See Figure 4) Furthermore, some of these transportation services may face legal restrictions, and still others face cost, schedule, and performance issues that raise serious questions about their development and utilization. These issues will challenge NASA’s ability to close the sustainment gap between the retirement of the shuttle in 2010 and the availability of the Crew Exploration Vehicle (CEV) in 2015. Failure of any or some of these efforts would also seriously restrict NASA’s options to sustain and maintain a viable space station. With the exception of the Shuttle and the recently completed demonstration flight of the ATV, the only vehicles currently capable of supporting the space station are the Russian Progress and Soyuz vehicles. NASA officials stated that both of these vehicles have provided reliable service to the ISS. From the Columbia disaster in 2003 until return to flight in 2005, the Russian vehicles were the sole source of logistical support and crew rotation capability for the station. The Progress provides atmospheric gas, propellant, water, and pressurized cargo. It also has the capability to use its thrusters to change the Station’s altitude and orientation. The Soyuz provides crew delivery and rescue capability for three crew members. Progress vehicles are expendable and offer no recoverable return capability, but provide important trash removal capabilities. Soyuz vehicles have a limited recoverable cargo capacity. 
However, some NASA officials have suggested that their limited capabilities restrict the capacity of the station to move to a six-member crew and significantly limit the scientific research because the vehicles cannot bring experiments back to earth for assessment. NASA currently purchases crew and cargo transport services from Russia through a contract with the Russian Federal Space Agency (Roscosmos). NASA officials told us that after the initial ISS contract between Roscosmos and NASA expired, NASA entered into another contract that runs through 2011. However, according to NASA, the Iran Nonproliferation Act of 2000 restricted certain payments in connection with the ISS that may be made to the Russian government. In 2005, NASA requested relief from the restrictions of the Act, and Congress amended the Act. Through this amendment, NASA and Roscosmos have negotiated quantities and prices for services through January 1, 2012. NASA officials anticipate the use of 4 Soyuz flights per year and approximately 6 Progress flights beginning in approximately 2010. While NASA officials stated that they are making every effort to limit the fees they pay for usage of Russian vehicles, they told us that they anticipate that from fiscal year 2009 to fiscal year 2012, NASA will spend $589 million on cargo and crew services from the Russians. NASA officials also told us that Roscosmos has suggested that it will charge NASA higher fees for usage of its vehicles. NASA has stated it will use its international partners' vehicles to conduct some supply activities. Specifically, Japan's Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV) and the European Space Agency's (ESA) Automated Transfer Vehicle (ATV) will be used for bringing up cargo. NASA's reliance on the ATV and HTV assumes that these vehicles will be ready to service the ISS by the time the shuttle stops flying in 2010. 
The new vehicles being developed by the European and Japanese space agencies are very complex. The ATV had a development timeline of 20 years. Its first operational test flight to the ISS was in March 2008. NASA has stated that both the European and Japanese vehicle development programs experienced technical hurdles and budgetary constraints, but are committed to fulfilling their roles as partners in the ISS program. NASA officials told us they have confidence the European vehicle will be available for ISS operations before retirement of the shuttle, but they are not as confident about the Japanese vehicle’s being ready by that time. The Japanese vehicle is still under development and has faced some setbacks. NASA officials told us that the HTV’s first test launch is planned for July 2009. In addition to potential development challenges, the international partner vehicles have constraints in terms of what they can take to and from the ISS in comparison to the shuttle. NASA’s current plans to manage the gap after the shuttle retirement do not take into account the possibility of delays in the development of these vehicles, and even if they do come on line on time, NASA officials estimate that there will be a significant shortfall to the ISS of at least 114,199 pounds (or 51.8 metric tons) in cargo re-supply capability. These vehicles were designed to augment the capabilities of the shuttle and have significantly less capability to deliver cargo to the ISS. The shuttle can carry a maximum cargo of close to 38,000 pounds (17,175 kg.). In comparison, the European ATV’s maximum capability is 16,535 pounds (7,500 kg.) and the Japanese HTV’s average capability is 13,228 pounds (6,000 kg.). The HTV and ATV are expendable vehicles. NASA can use them for trash removal, but cannot carry cargo or scientific experiments back to earth because the vehicles disintegrate when re-entering the atmosphere. The Russian Progress and Soyuz vehicles also have very limited cargo capacity. 
For example, the Progress has an average capability of 5,732 pounds (2,600 kg.)—roughly one-seventh the shuttle’s capability. The Progress, like the ATV and HTV, is an expendable vehicle. The Soyuz can transport three crew persons to the ISS and can serve as a rescue vehicle capable of taking three crew members back to earth. Unlike the ATV and HTV, the Soyuz does have the capacity to bring down cargo—roughly 132 pounds (60 kg.). NASA officials have stated that until NASA deploys its new crew exploration vehicles or commercial vehicles become available, NASA will be dependent on the Russian vehicles for crew transportation services and on the Japanese and European vehicles for limited cargo services whenever they become available. Figure 7 compares the up mass capabilities of the various vehicles. NASA is working with the commercial space sector through its Commercial Orbital Transportation Services (COTS) program to develop and produce vehicles that can take equipment and crew to and from the space station. NASA expects that these vehicles will be ready for cargo use in 2010 and crew use in 2012. However, these vehicles have yet to be successfully launched into orbit, and some NASA officials have acknowledged that their development schedules leave little room for the unexpected. Under the COTS program, NASA has pledged $500 million to promote commercial opportunities for space transportation vehicles. Using Space Act agreements instead of traditional contracting mechanisms, NASA will make payments to companies based on the achievement of key milestones during the development of their vehicles. These agreements are both funded and unfunded. For the two funded agreements that have been reached, NASA stated that the commercial suppliers for space transportation services will have customers outside of ISS, including NASA’s Constellation program, which plans to send humans back to the moon and eventually Mars. The COTS program will occur in phases. 
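The capacity figures above make the scale of the shortfall concrete. The following arithmetic sketch is ours, using only the numbers quoted in this statement; it expresses each vehicle's up-mass as a fraction of the shuttle's and converts the projected shortfall into an equivalent number of flights.

```python
# Cargo capacities in pounds, as quoted in this statement.
capacity_lb = {
    "shuttle": 38_000,      # maximum up-mass
    "ATV": 16_535,          # maximum up-mass
    "HTV": 13_228,          # average up-mass
    "Progress": 5_732,      # average up-mass
    "Soyuz (return)": 132,  # recoverable down-mass
}

# Each alternative vehicle as a fraction of the shuttle's up-mass.
for name in ("ATV", "HTV", "Progress"):
    fraction = capacity_lb[name] / capacity_lb["shuttle"]
    print(f"{name}: {fraction:.2f} of shuttle up-mass")
# Progress works out to about 0.15, i.e., "roughly one-seventh" as noted above.

# Number of flights of each vehicle needed to cover the projected
# 114,199-pound re-supply shortfall.
shortfall_lb = 114_199
for name in ("ATV", "HTV", "Progress"):
    print(f"{name}: {shortfall_lb / capacity_lb[name]:.1f} flights")
```

By this simple measure, covering the shortfall would take on the order of seven additional ATV flights or roughly twenty additional Progress flights, which illustrates why the loss of the shuttle's up-mass is so difficult to replace.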
In the first phase, companies will demonstrate vehicle launch and docking capabilities with the ISS. The second phase is the procurement of services for transportation of cargo and crew to the ISS, which is scheduled to begin sometime in the 2010 timeframe. NASA has entered into seven COTS agreements through the Space Act. NASA signed five unfunded Space Act agreements, which facilitate the sharing of technical and ISS integration information between commercial companies and NASA. NASA has funded two companies, Rocketplane Kistler (RpK) and Space Exploration Technologies (SpaceX). NASA officials stated that through the funded Space Act agreements, SpaceX has received $139 million for its project and is still working on successfully launching a vehicle that can reach low-Earth orbit. The company successfully completed a critical design review in August 2007 and told us that it is planning its first orbital demonstration test flight for June 2009. NASA officials told us that RpK received $37 million in funding, but then forfeited the remainder of its share because it did not meet certain financial development milestones. When NASA began to redistribute these forfeited funds, RpK filed a bid protest with GAO, which GAO denied. NASA officials then moved forward and awarded $170 million to Orbital Sciences Corporation in February 2008. NASA officials acted quickly to award the forfeited money and expect that SpaceX will have cargo capability available in 2010 (by the time the shuttle is retired) and crew capability in 2012. While SpaceX has been meeting key milestones in the development of its vehicle, some officials at the Johnson Space Center were skeptical that COTS would be available on the current projected schedule. 
Additionally, the International Space Station Independent Safety Task Force (IISTF) reported that design, development, and certification of the new COTS program was just beginning and that "if similar to other new program development activities, it most likely will take much longer than expected and will cost more than anticipated." In our opinion, the schedule is optimistic when compared to other government and commercial space programs we have studied. We will be studying the COTS program and schedules in more detail in response to a request from members of Congress. NASA is under pressure to develop its own vehicles quickly, as the space shuttle's retirement in 2010 means that there could be at least a 5-year gap in our nation's ability to send humans to space. Among the first major items of NASA's development efforts to implement the Vision are new space flight systems—including the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle. Ares I and Orion are currently targeted for operation no later than 2015. NASA plans to use these vehicles as they become available to service the space station. However, we recently testified that there are considerable unknowns as to whether NASA's plans for the Ares I and Orion vehicles can be executed within schedule goals, as well as what these efforts will ultimately cost. This is primarily because NASA is still in the process of defining many of the projects' performance requirements, and some of these uncertainties could affect the mass, loads, and weight requirements for the vehicles. Such uncertainty has created knowledge gaps that are affecting many aspects of both projects. For example, a design analysis cycle completed in May 2007 revealed an unexpected increase in ascent loads (the physical strain on the spacecraft during launch) that could result in increases to the weight of the Orion vehicle and both stages of the Ares I. 
NASA recognizes the risks involved with its approach and it is taking steps to mitigate those risks. However, given the complexity of the Orion and Ares I efforts and their interdependencies, any significant requirements changes can have reverberating effects and make it extremely difficult to establish firm cost estimates and schedule baselines. If knowledge gaps persist, programs will cost more, fail to meet their schedules, or deliver less than originally envisioned. Ultimately, NASA’s aggressive schedule leaves little room for the unexpected. If something goes wrong with the development of the Crew Launch Vehicle or the Crew Exploration Vehicle, the entire Constellation Program could be thrown off course and the return to human spaceflight further delayed. The decision to retire the space shuttle in 2010 has had profound effects on the ISS program. It leaves little flexibility in the shuttle schedule. Any delays could require NASA to choose between completing the station as planned and the pre-positioning of needed critical spares. The decision also leaves NASA dependent on Russia for crew rotation services until other vehicles are developed and demonstrated. And even with the development of these vehicles, NASA still faces a significant capacity shortfall in its ability to provide logistical support to the station. The shortfall may well impact support for a six person crew and the quality of research that can be conducted on the ISS. At the same time, it also provides opportunities to commercial suppliers to demonstrate capabilities that could have long-term benefits for future U.S. space exploration and development. We are not making recommendations as a result of our review as NASA is well aware of the predicament it faces with the station and has weighed options and trade-offs for the remainder of the schedule manifest. 
However, it is important that flexibility continue to be maintained as events impacting the schedule occur and that decisions be made with the goal of maximizing safety and results. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or the other members may have at this time. For further questions about this statement, please contact Cristina T. Chaplain at (202) 512-4841. Individuals making key contributions to this statement include James L. Morrison, Greg Campbell, Brendan S. Culley, Masha P. Pastuhov-Purdie, Keo Vongvanith, and Alyssa B. Weir. To identify the risks and challenges NASA faces in completing assembly of the International Space Station by 2010, we analyzed key documents and testimonies by NASA officials relating to the challenges associated with ISS completion. These included the delivery schedule for ISS parts for assembly and the delivery schedule for replacement units, the space shuttle manifest, budget documents and the strategic maintenance plan, the ISS Independent Safety Task Force report, and previous GAO reports relating to the ISS. We also interviewed NASA mission officials to obtain information on the status of the ISS, and we discussed these issues with the International Partners (Canadian Space Agency, European Space Agency, and Japan Aerospace Exploration Agency) to get their perspectives. To determine the risks and challenges NASA faces in providing logistics and maintenance support to the International Space Station after 2010, we analyzed documents related to the up-mass and down-mass capabilities of the International Partner and SpaceX vehicles, the shortfall in ISS up-mass for resupply and sustainment, the new vehicles that will support the ISS, NASA’s plans for using Russian vehicles to support the ISS through what NASA refers to as its “exemption,” and the impacts to the utilization of the ISS. 
We interviewed key NASA officials from NASA Headquarters, the Space Operations Mission Directorate, NASA’s Commercial Orbital Transportation Services program, and the ISS program, and we interviewed officials representing the International Partners. To accomplish our work, we visited and interviewed officials responsible for ISS operations at NASA Headquarters, Washington, D.C., and the Johnson Space Center in Houston, Texas. At NASA Headquarters, we met with officials from the Exploration Systems Mission Directorate and the Space Operations Mission Directorate, including representatives from the International Space Station and space shuttle programs. We also met with ISS and space shuttle mission officials at the Johnson Space Center. We conducted this performance audit from July 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The International Space Station (ISS), the most complex scientific space project ever attempted, remains incomplete. NASA expects the station's final construction cost will be $31 billion and expects sustainment costs through the station's planned retirement in fiscal year 2016 to total $11 billion. The space shuttle, the only vehicle capable of transporting large segments of the station into orbit, is critical to its completion. NASA plans to complete ISS assembly and retire the shuttle in 2010 in order to pursue a new generation of space flight vehicles, which will not begin to be available until 2015. To provide crew rotation and logistical support during this 5-year gap, NASA plans to rely on spacecraft developed by the commercial sector and other countries. In light of these circumstances, GAO examined the risks and challenges NASA faces in (1) completing assembly of the ISS by 2010 and (2) providing logistics and maintenance to the ISS after 2010. GAO's work to accomplish this included reviewing budget, planning, and other documents from NASA; reviewing NASA officials' testimonies; and interviewing NASA and foreign space program officials. NASA faces significant challenges in its plans to complete assembly of the International Space Station (ISS) prior to the scheduled retirement of the space shuttle in 2010. Since GAO testified on this issue in July 2007, the shuttle flight schedule has remained aggressive--slating the same number of launches in a shorter period. While NASA thinks the proposed schedule is still achievable, the schedule (1) is only slightly less demanding than it was prior to the Columbia disaster when the agency launched a shuttle every other month with a larger shuttle fleet and (2) leaves little room for the kinds of weather-related, technical, and logistical problems that have delayed flights in the past. Unanticipated delays could result in changes to the station's configuration, that is, some components may not be delivered. 
We have previously testified that such changes could limit the extent of scientific research that can be conducted on board the ISS. After assembly is completed and the shuttle retires, NASA's ability to rotate crew and supply the ISS will be impaired because of the absence of a vehicle capable of carrying the 114,199 pounds of additional supplies and spares needed to sustain the station until its planned retirement in 2016. For crew rotation and logistics, NASA plans to rely on two kinds of spacecraft. First, Russian, European, and Japanese vehicles: these vehicles were designed to augment the capabilities of the shuttle, not replace them, and have far less capacity to haul cargo. Furthermore, aside from a single Russian vehicle that can bring back 132 pounds of cargo, no vehicle can return cargo from the ISS after the shuttle is retired. Second, commercially developed vehicles: NASA has pledged approximately $500 million for the development of commercial vehicles and expects these vehicles will be ready for cargo use in 2010 and crew use in 2012, even though none of the vehicles currently under development has been launched into orbit yet and their aggressive development schedule leaves little room for the unexpected. If one of these vehicles cannot be delivered according to NASA's current expectations, NASA will have to rely on Russian vehicles to maintain U.S. crew presence on the ISS until the new generation of U.S. spacecraft becomes available.
Spare parts are defined as repair parts and components, including kits, assemblies, and subassemblies required for the maintenance of all equipment. Repair parts and components can include repairable parts, which are returned to the supply system to be fixed when they are no longer in working condition, and consumable parts, which cannot be repaired cost-effectively. The Navy owns and operates about 4,000 aircraft. These aircraft contain about 70,000 repairable spare parts, such as landing gear, navigational computers, and hydraulic pumps. These spare parts, in turn, consist of thousands of individual parts or items. When any of these spare parts or individual items fails to perform properly, or reaches the end of its service life, it must be replaced with a repaired or newly purchased part. This maintenance work takes place at government repair facilities and commercial contractor facilities across the country. Providing logistics support for these aircraft is the responsibility of the Naval Air Systems Command and the Naval Supply Systems Command. Overall Navy logistics policies and procedures are the responsibility of the Deputy Chief of Naval Operations (Logistics). The Navy’s repairable spare parts are managed under the Navy Working Capital Fund. This is a revolving fund that relies on revenues generated from the sale of parts and services to customers, which are then used to finance subsequent operations. The fund is expected to generate sufficient revenues to cover the full cost of operations and to break even over time— that is, not to have a gain or a loss. Customers order parts from the Navy’s supply system and pay the working capital fund from their budgets. Each fiscal year, the Navy establishes the prices for spare parts, setting them to correspond with the customers’ aggregate budgeted amounts. This concept, in theory, ensures that customers have, in the aggregate, sufficient funds budgeted to purchase their anticipated requirements of spare parts. 
The process of setting prices for spare parts begins 2 years before the fiscal year in which the prices take effect and involves customers, a number of Navy entities, and the Office of the Under Secretary of Defense (Comptroller). During this process, the customer price is set on the basis of projected customer requirements, as well as anticipated repair costs and management overhead fees. Figure 1 shows the major elements that are considered in developing the customer price for Navy spare parts. In our recent review of prices for a selected group of spare parts for three Navy aircraft and their engines that we examined in the November 2000 report, we found that prices continued to rise. Our analysis suggested that the major factor driving these increases was the cost of the materials used to repair spare parts, while other factors, such as higher overhead fees and growing labor costs, also contributed. However, because of the lack of relevant information in the Navy’s maintenance and repair databases, we were unable to determine the underlying reasons for the increases and, as a result, what management action might be appropriate to reduce or stabilize the prices. The prices of repairable aviation spare parts continued to increase dramatically. Between fiscal years 1999 and 2002, the total cost of spare parts increased from $1.6 billion to $2.7 billion. Of this total, the repair portion rose from $1.2 billion to $1.8 billion, an increase of 50 percent, and represented 6.6 and 8.3 percent, respectively, of the Navy and Marine Corps’ operation and maintenance funds that are used to sustain the readiness of the operating forces. Our analysis of 453 selected spare parts showed that the prices paid by customers increased an average of 37 percent between fiscal years 1999 and 2002 (see app. II). 
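As a rough sketch of the price-setting arithmetic described above, the customer price can be thought of as the anticipated repair cost plus a management overhead fee. The flat-rate overhead model and the dollar figures below are illustrative assumptions, not the Navy's actual pricing formula; the 30 percent rate is simply the general overhead cap the Navy later reported to the Congress.

```python
# Illustrative sketch of the working capital fund price build described
# above. The flat-rate overhead model and all figures are assumptions
# for illustration, not the Navy's actual pricing methodology.

def customer_price(projected_repair_cost, overhead_rate):
    """Customer price = anticipated repair cost plus a management
    overhead fee expressed as a fraction of that cost."""
    return projected_repair_cost * (1 + overhead_rate)

# A hypothetical $10,000 projected repair cost with a 30 percent
# overhead fee would yield a customer price of $13,000.
price = customer_price(10_000, 0.30)
print(price)  # 13000.0
```

Because prices are set to correspond with customers' aggregate budgeted amounts, a systematic error in the projected repair cost flows directly into what customers pay.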
We looked at these because they were the most costly repair parts from three aircraft (the H-53 helicopter, the F/A-18 Hornet fighter and attack aircraft, and the AV-8B Harrier attack aircraft) and their engines. We found that the prices for 195 of the 453 parts dropped an average of almost 35 percent (see app. III) due to reductions in both repair costs and overhead fees. The prices for the remaining 258 parts, however, spiraled dramatically—an average of 91.5 percent during the 3-year period (see app. IV). The price hikes for 233 of the 258 spare parts (90 percent) were primarily due to higher repair costs, while those for the remaining 25 (10 percent) were due to higher management overhead fees. We selected 31 spare parts from the total population of 453 to identify the factors driving increases in repair costs. These parts were all repaired at government depots. As table 1 shows, the average increases in total repair costs for these 31 parts varied widely—from a modest 8 percent for the F/A-18 Hornet aircraft to more than 200 percent for two engine systems (F-402 and T-64). A closer look at the repair data indicated that the largest increases were generally attributable to the higher costs of the materials used to repair the spare parts, while a smaller increase resulted from higher labor costs. For example, one of the parts, a rotor compressor for the F-402 engine, increased over 86 percent in price from $48,890 in fiscal year 1999 to $91,060 in fiscal year 2002. The material portion of the costs for repair had increased from $16,386 to $57,727 (over 252 percent), while labor had decreased from $10,739 to $9,092 (approximately 15 percent) and overhead had increased less than 12 percent from $21,765 to $24,241. (See app. V for detailed repair cost data for each part.) Figure 2 shows how the cost components contributed to the price that customers paid for another of these parts, a $45,120 turbine rotor for the F-404 engine in fiscal year 2002. 
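The F-402 rotor compressor figures quoted above are internally consistent; a short calculation confirms that the material, labor, and overhead components sum to the quoted totals and reproduces each percentage in the text.

```python
# Consistency check on the F-402 rotor compressor figures quoted in
# the text: components should sum to the totals, and the percentage
# changes should match the report's stated figures.

fy99 = {"material": 16_386, "labor": 10_739, "overhead": 21_765}
fy02 = {"material": 57_727, "labor": 9_092, "overhead": 24_241}

total_99 = sum(fy99.values())  # 48,890 (the fiscal year 1999 price)
total_02 = sum(fy02.values())  # 91,060 (the fiscal year 2002 price)

def pct(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(round(pct(total_02, total_99)))                  # 86  ("over 86 percent")
print(round(pct(fy02["material"], fy99["material"])))  # 252 ("over 252 percent")
print(round(pct(fy02["labor"], fy99["labor"])))        # -15 ("approximately 15 percent" decrease)
print(round(pct(fy02["overhead"], fy99["overhead"])))  # 11  ("less than 12 percent")
```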
It shows that a significant portion ($30,893, or 69 percent) of the price stemmed from the cost of the materials used to fix the rotor. A recent Naval Air Systems Command study underscored that the rising cost of materials used in repairing spare parts is a contributing factor to price increases. The study compared repair costs in its maintenance facilities for the first quarter of fiscal year 1997 with those for the first quarter of fiscal year 2000. It concluded that while the average annual repair costs for more than 26,000 parts increased by 5 percent, the cost of materials rose by 8 percent; in contrast, labor costs rose less than 1 percent. Furthermore, the study showed that in the case of 105 high-demand parts, material costs jumped by about 16 percent while labor costs increased by 3 percent. We found a similar link between higher material costs and repairable spare part price increases. Our examination of the aggregate prices of individual repair items used in the 31 spare parts indicated that three factors may have contributed to higher material costs for 25 of these parts: (1) higher prices for the individual repair parts used, (2) the use of more parts in the repair process, and (3) changes in the mix of repair parts used. Another possible factor, identified through discussions with Navy officials, was that some repairs used new, more expensive repair parts. However, the Navy’s data systems did not provide sufficient information on each repair event to allow us to determine why the prices increased for each spare part. For example, we could discern that more material had been used in a repair, but we could not determine why this had happened: Had maintenance procedures changed? Was the repairable part in unusually poor condition? Had there been extensive cannibalization of the part’s components? Or were there other reasons? 
Without more specific information on each spare part or repair event, management would not be able to determine—or address—the reasons for rising repair costs. As noted above, our ability to determine the reasons for rising spare part costs was impaired because the Navy lacked an effective data system to collect and analyze information relevant to material costs and usage. The current data system tracks repair costs for groups of spare parts but not for individual parts. The costs are accumulated for the group, divided by the number of spare parts in the group and analyzed as an average cost per item in the group. As a result, government repair facilities cannot determine the cause of significant increases in repair costs for an individual spare part. For example, the average reported material cost for individual repair parts needed to repair compressors for the F-402 engine increased from $14,269 in fiscal year 1998 to $65,494 in fiscal year 2000. While the detailed requisition data identifies what materials were ordered, it is impossible to determine—when more than one repair is associated with the requisition—how much of the material was used in a specific repair. Consequently, the fact that more material is being used on multiple repairs can be discerned, but not the reason for the increased usage. In addition, there is no indication of whether the differences in materials ordered are due to the repair of one part or to the group as a whole. The Navy has made little progress in identifying the underlying causes of spare parts price increases. While it has various initiatives aimed at reducing overall costs, it does not have a planned set of actions to identify the underlying causes of price increases. The Navy has only partially implemented a recommendation we made in our November 2000 report to identify and implement solutions to reduce and stabilize prices. 
It has undertaken several initiatives to control repair costs, but these have centered on enhancing the reliability and maintenance process, which could help stabilize prices for repairable parts. However, they do not deal with the underlying reasons for cost increases. One new initiative, which will allow the Navy to track individual spare part items by their serial numbers, may provide the tool it needs to effectively monitor and control its spare part prices. Also, the Navy might learn from DLA’s efforts to address price increases for consumable spare parts. Of three recommendations we made in our November 2000 report, the first one, which was directly concerned with investigating why prices were rising, has been only partially implemented. This one recommended that the Secretary of Defense ensure that the Navy follow through on the results of its planned studies by identifying and implementing solutions to reduce and stabilize prices. See appendix VI for a discussion of the other two recommendations. To start addressing this recommendation, the Navy has undertaken some cost-controlling initiatives aimed at improving the reliability of spare parts and is implementing a serial number tracking program to improve inventory management. However, to date, the initiatives have not focused on identifying the reasons for price increases. The Navy’s recent initiatives and studies (by contractors, headquarters, and repair depots) center on improving the reliability of its aviation spare parts in order to control its flying hour costs. Conceptually, if the reliability of parts used in the Navy’s aviation systems is improved, then the demand for those parts will fall since they will not be replaced as often, and the cost to the flying hour program will be reduced. While this approach has merit, it focuses only on the demand side of the total flying hour program cost equation. 
As a result, significant price increases or decreases can occur without management being aware of the underlying causes. An April 2001 study by the Center for Naval Analyses showed that the cost of repairable parts continued to climb, even though the number of Navy flight hours recorded decreased. In examining why the cost per flying hour increased from fiscal year 1992 to fiscal year 1999, the study concluded that the main reasons were a decline in the number of hours flown and the increased age of Navy aircraft. The study also found that price increases for spare parts, overhead costs, the quantity of materials ordered, and the mix of spare parts ordered also contributed significantly to higher flying hour costs. Price increases were identified as a significant factor that should be studied further. A Navy logistics official told us that the service has used the study to justify a potential 2 percent budget increase for repairable spare parts starting in fiscal year 2000. The Navy has recently undertaken a number of initiatives, such as the Logistics Engineering Change Proposals program, that are designed to control the costs of individual spare parts by improving their reliability. These efforts focus on improving the reliability of repairable parts, thereby reducing demand while reducing or eliminating support costs. Repairable parts are selected for study on the basis of their high historical costs and low reliability. Proposals are then evaluated to determine whether a change to the part would be justified, based on whether the anticipated return on the investment would equal at least two times its cost within 5 years. While these efforts have resulted in some significant reported cost savings, they have been geared toward increasing the reliability of parts, thereby reducing the total costs of these parts. 
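As we read it, the screening rule described above accepts a proposed reliability change only when the anticipated return within 5 years is at least twice its cost. A minimal sketch of that screen, with hypothetical candidate figures:

```python
# Sketch of the Logistics Engineering Change Proposal screening rule
# described above, as we read it: a change is justified only if the
# anticipated savings within 5 years are at least twice its cost.
# The candidate figures below are hypothetical, not from the report.

def change_justified(proposal_cost, annual_savings, horizon_years=5):
    """Return True if projected savings over the horizon meet the
    two-times-cost threshold."""
    return annual_savings * horizon_years >= 2 * proposal_cost

# A $1 million redesign expected to save $500,000 a year clears the
# bar (5 x $500,000 = $2.5 million >= 2 x $1 million) ...
print(change_justified(1_000_000, 500_000))  # True
# ... while one saving only $300,000 a year does not
# (5 x $300,000 = $1.5 million < $2 million).
print(change_justified(1_000_000, 300_000))  # False
```

Note that a screen like this targets total support cost, not unit price, which is consistent with the report's observation that these initiatives do not explain why individual prices rise.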
Other ongoing initiatives are directed at streamlining the maintenance operations at government repair facilities, and thus potentially lowering the overhead costs that are charged to repairs. The Business Process Reengineering effort, which began in fiscal year 1999, focuses on the repair and modification process at the government repair facilities. Through this effort, the Navy expects to reduce its acquisition costs and overhead charges by adopting new acquisition methods, such as prime vendors, direct vendor deliveries, and electronic commerce. It also expects to reduce its labor costs by automating the requisition process, outsourcing material handling functions, and improving the workload forecasting process. It plans to achieve additional savings from its component repair segment in the form of increased part reliability. Another related initiative is the Manufacturing Resource Planning effort, scheduled for completion by the end of fiscal year 2002. This initiative is designed to cut costs by reducing inventories and shortening lead times on parts requisitions at government repair facilities. It will do this by developing a more efficient and effective process for forecasting the demand for repair parts and by more closely aligning parts ordering with anticipated workloads. One promising initiative—a serial number tracking system for the Navy’s inventory of parts—has the potential for identifying the underlying reasons for price changes. This effort was initiated by the Naval Aviation Maintenance Supply Readiness group, which recognized that the Navy needed to acquire comprehensive information on its entire inventory in order to reduce its overall costs. As a result, in November 1998 it tasked the Naval Supply Systems Command to begin developing a serial number tracking system designed to (1) reduce total inventory ownership costs, (2) reduce secondary inventory levels, and (3) enhance customer satisfaction. 
This tracking system is designed to collect data on individual parts throughout the Navy’s supply and maintenance systems. The Navy recently completed testing its serial number tracking effort and began installing “smart buttons” (an automatic identification technology) on depot-level repairable parts for the H-53 helicopters. The smart buttons store all of the necessary identification (including part and serial number), mission configuration, repair requirements, and repair history information for that particular part. The Navy plans to install this technology throughout its fleet by fiscal year 2005, at an estimated cost of $58 million appropriated over fiscal years 2002 through 2005. Navy officials believe the tracking system will be helpful in identifying the causes of rising parts costs and decreases in reliability. For example, it could be used to analyze parts usage at maintenance facilities and the effectiveness of maintenance actions. It could also be used to evaluate different maintenance concepts, such as performing complete overhauls versus only repairing parts as necessary. As stated in our April 2002 report, DLA has undertaken a range of efforts to address significant consumable spare parts price increases. It recently completed two price trend analyses, is examining the causes for these increases, and plans to provide detailed explanations and remedies in a report to DOD. In addition, DLA has other efforts underway, including three technology initiatives, aimed at providing better information for determining price reasonableness. As the overall prices of repairable spare parts continue to rise, the Navy is making efforts to control total costs by improving the reliability of spare parts and by reducing its overhead maintenance costs. However, it does not have clear accountability and a planned approach to determine why the prices are changing—increasing or decreasing. 
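A minimal sketch of the kind of per-serial-number record the smart buttons are described as storing (identification, configuration, repair requirements, and repair history). The field names and example values are our illustrative assumptions, not the Navy's actual data format.

```python
# Illustrative sketch of the per-part record described in the text.
# Field names and example values are assumptions for illustration,
# not the Navy's actual smart button data format.

from dataclasses import dataclass, field

@dataclass
class SmartButtonRecord:
    part_number: str
    serial_number: str
    configuration: str
    repair_requirements: list
    repair_history: list = field(default_factory=list)

    def log_repair(self, depot, materials_cost, labor_hours):
        """Append one repair event, enabling the per-serial-number cost
        analysis that a group-average system cannot support."""
        self.repair_history.append(
            {"depot": depot, "materials": materials_cost, "labor": labor_hours}
        )

# Hypothetical H-53 depot-level repairable part:
rec = SmartButtonRecord("1234-HYP", "SN-0001", "H-53", ["overhaul"])
rec.log_repair("Cherry Point", 16_386, 120)
print(len(rec.repair_history))  # 1
```

Because each repair event is tied to one serial number, material usage can be attributed to a specific repair rather than averaged across a group, which is the gap the report identifies in the current data system.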
Consequently, the Navy lacks the information to identify what management steps it can take to control prices. The deployment of a serial number tracking system, designed to accumulate detailed repair and use information on individual spare parts and their components, represents a vehicle for providing managers with the information they need to identify underlying causes for price increases. In addition, DLA has efforts underway to address underlying causes for price increases. In order to develop the information and action necessary to address the underlying causes for price increases, we recommend that the Secretary of Defense direct the Secretary of the Navy to: Develop an overall plan with implementation milestones, resource requirements, and accountability within the Naval Supply Systems Command to identify the underlying reasons for price increases in aviation spare parts. The plan should include, but not be limited to, using the comprehensive data on individual spare parts from the serial number tracking system now under development, as well as lessons learned from DLA’s efforts to address price increases. Utilize information generated from the plan’s initiatives to develop management strategies that provide assurance that future prices represent a reasonable cost to the customer. In written comments on a draft of this report, DOD generally agreed with our principal findings and recommendations. The comments focused on the positive steps the Navy has taken to address the rising costs associated with spare parts within the flying hour program. In particular, DOD stressed that ongoing initiatives such as Logistics Engineering Change Proposals are implemented to reduce overall costs to the Navy, not hold them steady. This report was adjusted to reflect this point. However, DOD’s response did not address the need to develop an overall plan with accountability to identify the underlying reasons for price increases in aviation spares. 
We continue to believe these actions are necessary and, as part of our normal follow-up process, in the future will assess the actions taken and make any additional recommendations that we believe are appropriate. The Department’s comments are reprinted in their entirety in appendix VII. We are sending copies of this report to interested congressional committees, the Secretaries of Defense and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Richard Payne, John Wren, Daniel Omahen, Nancy Rothlisberger, Jason Jackson, John Van Schaik, and Nancy Benco. To identify the key factors contributing to price increases, we performed an analysis of selected repairable spare parts. Specifically, we chose 453 repairable parts used in the F/A-18, AV-8B, and H-53 aircraft and helicopters and their engines and analyzed the pricing and repair cost trends. These three systems and their engines had been identified, in our November 2000 report, as having experienced higher-than-average price increases. The 453 were the most costly parts, in terms of the amounts that Navy customers spent (the unit price multiplied by the quantity sold), based on the most recent data available at the time of our review. Our review of the 453 parts showed that prices increased primarily because of higher repair costs. We then selected 38 parts that had the largest repair cost increases for further review. We found that 31 of these parts were repaired at government facilities, and we obtained and analyzed their costs during fiscal years 1999 through 2002 as provided by either the Naval Inventory Control Point or the applicable Naval Aviation Depot. 
After finding that increased repair costs were due to higher material costs used in the repairs, we obtained detailed lists of the orders for these materials. To better understand the general reasons for the cost increases, we analyzed the quantities ordered and the prices paid for them during fiscal years 1998 through 2001. We also discussed the reasons for major material and labor cost increases with officials at the Naval Aviation Depots at Cherry Point, North Carolina, and Jacksonville, Florida. To assess the Navy’s progress in identifying and addressing the underlying causes for increased prices of spare parts, we (1) identified and reviewed prior GAO reports as well as Navy studies and initiatives relating to controlling total costs and (2) evaluated Navy actions to implement the recommendations of our November 2000 report. We obtained studies on the rising costs of repair parts and held discussions with responsible officials at the Center for Naval Analyses, the Naval Center for Cost Analysis, and the Naval Audit Service. We also discussed and obtained information on the status of the Navy’s Aviation Maintenance Supply Readiness Group’s efforts to address the repair part cost and reliability issues with Naval Air Systems Command officials as well as information on the status of corrective actions from the Navy’s Web Site. We also reviewed the Navy’s Logistics Transformation Plan for fiscal year 2000 and the Navy and Marine Corps’ report on the best commercial inventory practices for the third quarter of fiscal year 2001 to identify initiatives aimed at mitigating price increases. We discussed several of these and other initiatives with officials at the Naval Supply Systems Command, Naval Inventory Control Point, Naval Air Systems Command, and Naval Aviation Depots at Jacksonville, Florida, and Cherry Point, North Carolina. 
In evaluating the Navy’s progress in implementing our recommendations, we relied on information gathered on various studies and initiatives as well as on discussions with officials at Navy headquarters and the Naval Supply Systems Command. We did not independently verify the pricing data provided by the Naval Supply Systems Command or the Naval Aviation Depots. However, recognizing that it was official data, we took several steps to address its quality. Specifically, we tested the completeness of the data, looking for empty or questionable fields. We identified some discrepancies in the data and discussed them with Naval Supply Systems Command and depot officials. Where appropriate, we adjusted the data based on additional information they provided. We performed our review between June 2001 and May 2002 in accordance with generally accepted government auditing standards. The 453 most costly repair parts for the 3 aircraft and their engines, which we focused on in our November 2000 report, have continued to experience price increases since fiscal year 1999. Table 2 summarizes the average increase in the repair cost for the parts, the average increase in what the supply system charged its customers, as well as the annual rate of increase for the parts selected for review. Overall, the average increase in the price charged to customers for these parts was 37.2 percent between fiscal years 1999 and 2002. Within the population of 453 parts, 195 parts experienced a drop in the customer price between fiscal years 1999 and 2002. Table 3 summarizes the average decrease in the repair cost, the average decrease in what the supply system charged its customers, and the annual rate of decrease for these parts. The average decrease in price for these 195 parts was about 35 percent. Almost 60 percent (258 of the 453 parts) experienced an increase in price between fiscal years 1999 and 2002. 
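The summary percentages for the two subgroups are consistent with the overall figure: weighting the roughly 35 percent average decrease for the 195 parts and the 91.5 percent average increase for the 258 parts by their counts reproduces, within rounding of the inputs, the 37.2 percent overall average increase reported for all 453 parts.

```python
# Consistency check on the appendix figures: the count-weighted mean
# of the subgroup averages should reproduce the overall 37.2 percent
# increase reported for all 453 parts (the inputs are themselves
# rounded, so exact agreement is not expected).

decreasing = (195, -35.0)   # 195 parts fell about 35 percent on average
increasing = (258, 91.5)    # 258 parts rose 91.5 percent on average

n_total = decreasing[0] + increasing[0]  # 453 parts in all
overall = (decreasing[0] * decreasing[1]
           + increasing[0] * increasing[1]) / n_total

print(n_total)             # 453
print(round(overall, 1))   # about 37, consistent with the reported 37.2
```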
Table 4 summarizes the average increase in the repair cost, the average increase in what the supply system charged its customers, and the annual rate of increase for these parts. Price increases for these 258 parts averaged 91.5 percent. Appendix V: Reported Repair Cost Increases for 31 Parts [Table omitted: for each part, the FY99 and FY02 material and government repair costs, with the percentage change in each.] The Navy’s efforts to implement the recommendations from our November 2000 report on the rising prices of aviation depot-level repairable parts have been mixed. The report contained three recommendations: (1) the Secretary of Defense ensure that the Navy follow through on the results of its planned studies by identifying and implementing solutions to reduce and stabilize prices and surcharge rates, (2) the Secretary of Defense direct the Navy to allocate condemnation costs to the specific parts or groups of parts incurring the costs, and (3) the Secretary of Defense report to the Congress on the Navy’s progress in addressing these recommendations. The Navy has only partially implemented our first recommendation. The Navy has undertaken some cost controlling measures aimed at improving reliability and is implementing a serial number tracking program to improve inventory management, as discussed above. The Navy has implemented the second recommendation by adjusting its pricing practice such that condemnation costs are being allocated to specific groups of repairable parts. Beginning in fiscal year 1999, the Navy started allocating certain costs to the parts that incur those costs. Initially, transportation costs were allocated using this approach. The Navy began allocating condemnation and obsolescence costs in this manner in fiscal year 2000.
At the same time, the Navy instituted a tiered pricing strategy to allocate general overhead costs and specific, identifiable costs based on the level of management required. These efforts have resulted in a better match of expenses with specific parts. In response to our third recommendation, the Navy has only partially reported the results of its efforts to implement the first two recommendations to the Congress. In its fiscal year 2003 budget submission, the Navy reported its efforts to allocate condemnation costs, as well as transportation and obsolescence costs, to specific groups of parts. In addition, the Navy reported it was taking action to limit the general overhead rate to 30 percent or less. However, the Navy did not report any specific actions to reduce and stabilize prices.
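The percentage-change arithmetic behind the price statistics in this appendix is straightforward. A minimal sketch in Python; only the part counts (453 total, 195 decreasing, 258 increasing) come from this report, while the per-part dollar amounts are hypothetical:

```python
# Percentage change between two prices, as used in the FY99-FY02 comparisons.
def pct_change(old, new):
    return (new - old) / old * 100

# Part counts reported above.
total_parts = 453
decreased = 195    # parts whose customer price fell between FY99 and FY02
increased = 258    # parts whose customer price rose between FY99 and FY02

share_decreased = decreased / total_parts * 100   # about 43 percent
share_increased = increased / total_parts * 100   # about 57 percent

# Hypothetical illustration: a part that cost $1,000 to repair in FY99 and
# $1,915 in FY02 shows the 91.5 percent average increase reported for the
# 258 parts whose prices rose.
example = pct_change(1000, 1915)
```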
|
Since fiscal year 1999, the Navy's budget for repairing spare parts to support its aviation weapons systems has increased by about 50 percent, from $1.2 billion to $1.8 billion. Some military commands have asserted that the escalating cost of these parts has adversely impacted the funds available for the readiness of military forces. Overall, the prices for Navy repairable spare parts continue to climb for the three aircraft and their engines that GAO focused on in its November 2000 report. GAO's assessment of selected parts being repaired showed that while nearly 45 percent of the parts decreased in price, about 55 percent increased an average of 91.5 percent between fiscal years 1999 and 2002. The price increases were primarily due to the dramatically higher costs of the materials needed to repair spare parts, although other factors, such as overhead fees and labor rates, contributed. However, GAO could not determine the underlying causes for the rising material costs because the Navy's database lacked key information on each repair. The Navy's progress in developing an overall plan to identify and address the reasons for higher spare parts prices has been limited. It has not yet identified and implemented ways to reduce and stabilize prices. The Navy has undertaken several initiatives, but most of these efforts focused on improving the reliability or the maintenance processes for repairing spare parts rather than on identifying why prices continue to rise. One initiative, the establishment of an automated serial number tracking system for spare parts, however, has potential for providing the specific information needed to determine why spare parts prices are increasing and to develop a strategy for stabilizing them. In addition, the Navy may learn from the Defense Logistics Agency's efforts to address the causes of price increases--thereby allowing the Navy to better apply its resources to support the readiness of its forces.
|
NARA is the successor agency to the National Archives Establishment, which was created in 1934, then incorporated into the General Services Administration in 1949 and renamed the National Archives and Records Service. NARA became an independent executive branch agency in 1985 in a move designed to give the Archivist greater autonomy to focus resources on the primary mission of preserving the country’s documentary heritage. NARA’s mission is to make the permanently valuable records of the government – in all media – available to the public, the President, Congress, and the courts for reference and research. The Federal Records Act defines a record as all books, papers, maps, photographs, machine readable materials, or other documentary materials, regardless of physical form, made or received by an agency in connection with the transaction of public business as evidence of the organization, functions, policies, decisions, procedures, operations, or other activities of the government. As a result, NARA preserves billions of pages of textual documents and numerous maps, photographs, videos, and computer records. Under the Federal Records Act, both NARA and federal agencies have responsibilities for records management. NARA must provide guidance and assistance to federal agencies on the creation, maintenance, use, and disposition of government records. Federal agencies are then responsible for ensuring that their records are created and preserved in accordance with the act. NARA and agency staff work together to identify and inventory an agency’s records to appraise the value of the records and determine how long they should be kept and under what conditions. We found that NARA and federal agencies are confronted with many ERM challenges, particularly technological issues. NARA must be able to receive electronic records from agencies, store them, and retrieve them when needed.
Agencies must be able to create electronic records, store them, properly dispose of them when appropriate, and send valuable electronic records to NARA for archival storage. All of this must be done in the context of the rapidly changing technological environment. NARA also faces a rapidly growing workload: it estimates that some federal agencies, such as the Department of State and the Department of the Treasury, are individually generating enormous volumes of electronic records annually just in E-mail – and many of those records may need to be preserved by NARA. In addition to increasing volume, NARA must address some definitional problems, such as what constitutes an electronic record. Furthermore, because agencies follow no uniform hardware or software standards, NARA must be capable of accepting various formats from agencies and maintaining a continued capability of reading those records. The long-term preservation and retention of those electronic records is a challenge because of the difficulty of providing continued access to archived records over many generations of systems; the average life of a typical software product is only 2 to 5 years. NARA is also concerned about the authenticity and reliability of records transferred to NARA. NARA is not alone in facing ERM challenges; the agencies also must meet Federal Records Act responsibilities. Records management is the initial responsibility of the staff member who creates the record, whether the record is paper or electronic. Preservation of and access to that record then also becomes the responsibility of agency managers and agency records officers. Agencies must incorporate NARA’s guidance into their own recordkeeping systems. Agencies’ responsibilities are complicated by the decentralized nature of electronic records creation and control. For example, agencies’ employees send huge volumes of E-mail, and any of those messages deemed to be an official record must be preserved.
Agencies must assign records management responsibilities, control multiple versions, and archive the messages. Agencies’ reactions to the challenges I just mentioned are varied. On the basis of our discussions with NARA and some agency officials, we learned that some agencies are waiting for more specific guidance from NARA while others are moving forward by looking for ways to better manage their electronic records. However, there has been no recent governmentwide survey to determine the extent of agencies’ ERM programs and capabilities or their compliance with the Federal Records Act. The Department of Defense (DOD) worked for several years to develop DOD’s ERM software standard, which is intended to help DOD employees determine what are records and how to properly preserve them. NARA endorsed the DOD standard in November 1998 as a tool that other agencies could use as a model until a final policy is issued by NARA. NARA, however, did not mandate that agencies use the DOD standard. The DOD standard (1) sets forth baseline functional requirements for records management application software; (2) defines required system interfaces and search criteria; and (3) describes the minimum records management requirements that must be met, according to current NARA regulations. A number of companies have records management application products that have been certified by DOD for meeting this standard. Other agencies have also been testing ERM software applications for their electronic records. For example, the National Aeronautics and Space Administration (NASA) and the Department of the Treasury’s Office of Thrift Supervision (OTS) have both tested ERM software with mixed results. Even though NARA is aware of what some agencies are doing – such as DOD, NASA, OTS, and some others -- it does not have governmentwide data on the records management capabilities and programs of federal agencies. NARA had planned to do a baseline assessment survey to collect such data on all agencies by the end of fiscal year 2000.
The survey would have identified best practices at agencies and collected data on (1) program management and records management infrastructure, (2) guidance and training, (3) scheduling and implementation, and (4) electronic recordkeeping. NARA had planned to determine how well agencies were complying with requirements for retention, maintenance, disposal, retrieval/accessibility, and inventorying of electronic records. The Archivist decided, however, to temporarily postpone doing this baseline survey because he accorded higher priority to such activities as reengineering NARA’s business processes. NARA’s BPR will address its internal processes as well as guidance and interactions with agencies. The survey has been deferred until the BPR effort -- scheduled to take 18 to 24 months -- is completed. Conducting the baseline survey now could provide valuable information for the BPR effort while also accomplishing the survey’s intended purpose of providing baseline data on where agencies are with regard to records management programs. NARA would also be in a better position in later years to assess the impacts of its BPR effort. In response to our draft report and in a September 17, 1999, letter to the Comptroller General, the Archivist said that much of this baseline data would not be relevant to BPR and therefore NARA would not collect it at this time. However, NARA does have plans to collect limited information from a sample of agencies after starting BPR. We continue to believe that the baseline data is necessary to give NARA the proper starting point for proceeding with its BPR. Because agencies vary in their implementation of ERM programs, the baseline survey would provide much richer data than the limited information collection effort now planned by NARA. Even though NARA lacks governmentwide data on how agencies are implementing ERM, NARA has already begun revising its guidance to agencies. Historically, NARA’s ERM guidance has been geared toward mainframes and databases, not personal computers.
NARA’s electronic records guidance to agencies, which establishes the basic requirements for creation, maintenance, use, and disposition of electronic records, is found in the Code of Federal Regulations. In 1972, before the widespread use of personal computers in the government workplace, NARA issued guidance – General Records Schedule (GRS) 20 – on the preservation of electronic records. Several revisions occurred prior to a 1995 version, which provided that after electronic records were placed in any recordkeeping system, the records could be deleted. In December 1996, a public interest group filed a complaint in federal district court challenging the 1995 guidance. The district court held that program records could not be disposed of under a general schedule and thus ruled GRS 20 “null and void.” Following the court’s ruling, NARA established an Electronic Records Working Group in March 1998 with a specific time frame to propose alternatives to GRS 20. In a subsequent ruling, the court ordered the NARA working group to have an implementation plan to the Archivist by September 30, 1998. In response to the working group’s recommendations, NARA agreed in September 1998 to take several actions: It issued a revision in the general records schedules on December 21, 1998, to authorize agencies’ disposal of certain administrative records (such as personnel, travel, and procurement) regardless of physical format, after creation of an official recordkeeping copy. It initiated a follow-on study group (made up of NARA staff, agency officials, and consultants) in January 1999 – the Fast Track Development Project – intended to answer the immediate questions of agencies about ERM that can be solved relatively quickly. It issued NARA Bulletin 99-04 on March 25, 1999, to guide agencies on scheduling how long to keep electronic records of their program activities and certain administrative functions formerly covered under GRS 20.
It drafted a new general records schedule for certain administrative records to document the management of information technology. NARA has received comments from agencies on the draft, and the draft is still under review by NARA and the Office of Management and Budget. NARA hopes to have this guidance issued by the end of 1999. On August 6, 1999, the U.S. Court of Appeals reversed the lower court’s decision and held that GRS 20 is valid. That reversal was not appealed by the public interest group. In response to the court of appeals decision, the Archivist said that NARA would continue in an orderly way to develop practical, workable strategies and methods for managing and preserving records in the electronic age and ensuring access to them. He said that NARA remains committed to working aggressively toward that goal. Our review of the ERM activities in four states and three foreign governments showed that approaches to ERM differ. These entities often did things differently from each other or from NARA. For example, the states generally provide for the transfer to their state archives of records that are no longer needed by the individual agencies but are of archival value. Two of the states also emphasized the use of the Internet as a mechanism that allows both the archivist and the general public to determine where records may be found. State officials indicated that state law and administrative rules that they issue guide their records management requirements, but they also interact with NARA and other states to assist in determining their states’ policies. Our review of public documents from three foreign governments (Australia, Canada, and the United Kingdom) showed that although these countries share common challenges, they each have taken somewhat different approaches to ERM decisions. For example, Australia has strong central authority and decentralized custody of records, and it maintains a governmentwide locator system.
Canada issues “vision statements” rather than specific policies, and individual agencies maintain their own electronic records until they have no more operational need for them. The United Kingdom established broad guidelines, which are put into practice by its individual agencies in a partnership arrangement with its national archives. Recognizing the common problems faced by all countries, NARA participates in international initiatives to study and make recommendations regarding ERM. In conclusion, it is obvious that NARA and federal agencies are being challenged to effectively and efficiently manage electronic records in an environment of rapidly changing technology and increasing volume of electronic records. It is certainly not an easy task. Much remains for NARA and the agencies to do as they tackle the issues I have discussed. We believe that NARA is moving in the right direction. However, because ERM programs and activities vary across the government, we continue to believe that the Archivist should conduct the baseline assessment survey as we recommended in our July 1999 report. This survey would produce valuable information for NARA’s use during its critical BPR effort. A well-planned and successful BPR should be a stepping-stone for NARA as it moves into the next phase of its management of all records, particularly electronic records. As you know, Mr. Chairman, NARA has not had concerted congressional oversight as an independent agency. Such oversight is essential to help NARA ensure that the official records of our country are properly maintained and preserved. I commend the efforts of this Subcommittee for holding this hearing and bringing the issues surrounding government records into the spotlight. I look forward to future hearings in this area. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have.
Contacts and Acknowledgement For further information regarding this testimony, please contact L. Nye Stevens or Michael Jarvis at (202) 512-8676. Alan Stapleton, Warren Smith, and James Rebbe also made key contributions to this testimony.
|
Pursuant to a congressional request, GAO discussed the challenges that face the National Archives and Records Administration (NARA) and federal agencies in their efforts to manage the rapidly increasing volume of electronic records. GAO noted that: (1) NARA and federal agencies are confronted with many electronic records management (ERM) challenges, particularly technological issues; (2) NARA must be able to receive electronic records from agencies, store them, and retrieve them when needed; (3) agencies must be able to create electronic records, store them, properly dispose of them when appropriate, and send valuable electronic records to NARA for archival storage; (4) NARA officials told GAO that NARA needs to expand its capacity to accept the increasing volume of electronic records from agencies; (5) in addition to increasing volume, NARA must address some definitional problems, such as what constitutes an electronic record; (6) in addition, because agencies follow no uniform hardware or software standards, NARA must be capable of accepting various formats from agencies and maintaining a continued capability of reading those records; (7) NARA is not alone in facing ERM challenges, the agencies also must meet Federal Records Act responsibilities; (8) agencies must incorporate NARA's guidance into their own recordkeeping systems; (9) agencies' reactions to ERM challenges are varied; (10) on the basis of GAO's discussions with NARA and some agency officials, GAO learned that some agencies are waiting for more specific guidance from NARA while others are moving forward by looking for ways to better manage their electronic records; (11) even though NARA is aware of what some agencies are doing, it does not have governmentwide data on records management capabilities and programs of federal agencies; (12) NARA had planned to do a baseline survey to collect such data on all agencies by the end of fiscal year 2000; (13) the Archivist decided, however, to temporarily 
postpone doing this baseline survey because he accorded higher priority to such activities as reengineering NARA's business processes; (14) GAO recommended that NARA do the baseline survey as part of its reengineering process; (15) the Archivist stated that the baseline data would not be relevant to its reengineering efforts and therefore NARA would not collect it at this time; (16) even though NARA lacks governmentwide data on how agencies are implementing ERM, NARA has already begun revising its guidance to agencies; (17) GAO's review of the ERM activities in four states and three foreign governments showed that approaches to ERM differ; and (18) these entities often did things differently from each other and NARA.
|
The livelihood of cattle producers depends fundamentally on the price they receive for their product and their cost to produce it. But behind this simple arithmetic are a host of demand and supply factors that influence cattle prices and the costs of raising cattle. For instance, the outcome for producers depends on how consumer tastes affect the demand and price for beef. Producers’ fortunes also hinge on how weather affects the supply and cost of forage and feed grains. The long biological cycle for cattle means that producers have to make supply decisions about herd size long before animals are sold and prices are known. International trade in cattle and beef, competition from poultry, pork, and other protein sources for a place in the consumer’s shopping cart, and household income are also among the many demand factors that influence cattle prices and producers’ incomes. In addition, structural changes that have been reshaping segments of the industry are affecting cattle demand and supply. The four largest meatpacking firms now slaughter more than 80 percent of all steers and heifers, compared with 36 percent 20 years ago. Agreements between producers and meatpackers stipulating prices, number of cattle, and quality considerations are becoming more commonplace. Technological changes now enable packers to deliver shelf-ready products to grocers. Information technology is being used to conduct live-cattle auctions on the Internet. All these developments and more potentially influence the demand and supply of cattle, directly or indirectly affecting cattle prices and producers’ incomes. Many demand and supply factors can be considered in developing a model, or logical framework, to explain cattle prices and producers’ incomes. Which of these factors to include depends on the model’s purpose or the specific questions it is intended to answer. 
Data availability and the results of testing how well various factors explain prices and incomes also determine which factors to include in a model. Modeling frameworks can range from highly complex mathematical formulations to a less formal meeting of the minds among a panel of experts. Feedlots specialize in feeding steers and heifers a concentrated diet of corn and other grains before the animals are slaughtered at the meatpacking plant. Typically, animals remain in feedlots until they weigh 950 to 1,250 pounds. Greater demand for these fed cattle, resulting from increased demand for beef, has a ripple effect throughout other cattle production stages. To supply more cattle to meatpackers, feedlots need more cattle from stocker or growing operations, which in many cases are integrated with cow-calf producers. Most of the calves that cow-calf producers supply for beef production are placed in these growing operations, where they take on weight while they pasture on grass and other forages. These feeder cattle are sent to feedlots when they weigh between 500 and 750 pounds (fig. 3 shows such cattle feeding at a feedlot trough). Increased demand for these feeder cattle by feedlots puts upward pressure on feeder cattle prices. In the face of increased demand, cow-calf producers raise more calves, sometimes relying on seedstock operators, who supply more breeding stock, such as bulls. Calves are usually weaned from cows when they weigh about 500 pounds. Figure 4 traces the movement of animals from breeding to processing and consumption. Thus, as the effects of an increase in consumer demand for beef unfold, prices, signaling this change in demand, eventually rise along the chain, depending on the strength of demand and the availability of supply, as depicted in figure 5. Figure 6 outlines the changes in retail beef, boxed beef, and slaughter prices from 1974 through 1999.
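The movement of animals through the production stages just described, with the weight ranges given above, can be sketched as a simple ordered structure (the representation itself is only illustrative):

```python
# Beef production stages in order, with the typical weight ranges (pounds)
# described in the text. A rise in consumer beef demand transmits price
# pressure backward along this chain: packers bid up fed cattle, feedlots
# bid up feeder cattle, and so on back to cow-calf producers.
STAGES = [
    {"stage": "cow-calf", "output": "weaned calf", "weight_lb": (0, 500)},
    {"stage": "stocker", "output": "feeder cattle", "weight_lb": (500, 750)},
    {"stage": "feedlot", "output": "fed cattle", "weight_lb": (950, 1250)},
    {"stage": "meatpacker", "output": "beef products", "weight_lb": None},
]

def upstream_of(stage_name):
    """Names of the stages that supply the given stage, in order."""
    names = [s["stage"] for s in STAGES]
    return names[:names.index(stage_name)]
```

For example, `upstream_of("feedlot")` returns the suppliers whose prices feel upward pressure when feedlots demand more cattle.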
Important connections exist also between the cattle and beef industry and other sectors of the economy. Some of the closest connections are with products that compete with beef, such as poultry and pork. Other close connections are with critical inputs to the cattle and beef industry, such as feed grains. Because the cattle and beef industry is a major user of feed grains, beef production is also affected by grain supplies and prices. Feed is a major cost component in cow-calf production. In addition, foreign demand and supply of beef and cattle interact with domestic demand and supply in determining cattle prices and producers’ incomes. The demand and supply relationships connecting various segments of the cattle and beef industry are changing in a number of ways. Some of the structural changes relate to how meatpackers procure cattle. Historically, cattle were bought and sold in a spot market. Most sales occurred at terminal markets and auctions with cattle ready for delivery on sale. More recently, this activity has shifted to feedlots, where packers purchase cattle directly from cattle owners or feedlot managers. Cattle procurement no longer relies solely on the spot market and now involves closer ties between packers and feedlots. Three procurement methods involving such closer ties are marketing agreements, forward contracts, and packer fed cattle. In a marketing agreement, a feedlot may sell cattle to a packer according to a prearranged schedule and price. Such agreements generally involve ongoing relationships between feedlots and packers for the sale of cattle rather than a single transaction. Prices paid for cattle are often determined by a formula, which may be based on prices paid for other cattle slaughtered at the meatpacker’s plant or publicly reported prices. In addition, price premiums and discounts may be paid that are based on cattle quality. 
In a forward contract, the packer and seller agree on future delivery of cattle, typically using a formula based on futures prices or publicly reported prices to set the contract’s base price. When the price is based on futures prices, the parties agree on a differential from futures prices, called the price basis. Premiums and discounts are applied for differences in cattle quality. Typically, feedlots and packers agree on delivery month, specific cattle to be delivered, cattle quality standards, and the price basis. Packers also slaughter cattle that they own themselves and feed in feedlots. Packers may also share ownership of cattle with individuals or feedlots where the cattle are fed. This arrangement, called vertical integration, goes a step further, supplanting the coordinated exchange relationship between feedlots and packers that characterizes marketing agreements and forward contracts with the meatpacker’s outright ownership of the cattle. Vertical integration also occurs when a single entity has ownership control of animal production, processing, and marketing beef products. Tying cattle prices to quality is called value-based pricing. It derives from the belief that traditional cattle pricing, relying on animal weight, does not adequately relay consumer preferences for quality and attendant price signals to producers. Grade and yield pricing is frequently used, which applies price premiums and discounts to a predetermined base price according to carcass attributes. Another slight variation is grid pricing, in which a base price is determined after the transaction between buyer and seller has been negotiated. In addition, some beef packers use the wholesale value of beef to determine the price they are willing to pay for cattle. What effect vertical coordination—through marketing agreements and forward contracts, vertical integration, and value-based pricing—is having on cattle prices and producers’ incomes has been debated by various industry analysts. 
For instance, some believe that marketing agreements and forward contracts have adversely affected prices paid for cattle bought in the spot market, while others hold that producers benefit from these arrangements. Some research suggests that rising levels of vertical coordination and integration can be traced to consolidation in the meatpacking and feedlot sectors. Another feature of structural change in the cattle and beef industry has been the consolidation of the meatpacking sector into fewer firms operating large production facilities able to slaughter half a million or more steers and heifers per year. Large plants accounted for less than 25 percent of steer and heifer slaughter in 1980 but more than 75 percent in 1995. A recent USDA study found that economies of scale help explain this increase in consolidation and market concentration in the meatpacking sector. USDA also found that large facilities are fabricating more meat products because they can do so at lower cost than meat wholesalers and retailers, the traditional carcass buyers. Market concentration measures total sales of the largest firms in a specific market or industry. The four largest meatpacking firms accounted for 36 percent of total commercial slaughter in 1980, 72 percent in 1990, and 81 percent in 1999, as seen in figure 7, which illustrates the rise in market concentration in the meatpacking sector over that period. Some analysts are concerned that greater concentration has led to fewer meatpackers bidding for cattle and offering lower prices. Others hold that technological change and cost economies are the most important factors driving the meatpacking sector and that market power associated with concentration has played a relatively minor role in determining cattle prices. Technological changes in the cattle and beef industry, according to USDA, are becoming an underlying cause of economies of scale in meatpacking.
In a development directly affecting packers, retailers, and consumers, packaging and processing technology has enabled meatpackers to move from supplying boxed beef to firms that specialize in further processing, to directly supplying case-ready meats, convenience products (often seasoned and marinated), and precooked products for immediate retail sale. In contrast, in the early 1970s, meatpacking plants were typically engaged only in slaughter, sending carcasses to wholesalers and retailers for processing into retail products. Packers have also begun marketing their products electronically. Another technological development that affects packers and producers directly is the electronic measurement of animal carcass quality, making it easier for packers to determine the grade and other characteristics of carcasses. In another development affecting producers and packers, cattle marketing has begun on the Internet. Cattle feeding through feed additives and computerized onsite feedmills and feeding operations represents yet more technological innovation. The consumption of beef and other meats has changed over time. A USDA study concluded that decreased demand for beef was a major reason for the larger increase in market concentration in the beef industry than in the pork industry. According to USDA, decreased demand for beef was an important incentive for meatpacking firms to seek cost savings through larger plants. As shown in figure 8, per capita beef consumption began falling in the mid-1970s but leveled off in the 1990s. During these two decades, per capita poultry consumption rose steadily while per capita pork consumption remained relatively stable. Meanwhile, retail beef prices were higher and remained higher than chicken and pork prices, as shown in figure 9.
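The four-firm concentration measure discussed above (36 percent of commercial slaughter in 1980, 81 percent in 1999) is simply the combined market share of the four largest firms. A minimal sketch, using hypothetical slaughter volumes chosen for illustration only:

```python
# Four-firm concentration ratio (CR4): the combined share of the four
# largest firms in total industry output, expressed as a percentage.
def cr4(volumes):
    total = sum(volumes)
    top_four = sorted(volumes, reverse=True)[:4]
    return sum(top_four) / total * 100

# Hypothetical firm slaughter volumes (e.g., millions of head per year).
# With these made-up numbers, the four largest firms hold an 81 percent
# share, matching the 1999 figure cited in the text only by construction.
ratio = cr4([30, 25, 15, 11, 10, 9])
```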
Although the United States is the largest beef producer in the world, and although its exports of beef to other nations have grown more rapidly than its imports, it is a net beef importer, as depicted in figure 10. Most beef exports from the United States are choice cuts, while most imports into the United States are used for ground beef. Beef exports rose from less than 1 percent of U.S. beef consumption in 1970 to 9 percent in 1999, as seen in figure 11. Beef imports, in contrast, have ranged between 7 percent and 11 percent of U.S. commercial production since 1970, as seen in figure 12. The United States imports more cattle than it exports, as seen in figure 13. The nations from which it imports cattle—Canada and Mexico—are, for all practical purposes, the same nations to which it exports cattle. Imports of cattle also made up a greater percentage of cattle slaughtered in the United States during the 1990s, as seen in figure 14. Cattle have the longest biological cycle of all meat animals. The cattle cycle (illustrated for 1930–2000 in figure 15) refers to increases and decreases in herd size over time and is determined by expected cattle prices and the time needed to breed, birth, and raise cattle to market weight, among other things. The actions of individual producers to “time the market” by building up their herds in advance of expected cyclical peaks in cattle prices can also shape the cattle cycle. As figure 16 shows, cattle inventories have at times reached peak numbers before associated peaks in beef production, and while the number of cattle has fallen, beef production has risen. Figure 17 illustrates the cyclical movement that cattle prices have exhibited over time. Cattle prices tend to move in a direction opposite to that of commercial cattle slaughter, as shown in figure 18. Economic modeling of the beef and cattle industry can take a variety of forms, depending on the questions asked. These questions define the purpose of a model.
The purpose of modeling the cattle and beef industry can range from wanting accurate short-term forecasts of cattle prices to seeking information on how farm policy affects cattle producers. Models can also be designed to answer questions about the effects of structural change and international trade, to name two. Another critical issue determining the type of modeling has to do with judgments about how successful a model will be in answering relevant questions. Success depends on the availability and cost of acquiring reliable data to estimate key supply and demand relationships in the cattle and beef industry. In some cases, it also depends on the ability to isolate cause and effect in the model—for instance, being able to pinpoint what caused the decline in per capita beef consumption. Being able to accurately define and estimate cause and effect in a model is complicated by the possibility of multiple causes and the challenge of isolating each one’s effect. Limited knowledge about the processes being studied and changes in demand and supply relationships over time are important hurdles, as well. Success is also contingent on the quality of previous research. Models can range from a single equation representing the link between current and past values of a variable for short-term forecasting purposes to frameworks consisting of many interrelated equations. The parameters of these equations—measuring, for example, how sensitive herd expansion is to rising feed costs—may be estimated by statistical analysis of historical data in the course of building the model. Alternatively, parameter values may be based on the results of previous research or may be calibrated to replicate the data of a chosen benchmark year. The results of previous empirical research or calibration are often relied on when data are unavailable.
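Estimating such a parameter from historical data typically amounts to fitting a regression. The sketch below, using invented data and a hypothetical “true” elasticity of −0.7, illustrates how a constant-elasticity (log-log) relationship can be recovered by ordinary least squares:

```python
import math, random

# Synthetic history with a hypothetical "true" price elasticity of -0.7.
# A log-log OLS fit on these data should recover a value close to -0.7.
random.seed(0)
true_elasticity = -0.7
prices = [60 + 2 * t + random.gauss(0, 3) for t in range(30)]
quantities = [100 * p ** true_elasticity * math.exp(random.gauss(0, 0.02))
              for p in prices]

# The ordinary least squares slope of log(quantity) on log(price)
# is the estimated elasticity.
x = [math.log(p) for p in prices]
y = [math.log(q) for q in quantities]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
print(f"estimated elasticity: {slope:.2f}")
```

In practice the estimation is complicated by exactly the hurdles the panelists raised: simultaneity of supply and demand, multiple causes, and relationships that shift over time.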
Regardless of how simple or complex the modeling is, projections of key variables, such as cattle prices, typically reflect more than just running the model. An analyst’s judgment concerning the plausibility and consistency of a model’s results also plays an important role in deciding what projections to report. A pronounced example of this is the instance in which the modeling framework consists solely of an expert panel meeting periodically to reach consensus forecasts on variables of interest, after considering a variety of relevant information sources. Concerned that the models the government currently uses do not fully account for how some marketing practices and trade affect the prices U.S. cattle producers receive for their livestock, Senator Daschle asked us to determine (1) the extent to which the economic models that USDA and ITC use incorporate imports, concentration in the U.S. meatpacking industry, and marketing agreements and forward contracts in predicting domestic cattle prices; (2) the most important factors affecting cattle prices and producers’ incomes; and (3) the most important data and modeling issues in developing a comprehensive analysis to project cattle prices and producers’ incomes. To determine the extent to which USDA’s and ITC’s economic models incorporate imports, market concentration, and marketing agreements and forward contracts, we obtained documentation on their relevant models. We also met with USDA and ITC officials to discuss these models. We examined the structure and specification of the models, including estimated equations, methods of estimation, estimation results, and information on data used for estimation. To address the second and third objectives, we convened a virtual panel of 40 agricultural experts on the Internet.
We asked them (1) what the most important factors affecting cattle prices and producers’ incomes are and (2) what the most important data and modeling issues would be for developing a comprehensive analysis to project cattle prices and producers’ incomes. In selecting the panel, we generated a prospective list of experts, based on a literature review, referrals from USDA and ITC officials, and congressional sources. Of 48 experts we contacted, 42 agreed to participate. Forty experts completed all phases of our panel survey. To structure and gather opinions from the expert panel, we employed a modified version of the Delphi method. The Delphi method can be used in a number of settings, although when first developed at the RAND Corporation in the 1950s, it was applied in a group-discussion forum. One of the strengths of the Delphi method is its flexibility. Rather than employing face-to-face discussion, we used a version that incorporated an iterative and controlled feedback process, administering a series of three questionnaires over the Internet. We used this approach to eliminate the potential bias associated with live group discussions. The biasing effects of live discussions can include the dominance of individuals and group pressure for conformity. Moreover, by creating a virtual panel, we were able to include many more experts than we could have with an actual panel. This allowed us to obtain the broadest possible range of opinion. In the first questionnaire, in phase I, we asked the experts three open-ended questions: During the past few years, what were the most important factors or variables affecting (a) the prices received by domestic cattle producers and (b) producers’ incomes? If you were to conduct a comprehensive analysis of domestic cattle prices and producers’ incomes, are there other factors or variables not listed in question 1 that you would include?
What problems or issues would you face in developing a comprehensive and reliable analysis to estimate domestic cattle prices and producers’ incomes? After they completed the first questionnaire, we analyzed their responses in order to compile a list of the most important factors affecting cattle prices and producers’ incomes, as well as key problems or issues facing analysis of prices and incomes. We combined the responses to the first two questions, organizing them into four categories—(1) domestic demand for cattle, (2) domestic supply of cattle, (3) international trade, and (4) structural change. While the last two categories overlapped the first two to some degree, we broke them out to directly link our first objective regarding USDA and ITC models to the experts’ responses. For the list of key problems or issues, we organized each item under either a data or a modeling issue. In the questionnaire in the second phase, experts rated the importance of each of the factors identified during the first phase. Our analysis of the data produced a ranking of the most important factors and the level of agreement about each factor’s importance (see app. III). During the second phase, we also asked the experts to evaluate issues facing the development of a comprehensive analysis identified during the first phase. They identified 41 data- and modeling-related issues (see app. IV). We asked the experts to rate each of these data and modeling issues by answering the following questions: How important is it to address this problem or issue for purposes of modeling cattle prices and/or producers’ incomes? How feasible is it to overcome or implement the solution for this problem or issue for purposes of modeling cattle prices and/or producers’ incomes?
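The rating-and-ranking analysis described above can be sketched in a few lines; the factors and scores below are invented for illustration, not the panel’s actual responses:

```python
from statistics import mean, stdev

# Hypothetical importance ratings (1 = low, 5 = high) from a small panel.
ratings = {
    "domestic beef demand": [5, 5, 4, 5, 4],
    "feed costs":           [4, 3, 4, 4, 3],
    "imports":              [5, 2, 4, 1, 5],
}

# Rank factors by mean rating; use the standard deviation as a simple
# measure of disagreement among panelists.
for factor, scores in sorted(ratings.items(),
                             key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{factor:22s} mean={mean(scores):.1f} spread={stdev(scores):.1f}")
```

A factor such as the invented “imports” row shows how a middling mean can mask a wide spread of opinion, which is the pattern the panel later revisited for international trade and structural change.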
During the third phase, we presented the panel with the results of the questionnaires from phases I and II, including a summary of findings and descriptive statistics on the importance of the factors and the importance and feasibility ratings of the 41 data and modeling issues. We asked the experts to consider these results and give their opinions on why there was a greater divergence of opinion on the importance of structural change and international trade (see app. V for excerpts from their statements of opinion). After the panel members examined the results and considered the reasons for the variance of opinion on international trade and structural change, we offered the experts the opportunity to change their original assessments. Two panelists changed their opinions on structural change, and five changed their ratings on international trade. Regarding data and modeling issues, we asked each expert whether the federal government should take action to help overcome these issues. We asked those who believed that government action was warranted to select up to 5 of the 41 issues that had been identified. (The list of rank-ordered issues recommended for federal action is in app. V.) To ensure that the wording of the initial questions was unambiguous, three panel members pretested a paper version of the first questionnaire, and we made relevant changes before we deployed the first questionnaire on the Internet. We did not pretest subsequent questionnaires because they were based on the panel’s answers to preceding questionnaires. We did, however, review them before we deployed them. Some of the panelists may have cooperative agreements or other ongoing relationships with the federal government, trade groups, individual companies, or other organizations within the agricultural industry. In addition, some panel members may want to develop such relationships in the future.
Therefore, to mitigate potential conflicts of interest, the panel we convened was large enough to have a wide range of experience and views in the subject area. None of the panel members were compensated for their work on this project. USDA and ITC have several models for analyzing the cattle and beef industry. These models account for imports but do not incorporate market concentration, marketing agreements, or forward contracts because they were not designed to answer questions about these aspects of structural change. USDA’s models include a variety of domestic and international supply and demand variables to project U.S. cattle prices. One is a short-term model projecting up to 18 months into the future, and the other is a long-term model projecting up to 10 years. ITC’s models are used to investigate injury claims resulting from imports that sell in the United States at less than fair value or are subsidized and to conduct broad economic studies. USDA separately monitors and conducts research on how structural changes involving market concentration, marketing agreements, and forward contracts affect the cattle and beef industry. Each year, USDA publishes an agricultural baseline report with projections for the livestock sector, including cattle and beef. Changes in market concentration, marketing agreements, and forward contracts are not explicitly considered in making these projections. The baseline projections reflect a composite of results from various economic models and judgmental analysis. The projections of the livestock industry in the baseline are estimated by using USDA’s short-term and long-term livestock models. They are based on specific assumptions about the economy, agricultural policy, and international developments. They assume normal weather patterns. Current baseline projections also assume the continuation of the Federal Agricultural Improvement and Reform Act of 1996.
As a result, these projections are a description of what to expect, given assumptions defining a baseline scenario. Commodity projections in the baseline are used to estimate the cost of farm programs needed to prepare the president’s budget. Baseline projections are also used to determine the incremental effects of proposed changes in agricultural policy. USDA’s Interagency Commodity Estimates Committee (ICEC) for meat animals makes short-term cattle price projections. The committee uses a data set that includes beef and cattle imports and exports but does not contain information on changes in market concentration, marketing agreements, and forward contracts. The committee consists of an official from the World Agricultural Outlook Board, who serves as the chair, and other members. Analysts from ERS make initial projections that the committee reviews. Consensus is reached, and final projections are included as the World Agricultural Supply and Demand Estimates forecast in USDA’s agricultural baseline report. In making initial projections, ERS starts by updating a historical database, compiling the most current information on production, prices, and trade statistics for the livestock industry. Monthly data are collected on the production of beef, veal, pork, lamb, and poultry and on the slaughter of steers, heifers, beef and dairy cows, broilers, hogs, and turkeys. Most data are obtained from USDA’s Agricultural Marketing Service (AMS) and National Agricultural Statistics Service (NASS). ERS supplements these monthly data with the latest information from daily and weekly releases, using numerous public and private sources. This data set, combined with the latest release on cattle inventories, class breakouts, and live, wholesale, and retail prices, is used to make projections. The next step involves entering the updated data into a spreadsheet to simulate possible short-term scenarios for the livestock industry.
Analysts’ judgments of current trends in the industry are used to select one scenario and corresponding projections to present at the monthly ICEC meeting. Committee members meet monthly to review ERS’ initial projections; they discuss whether recent information or developments related to weather, the national and industry economic outlook, and international trade suggest a need to revise these projections. The May meeting produces quarterly and annual projections through the following year. Meetings in subsequent months review projections approved the previous month, which are then revised as needed. The committee’s chairperson sees his role as helping committee members reach consensus; however, the chair has overall responsibility for approving projections and will impose a decision if consensus cannot be reached. Projections from the October meeting are used in the 10-year baseline report. The most current available data on beef and cattle imports and exports are used in arriving at the short-term projections. However, these trade statistics are not as current as other data, being 6 weeks out of date when the Department of Commerce releases them. An ERS analyst said that to lessen the effect of this lag, ERS adjusts its trade forecasts by using the most recent releases and information on important trading partners and competitors, including currency rates and changing supply conditions in other countries. While information on market concentration, marketing agreements, and forward contracts is not part of the data set analyzed, we believe it can be implicitly included in committee discussions. ERS uses its livestock model to make annual projections of the cattle and beef industry as well as the hog and poultry industries. The model includes international trade in beef and cattle but not market concentration, marketing agreements, or forward contracts. These projections are included in USDA’s baseline report.
This model consists of equations specifying supply and demand relationships that affect the livestock sector. It was estimated initially with 1960–88 data. Production sectors supplying beef, pork, and poultry are modeled, along with demand for them. The demand sector consists of a consumer demand component, which determines retail prices, and another component derived from consumer demand, which determines wholesale and producer prices. Feedback from demand to production takes place through the effect of producer prices on returns to cow-calf producers. Production, supply, and demand variables are determined within the system of equations making up the model, while macroeconomic, trade, and feed variables are determined outside the model. An official from USDA who helped build the model said that emphasis was placed more on modeling production than on demand. Appendix II describes the model in detail. The largest component of the livestock model deals with the cattle and beef industry, including the size and composition of the cattle herd, commercial slaughter, beef production and consumption, and retail, wholesale, and cattle prices. For herd size and composition, the model contains equations explaining inventories of beef cows, calves, steers, heifers, and bulls. The inventory of beef cows is the main driver of the cattle and beef sector, helping determine the number of calves, steers, heifers, and slaughter. The number of animals slaughtered plus cattle imports and exports determine beef production. Domestic beef consumption is computed by first adding beef imports and beef inventories at the beginning of the year to beef production during the year and then subtracting from this beef exports and beef inventories at the end of the year. Beef, pork, and poultry consumption help determine retail beef prices. 
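The stock-flow identity described above for domestic beef consumption can be written directly; the quantities below are hypothetical, not model output:

```python
def domestic_consumption(beginning_stocks, production, imports,
                         exports, ending_stocks):
    """Beef available for domestic consumption during the year:
    beginning stocks plus production plus imports, less exports
    and ending stocks."""
    return beginning_stocks + production + imports - exports - ending_stocks

# Hypothetical quantities in million pounds, carcass weight.
print(domestic_consumption(beginning_stocks=500, production=26_000,
                           imports=3_000, exports=2_400, ending_stocks=450))
```

Dividing the result by population yields the per capita consumption series of the kind shown in figure 8.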
Retail beef prices are critical in explaining prices that meatpackers and cattle producers receive, which, in turn, are an important component of returns to cow-calf producers in the model. Returns to cow-calf producers help explain the number of beef cows and calves, beef cows slaughtered, and heifers added to the beef cow herd or slaughtered. The cost of feed comes into play at several places in the model. For example, hay and corn prices help explain the number of heifers added to the beef cow herd and the number of beef cows slaughtered. Feedlot costs also explain the number of steers slaughtered and feeder steer prices. In addition, feed and other input costs are used in determining returns to cow-calf producers. Feed cost projections come from USDA’s Food and Agricultural Policy Simulator (FAPSIM). Changes in market concentration, marketing agreements, and forward contracts are not explicitly included in any of these modeled relationships. International trade in beef and cattle is included, although values for these trade variables are determined outside the livestock model. Beef export and import projections are based on USDA’s link system model. USDA has not reestimated the livestock model in its entirety since 1990, when it was first developed. Much of the data used in the original estimation are from the 1960s and 1970s, before rapid consolidation in the meatpacking sector and increased use of marketing agreements and forward contracts. Reestimating the model using the most current data available would better reflect structural and other changes and would reveal whether estimated values of key model parameters change and result in different projections of cattle prices. Originally published in 1990, documentation for the livestock model contained estimation results, including standard errors for parameter estimates, T ratios, and R squares, described as the “vital statistics of the model.”
Including these statistics in model documentation is standard practice. Since the model was first estimated, some components of the model in the production and demand sectors have been modified. According to USDA officials familiar with the model, it was last modified about 1994. However, there is no documentation on how such vital statistics may have changed as a result of these modifications. The 1990 documentation also described the validation of the livestock model, noting that individual parameter estimates were obtained for 1960–86 to test its forecasting ability during 1987–89. Validation measures such as mean percentage error and Theil’s relative change U1 statistics were reported, and the authors concluded that on the basis of these results, the model forecasted reasonably well. Since then, the model has not been further validated. An assistant administrator for ERS said that validating, or backcasting, the current version of the model makes sense. Current documentation of the livestock model includes a listing of the equations and values for estimated parameters, as shown in appendix II. USDA officials said that other documentation of the livestock model, including the data set used to estimate it, the standard measures of statistical goodness of fit, and the other diagnostics of the model’s performance described above, was lost during a move to a new location. They also said that budgetary cuts led to a lack of resources needed to provide better documentation of the model, as well as to replace lost data. USDA officials said that lack of resources has also negatively affected the quality of documentation for FAPSIM and the link system model. ITC uses two types of models to analyze the cattle and beef industry. One type is a model to support its mandate to investigate domestic injury claims resulting from imports being subsidized or selling in the United States at less than fair value.
The second type is a sector-specific model used to carry out broad economic studies, including those related to trade liberalization efforts. Neither type of model is detailed enough to project cattle prices or address the effects of structural changes associated with market concentration, marketing agreements, and forward contracts in the cattle and beef industry. When investigating domestic injury claims, ITC economists use COMPAS, a partial equilibrium model. COMPAS was designed to estimate how importers’ selling of a specific product below its fair price would affect price, sales, and revenue of that product in the competing domestic sector. Selling imports at less than fair value is sometimes referred to as dumping. COMPAS is also used to estimate the effects of governments’ subsidizing exports. To do so, COMPAS uses a standardized methodology, beginning with a supply and demand framework and assuming less than perfect substitutability between domestic and imported products. Values of demand and supply parameters needed to assess the effects of dumping are often obtained from other researchers’ estimates. ITC typically uses a range of estimated values for these parameters to reflect uncertainty. ITC commissioners may consider the results of this analysis in their deliberations. However, according to ITC officials, commissioners rely on the specifics of legal statutes and the record of facts collected during ITC’s investigation in reaching their decisions rather than on model results in assessing injury. ITC injury investigations involving dumping and subsidies must adhere to specific statutory criteria, procedures and time periods. The process starts with an interested party filing a petition with ITC and the Department of Commerce. 
For both dumping and subsidies investigations, ITC must make a preliminary determination of whether there is a “reasonable indication” that an industry is materially injured or threatened with material injury by the imports in question. If ITC’s determination is negative, the investigation ends. If it is affirmative, the investigation continues, and Commerce makes a preliminary determination of whether there has been dumping or subsidies and, if so, a preliminary calculation of what the dumping or subsidy margin would be. Commerce continues the investigation, regardless of its preliminary findings, and makes a final determination of dumping or subsidies and a final calculation of margins. If Commerce’s final determination is affirmative, ITC continues its investigation and makes a final determination of material injury or threat of material injury. Recently, COMPAS was used, in response to a 1998 petition by the Ranchers–Cattlemen Action Legal Foundation and others, to investigate Canadian and Mexican cattle alleged to have been sold in the United States at less than fair value. ITC staff used a range of estimates representing supply, demand, and product substitution relationships in the U.S. cattle market. These estimates, along with data on market share, Commerce’s dumping margins, transportation costs, and tariffs, were incorporated in COMPAS to analyze the likely effects of unfair pricing of cattle imports on the U.S. cattle industry. In the absence of dumping, ITC estimated that U.S. prices would have been between 0.2 percent and 1.8 percent higher, U.S. cattle producers’ revenue would have been from 0.3 percent to 1.8 percent higher, and U.S. cattle producers’ output would have been between 0 and 0.4 percent higher. The commissioners determined that the industry was not materially injured or threatened with material injury by these imports. This 1998 investigation reveals some limitations in the COMPAS model for analyzing problems in the cattle and beef industry.
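ITC’s practice of sweeping ranges of parameter estimates to bound its results can be illustrated with a stylized calculation. The formula below is a textbook back-of-envelope approximation, not the actual COMPAS specification, and every parameter value is hypothetical:

```python
def price_effect(margin, import_share, substitution, eps_demand, eps_supply):
    """Stylized approximation (NOT COMPAS): fractional rise in the domestic
    price if dumped imports had instead been sold at fair value."""
    return (import_share * substitution * margin) / (eps_demand + eps_supply)

# Sweep hypothetical parameter ranges to reflect uncertainty, as ITC does.
effects = [
    price_effect(0.05, share, sub, ed, es)   # 5% dumping margin (hypothetical)
    for share in (0.05, 0.10)                # import share of consumption
    for sub in (0.5, 2.0)                    # import/domestic substitutability
    for ed in (0.6, 1.0)                     # demand elasticity (absolute value)
    for es in (0.5, 1.5)                     # supply elasticity
]
print(f"estimated price effect: {min(effects):.2%} to {max(effects):.2%}")
```

Reporting a range rather than a point estimate is what produced results such as the 0.2 percent to 1.8 percent interval in the 1998 investigation.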
ITC’s estimates of the effects of these imports relied on the value of the dumping margin Commerce determined and on supply and demand price elasticities (parties to the investigation are requested to provide feedback on these values and other expert sources are consulted). In the absence of a dumping investigation and data on a dumping margin, COMPAS cannot be readily applied to assess the effect of an import quantity surge. Furthermore, while COMPAS can be used to estimate the effect of price changes in the cattle or beef sector, the model does not explicitly link downstream beef-sector effects to the upstream cattle sector. COMPAS also does not explicitly account for changes in concentration in the meatpacking industry, marketing agreements, and forward contracts. The ITC 1998 investigation reveals other analytical issues. To account for uncertainty about the values of key parameters used in COMPAS, such as price elasticity or sensitivity of U.S. demand and supply of cattle and the extent to which imported cattle can be substituted for U.S. cattle, ITC used a fairly wide range of estimates for the parameters. In addition, while ITC was informed that imports affected some U.S. producers and regions more than others, published data at this level of detail are often unavailable, and most studies that have estimated price sensitivities used national data. ITC uses various models to carry out other economic studies examining the effects of broad trade policy changes, such as NAFTA. For example, ITC issued a study in 1997 on the effect of NAFTA and the Uruguay Round on U.S. trade of cattle and beef with Canada and Mexico, using an econometric model that estimated effects on trade volume, but did not estimate or predict effects on U.S. cattle prices. ITC has also used computable general equilibrium (CGE) models to assess the likely effects on various sectors of the U.S. economy from major trade liberalization. 
CGE models are generally not specific enough to predict cattle prices or to address structural changes associated with market concentration, marketing agreements, and forward contracts. The models that USDA and ITC use do not explicitly account for the structural changes occurring in the industry from greater concentration in the meatpacking industry and greater use of marketing agreements and forward contracts. According to USDA, its current research on these structural changes is inconclusive about how they affect cattle prices paid to cattle producers. USDA and others have conducted research on the effects of these structural changes on domestic cattle prices. Overall, research conducted by or for the Grain Inspection, Packers and Stockyards Administration (GIPSA), a USDA agency, has not found conclusive evidence linking these changes to domestic cattle price changes. For example, GIPSA reported in 1996 that the findings of an extensive literature review were inconclusive concerning the effects of concentration, primarily because of limitations in methods or data in the research reviewed. This report also stated that while the body of evidence from the literature was insufficient to support a finding of noncompetitive behavior, GIPSA also could not conclude that the industry is competitive. The study recommended that future research focus more directly on data disaggregation at the firm and plant levels to provide a better understanding of the dynamics of individual firm behavior and rivalry between firms. Assessing competitiveness from available data was also difficult in an ERS study on the causes and effects of consolidation and concentration. Although no other manufacturing industry has shown as large an increase in concentration since the U.S. Bureau of the Census began regularly publishing concentration data in 1947, this analysis did not support conclusions about the exercise of market power by beef packers; it concluded that models need to be improved to more fully incorporate relevant determinants of company behavior. Difficulty in assessing competitiveness from available data held true for another study, Effects of Concentration on Prices Paid for Cattle, contracted for by GIPSA. The study’s summary states: “The analysis did not support any conclusions about the exercise of market power by beef packers. It appears that improved models are needed to more fully incorporate relevant determinants of firms’ behavior.” The ERS study, using data from the Census of Manufactures for 1963–92, found that meatpackers had shifted toward larger plants that annually slaughtered at least half a million steers and heifers. The study found that scale economies were modest but extensive. The largest meatpacking plants maintained only small cost advantages (1 to 3 percent) over smaller plants, but these modest scale economies appeared to extend throughout all sizes of 1992 plants. According to ERS, if larger meatpackers realize lower costs, then concentration, by reducing industry costs, can lead to improved prices for consumers and livestock producers. However, because meatpackers face fewer competitors, they could reduce prices paid to livestock producers, and they might be able to raise meat prices charged to wholesalers and retailers. Another study, sponsored by GIPSA, examined the underlying cost relationship believed to motivate packer behavior. This study used monthly cost and revenue data for 1992–93 from a GIPSA survey of the 43 largest U.S. beef packing plants. Estimates from this study indicated significant cost economies and little if any depression of cattle prices or excess profitability in the meatpacking industry. GIPSA has also studied the effects on cattle prices of the greater use of marketing agreements and forward contracts.
Some of these studies have found an inverse or negative relationship between captive supplies, which encompass marketing agreements and forward contracts, and spot market prices, but none has yet shown that captive supplies cause low spot or cash market prices. For example, GIPSA entered into a cooperative agreement in March 1998 with economists from two universities to conduct an econometric analysis of Texas cattle data to determine whether marketing agreements and other contracting methods for procuring cattle (captive supplies) had an adverse effect on the prices paid for cattle on the spot market. The researchers said that their statistical analysis did not support the notion that reducing captive supply purchases or increasing spot market purchases would result in an increase in the spot price.

Cattle production is an important part of American agriculture, and industry participants rely on USDA data and modeling results in deciding how best to plan and operate their businesses. However, the primary model USDA uses for projecting critical information that the industry needs has not been well maintained. The model has not been reestimated in its entirety, and it has not been validated by comparing its projections with actual results since its construction in 1989, despite significant changes in the structure of the industry. The data sets used to estimate the livestock model, along with standard measures of statistical goodness of fit and other diagnostics of model performance, have been lost, and USDA has no plans to replace them. Statistical goodness of fit and other diagnostics are also unavailable for USDA's link system and FAPSIM models, which provide key information for the livestock model. This information is critical to model evaluation, and maintaining it simply constitutes good housekeeping. This lack of transparency carries with it the risk that projections will be perceived as emanating from a black box.
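The goodness-of-fit diagnostics referred to here are straightforward to compute once model projections can be compared with actual outcomes. A minimal sketch of common measures (root mean squared error, mean absolute percentage error, and R-squared), using hypothetical price series:

```python
# Common goodness-of-fit diagnostics for validating model projections
# against actual outcomes. The price series below are hypothetical
# illustrations, not USDA data.
import math

def validation_stats(actual, projected):
    """Return (RMSE, MAPE in percent, R-squared) for a projection series."""
    n = len(actual)
    errors = [p - a for a, p in zip(actual, projected)]
    rmse = math.sqrt(sum(e ** 2 for e in errors) / n)
    mape = 100 * sum(abs(e) / a for a, e in zip(actual, errors)) / n
    mean_actual = sum(actual) / n
    ss_res = sum(e ** 2 for e in errors)
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot
    return rmse, mape, r2

# Hypothetical annual cattle prices, dollars per hundredweight.
actual = [72.0, 68.5, 65.0, 61.2, 66.8, 69.5]
projected = [70.5, 69.0, 66.5, 63.0, 65.5, 71.0]

rmse, mape, r2 = validation_stats(actual, projected)
print(f"RMSE: {rmse:.2f}  MAPE: {mape:.1f}%  R-squared: {r2:.3f}")
```

Diagnostics like these, recomputed after each reestimation, are what would let outside users judge how well the livestock model tracks actual prices.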
To help ensure that models USDA uses to project cattle prices are properly maintained and reflect the most current information on the cattle and beef industry, we recommend that the secretary of agriculture direct ERS to periodically reestimate and validate the livestock model. To ensure that models USDA uses to project cattle prices are properly documented, we recommend that the secretary of agriculture direct ERS to provide basic documentation on these models. This would include documenting (1) the data set used to estimate the model, (2) standard measures of statistical goodness of fit and other diagnostics of model performance, and (3) any changes made to improve or otherwise update the model. See appendix VII. The expert panel we convened to identify the most important factors affecting cattle prices and producers’ incomes listed numerous demand and supply factors, including market concentration, marketing agreements, forward contracts, and international trade. Many of the most important factors cause consumer demand for beef to move up or down, in turn pulling cattle prices and producers’ revenues up or down. On the supply side, the most important factors motivate producers to contract or expand herd size, in turn pushing cattle prices up or down. The panel enumerated key input costs, which, together with producers’ revenues, determine incomes. Other important demand and supply factors underscore the effects that feedlots, meatpackers, and retailers may have on cattle prices and producers’ incomes. The panel also identified key international trade factors that affect cattle demand and supply. Appendix III contains a complete list of how the 40 panelists scored all factors in importance. The factors the panel identified can be summarized under four broad, overlapping headings: domestic cattle demand, domestic cattle supply, international trade, and structural change. 
Structural change includes changes in market concentration and growing use of marketing agreements and forward contracts, all of which have been associated with industrialization in the agricultural sector. A characteristic of industrialization is a trend toward standardized methods of production and economies of scale, as when production costs decline as plant size increases. The panel believed that domestic cattle demand and supply are the fundamental forces driving cattle prices and producers' incomes. Ninety-five percent or more considered that these demand and supply factors were important or most important (see fig. 19). (We had asked the panelists to rate each factor as least important, somewhat important, moderately important, important, or most important.) The panelists agreed less about the importance of international trade and structural change (fig. 20). While 31 percent of the panel designated structural change important or most important, 30 percent believed it somewhat or least important. Forty percent rated structural change moderately important. A similar result held for international trade, with 28 percent rating it important or most important and 41 percent judging it somewhat or least important. The panel pointed out a number of important factors that influence consumer demand for beef, which has a cascading effect on the demand for cattle. As consumer demand for beef rises or falls, so does the demand for cattle. Changes in the demand for cattle directly affect cattle prices and cattle sales revenues, an important source of producers' income. Figure 21 shows that more than half the panel believed that consumer preferences, the prices of substitutes for beef, and health concerns tied to food safety and diet were important or the most important determinants of cattle prices and producers' incomes as they affected consumer demand.
Ninety-five percent of the panel viewed product quality and 79 percent saw product convenience as important or most important in driving consumer preferences. Poultry and pork were the most significant substitutes for beef, with nearly 80 percent of the panel rating poultry and pork prices important or most important. The panelists also identified a number of other factors in the retail and meatpacking sectors that influence cattle prices and producers' incomes through their effect on the demand for cattle and beef. The majority of the panel believed that the degree to which meatpacking plants were being used—packer capacity utilization—and the costs of retailing beef products were important or most important through their influence on meatpackers' demand for cattle and retailers' demand for beef (see fig. 22). Forty percent of the panel believed that by-product values, such as hides, were important or most important, while 29 percent judged that the wages meatpackers paid were important or most important. We asked the panelists to judge the importance of these factors separately from any effects that related structural change, such as economies of scale, might have. The panel pointed out a number of important factors that influence producers' decisions about how many cattle to supply to the market. Changes in the supply of cattle directly affect cattle prices. Figure 23 suggests that producers' decisions are set by how much it costs to produce cattle with certain quality characteristics and by the prices they expect to receive for those cattle. Producers' incomes are determined after subtracting input costs from sales revenues. Expected prices are important because the relatively long biological cycle of cattle makes it necessary for producers to make decisions about herd size months and even years before they sell animals or know their prices.
The cattle cycle, referring to increases and decreases in herd size over time, is determined by expected cattle prices and the time it takes to breed, birth, and raise cattle to market weight, among other things. The underlying risk in producers’ decisions leads producers to use risk management techniques and participate in futures markets, where producers can lock in futures prices as a hedge against the possibility of receiving prices lower than they expect. Technological changes have also been a factor. Growth hormones and new methods of measuring carcass quality are examples of production technology. Advances in computer technology have meant enhanced marketing capabilities. The panel believed that feeding cattle was the most significant input cost, with 100 percent rating feed costs and 53 percent rating forage costs important or most important. Eighty-three percent of the panel viewed weather and 73 percent saw grain and oilseed policies as important or most important in their influence on feed costs. Eighty-one percent of the panel judged weather to be important or most important in affecting forage costs. Ninety percent of the panel judged grade and 81 percent saw yield as important or most important factors affecting cattle quality. The panel believed that exports and imports of beef and live cattle affect domestic prices and producers’ incomes. Seventy-one percent regarded beef exports as important or most important (fig. 24). These exports, representing foreign demand for U.S. beef, affect cattle demand and prices through their effect on beef prices. An increase in beef exports raises beef prices, which in turn increase the demand for cattle and raise cattle prices. Beef imports, representing the foreign supply of beef, also affect domestic demand for cattle through their effect on beef prices. For example, an increase in beef imports causes beef prices to fall, which in turn reduces the domestic demand for cattle and causes cattle prices to fall. 
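The short hedge mentioned above can be illustrated with simple arithmetic. The sketch below is hypothetical: the prices are illustrative, basis risk and transaction costs are ignored, and the constant gap between cash and futures prices is an assumption.

```python
# A producer locks in a price by selling (shorting) a futures contract,
# then buying it back when the cattle are sold on the cash market.
# All prices are hypothetical, in dollars per hundredweight.

def short_hedge(futures_sold, futures_bought, cash_price):
    """Effective price = cash sale price + gain/loss on the futures position."""
    futures_gain = futures_sold - futures_bought
    return cash_price + futures_gain

# Case 1: prices fall; the futures gain offsets the lower cash price.
print(short_hedge(futures_sold=70.0, futures_bought=62.0, cash_price=61.0))  # 69.0

# Case 2: prices rise; the futures loss offsets the higher cash price.
print(short_hedge(futures_sold=70.0, futures_bought=76.0, cash_price=75.0))  # 69.0
```

In both cases the producer nets the same effective price, which is the point of the hedge: the upside is given up in exchange for protection against the downside.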
Exports of live cattle, representing foreign demand for U.S. cattle, and imports of live cattle, representing the foreign supply of cattle to the United States, directly affect cattle prices. As for the components of international trade, the panelists agreed more about the importance of beef exports than about the importance of beef imports and cattle exports and imports. Seventy-one percent rated beef exports important or most important, with 8 percent voting somewhat important and none checking least important. In contrast, 32 percent believed beef imports were important or most important, while 32 percent believed they were somewhat or least important. Seventy-eight percent of the panel believed exports of live cattle were somewhat or least important, while 8 percent rated cattle exports important or most important. Forty-seven percent believed cattle imports were somewhat or least important, while 16 percent believed they were important or most important. We also asked the panel to assess the importance of international trade 20 and 10 years ago and 5 years from now in determining cattle prices and producers' incomes. Most panelists believed that international trade was less important 20 years ago than 10 years ago and believed that it will be more important 5 years from now (fig. 25). For instance, nearly half the panel thought that international trade will be important or most important 5 years from now. In contrast, 95 percent believed that international trade was somewhat or least important 20 years ago. In addition, the panel pointed out several factors that influence how much U.S. beef other nations buy compared with how much foreign beef the United States buys. They thought trade barriers were the most significant factor determining the difference between beef exports and imports, with 81 percent of the panel regarding these barriers as important or most important.
The majority of the panel viewed currency exchange rates, foreign income, disease, and the use of hormones as important or most important in affecting net imports of beef. The panel also thought trade barriers were the most significant determinant of trade in live cattle between the United States and other nations, with 65 percent rating it important or most important. Fifty-five percent assessed disease as important or most important in determining trade in live cattle. The panelists identified numerous factors that may have altered the structure of the demand and supply relationships that link the prices and incomes that cattle producers receive to the actions that meatpackers, retailers, and consumers take. We have already discussed some of these factors, such as growing consumer awareness of health and food safety issues and greater emphasis on product convenience. The panelists also cited the consolidation of the meatpacking sector into fewer firms operating larger plants and vertical coordination among meatpackers, producers, and retailers. Figure 26 lists a number of factors that researchers have (1) scrutinized in recent years for their potential effect on cattle prices and producers’ incomes and (2) associated with structural change; the figure shows how important the panel believed these factors are. Economies of scale is the most significant factor associated with structural change in the cattle and beef industry—72 percent of the panel viewed it as important or most important. It was viewed as especially important in meatpacking, where 85 percent of the panel judged it to be important or most important. Some researchers believe that economies of scale and other types of cost economies have been important factors driving the meatpacking sector and that market power associated with concentration has played a relatively minor role in determining cattle prices. 
Technological change, sometimes associated with economies of scale, is also important, especially in meatpacker production, where 76 percent of the panel viewed it as important or most important. The panel judged concentration to be more important in the meatpacking sector, where the majority thought it important or most important. The panel judged it less important in the retail and feedlot sectors. Efficiency of the supply chain—another factor sometimes associated with structural change and referring to the distribution system that moves products beyond the farm gate to the final point of consumption—is also important. Sixty percent of the panel rated it important or most important. Some believe that greater efficiency in the distribution system has an upward effect on cattle prices. Almost half the panel thought that vertical coordination, involving the use of marketing agreements and forward contracts as well as value-based marketing and pricing, was important or most important. Value-based marketing and pricing scored highest in importance among this type of coordination, with 70 percent of the panel rating it important or most important. Debate has been considerable about what effect vertical coordination has on cattle prices. Some believe that thin spot markets for cattle result from increased vertical coordination between meatpackers and cattle producers, leading to lower spot prices for cattle and, through pricing formulas, to lower prices in marketing agreements and forward contracts. Other analysts disagree. Forty-three percent of the panel viewed thin spot markets as important or most important. Thinness in markets generally refers to a relatively small volume of market transactions and relatively high price volatility. In assessing structural change, the panelists agreed less about the importance of industry concentration and thin spot markets than about the importance of economies of scale. 
While 35 percent believed that concentration was important or most important, 43 percent believed it somewhat or least important. Similarly, 43 percent believed thin spot markets were important or most important, while 38 percent labeled them somewhat or least important. In contrast, 72 percent of the panel assessed economies of scale as important or most important, 8 percent somewhat important, and none least important. We asked the panel to assess the importance of structural change 20 years ago, 10 years ago, and 5 years from now in determining cattle prices and producers’ incomes. Most panelists believed that structural change was less important 20 years ago than 10 years ago and believed that it will be more important 5 years from now (fig. 27). For instance, nearly half the panel thought that structural change will be important or most important 5 years from now. In contrast, nearly half the panel believed that structural change was somewhat or least important 20 years ago. The expert panel we convened identified numerous demand and supply factors that it believed to be important determinants of cattle prices and producers’ incomes. The panel’s findings underscore the importance of demand and supply relationships throughout the cattle and beef industry, from cow-calf producer to retail consumer. Some factors that the panel scored relatively high in importance are included in USDA’s livestock model—such as feed costs and cattle inventory features of the cattle cycle—while others—such as product quality and the convenience aspects of consumer demand and grade and yield characteristics of cattle quality— are not explicitly covered. Economies of scale, capacity utilization in meatpacking, costs of retailing beef products, and value-based marketing are some of the other factors that the panel scored relatively high in importance but that the livestock model does not specifically address. 
The panel also believed that international trade and structural change will become more important in the future, with implications for future modeling. For factors not included in the livestock model, it is unclear to what extent their influence is captured indirectly. For example, in the livestock model, the retail price of beef and, therefore, cattle prices are influenced by the consumption of beef, pork, and poultry, which depends on consumer preferences. Similarly, the effects of economies of scale and market concentration may be hidden in the relationship between boxed beef prices, which represent prices meatpackers receive for their products, and cattle prices. However, because the livestock model does not explicitly account for these factors, it is not equipped to shed light on their relative importance when it attempts to explain and project cattle prices. There is no ready way to know how important these excluded factors are in the cattle price projections of the livestock model. To improve USDA’s ability to answer questions about the current and future state of the cattle and beef industry, we recommend that the secretary of agriculture direct ERS to (1) review the findings of our expert panel regarding important factors affecting cattle prices and producers’ incomes and (2) prepare a plan for how to address these factors in future modeling analyses of the cattle and beef industry. See appendix VII. When we asked the expert panel to identify problems in developing a comprehensive and reliable analysis for projecting the most important factors that affect cattle prices and producers’ incomes, the panel mentioned many modeling and data issues. Some pointed to a web of demand and supply connections that tie producers to packers, retailers, and consumers and to gaps in how much we know about how these connections affect cattle producers. Much of what the panel pointed to deals directly or indirectly with structural change. 
Other panel members pointed to the need for better data for analyzing consumer demand. They cited a number of problems regarding cattle supply and prices and international trade. An overarching issue was whether one all-encompassing model can adequately address the variety of questions that policymakers and stakeholders raise. Altogether, the panel identified 41 modeling and data issues. Appendix IV lists them all and their scores by importance and feasibility of resolution. From this list, the panel identified a number of actions it believed the government should take to advance our knowledge in this area; the actions focus primarily on the need for better data. Good data are basic to any comprehensive analysis of cattle prices and producers' incomes. In the absence of good data, the most sophisticated method of analysis is likely to produce questionable results. The panel indicated that analyzing cattle prices and producers' incomes extends beyond the confines of cow-calf producers, stockers, and feedlots. Table 1 lists modeling and data issues emphasizing the interrelated nature of the cattle and beef industry and, with it, the role of structural change. The panel's comments suggested that policymakers, stakeholders, and others concerned about the industry now have a limited ability to analyze structural change and assess how it affects cattle prices and producers' incomes. A majority of the panel believed that the unavailability of, or lack of access to, detailed data linking information on producers, processors, and retailers is an important problem in conducting a comprehensive analysis of changes to the cattle and beef industry. The U.S. Census Bureau collects data on establishments and firms for parts of the cattle and beef industry, including animal slaughtering and processing, grocery and related product wholesalers, retail food stores, and restaurants. Every 5 years, the bureau conducts a census that it supplements monthly and annually by sample surveys.
For instance, the census of manufacturing, which includes animal slaughtering and processing, collects data on the value of shipments, payroll and employment by location, products shipped, the cost of materials, inventories, capital expenditures and depreciable assets, fuel and energy costs, hours worked, payroll supplements, and rental payments. Fewer data are collected from the censuses on wholesale and retail trade and food services. In addition, the monthly and annual surveys contain less information than the 5-year census. Individual panelists' remarks suggest that these censuses do not contain sufficiently detailed information on the cattle and beef industry. The panel believed that poor retail data and the difficulty of quantifying factors that influence consumer demand hinder accurate model projections (see table 2). Given the importance that the panel gave to consumer demand for beef, including the role of consumer preferences, product convenience, and health concerns, making progress in this area could improve model projections of cattle prices and producers' incomes. Individual panelists' remarks indicate that retail data may lack consistent retail-level micro detail on prices and sales of fresh meats. Some private sources of retail data, such as Information Resources, Inc., offer data on sales and pricing, collected weekly from supermarkets across the United States. These data, from grocery store scanners, reflect actual consumer purchases at both regular and sale prices. In addition, USDA reports retail prices for beef, but these prices reflect not actual purchases by consumers but, rather, an average of the prices of selected beef cuts offered for sale, without regard to the amount purchased. USDA first obtains average retail prices from the Bureau of Labor Statistics, which collects them to calculate the consumer price index (CPI).
The bureau collects regular and sales prices from grocery stores and averages these prices, regardless of the amount purchased at each price. Then, USDA weights these prices by each cut’s proportion of a cattle carcass. As a result, USDA does not report retail prices on the basis of actual consumer purchases of beef products. The lack of current-period quantity-weighted retail prices, which the panel cited, has been a problem in the pork industry, too. The panel identified several issues important in modeling cattle supply related to the cattle cycle, expectations, and long-term variables dealing with technological change and policy changes in feed crops (see table 3). In addition, it cited problems with cattle prices, suggesting that vertical coordination in the form of contracts and value-based marketing is reducing how representative reported prices are (see table 4). The panel also pointed to problems with cattle price data not being adjusted for volume and grade—a cattle quality consideration we noted in chapter 3. We have discussed similar problems with hog prices. In April 2001, USDA’s AMS began collecting and reporting cattle and other livestock market data, including prices, under the livestock mandatory reporting (LMR) program, as required by the Livestock Mandatory Price Reporting Act of 1999. Unlike AMS’s previous voluntary market news program, which relied on industry cooperation to obtain information on negotiated or cash sales, LMR is collecting data from meatpackers on purchase prices in forward contracts and other transactions using price formulas, such as those found in marketing agreements. Under the LMR program, AMS is also collecting data on the quantity of cattle purchased on a live weight and carcass basis, cattle weight, the quality grade of cattle, and price premiums or discounts. These data may help in future modeling efforts. 
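The weighting step described above amounts to a weighted average of cut prices, with each cut's share of the carcass as its weight. A minimal sketch; the cuts, prices, and carcass proportions below are hypothetical, not USDA's actual weights:

```python
# A carcass-weighted average retail beef price, in the spirit of the
# USDA procedure described above. Cut names, prices, and carcass
# shares are hypothetical illustrations.

cuts = {
    # cut: (average retail price in $/lb, share of carcass)
    "ground beef": (2.00, 0.45),
    "chuck roast": (2.80, 0.25),
    "round steak": (3.20, 0.20),
    "sirloin":     (5.00, 0.10),
}

# The carcass shares should sum to 1.
assert abs(sum(share for _, share in cuts.values()) - 1.0) < 1e-9

weighted_price = sum(price * share for price, share in cuts.values())
print(f"Composite retail beef price: ${weighted_price:.2f}/lb")
```

Because the weights are fixed carcass proportions rather than quantities actually purchased, a composite built this way cannot reflect consumers shifting toward cuts on sale, which is the panel's objection to the current retail price series.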
The panel identified international trade issues, such as the difficulty of quantifying the effects of trade barriers, as a factor in modeling (see table 5). Difficulty quantifying the effects of trade barriers could be significant in light of the panel’s assessment of their importance in determining beef net exports and trade in live cattle. Table 6 presents important questions the panelists raised about the purpose of modeling cattle prices and producers’ incomes and the feasibility of developing a “one size fits all” model. This is relevant in evaluating USDA’s and ITC’s models because they were not designed to answer questions about the effects of market concentration, marketing agreements, and forward contracts. In addition, these models are national in scope and were not designed to analyze regional effects. Eighty-five percent of the panelists believed that government action is needed to resolve the data and modeling issues they identified as problems in developing a comprehensive and reliable analysis of cattle prices and producers’ incomes. All who recommended government action pointed to the need for better data for conducting analysis. The panelists expressed concern about the availability of and access to data at all levels of the demand and supply chain that links producers to consumers. They also stressed that the quality of the data that are now being collected on the cattle and beef industry could be improved, citing the need for more representative, reliable, and consistent data. These data issues are important because, as one panelist succinctly said: “The results of the models are only as good as the data used to estimate them.” Table 7 lists the top five issues that the panelists believed warrant government action. Ninety-four percent of those who cited the need for government action selected one or more of the data issues in table 7. Appendix V presents the panelists’ own descriptions of their beliefs about these issues. 
Proprietary or confidential data, the first issue in table 7 and the one receiving the most votes for government action, is relevant to the second and fifth issues in table 7, dealing with cattle prices, because of contracting for cattle. It is an issue addressed by the Livestock Mandatory Price Reporting Act, under which USDA is required to publish data on cattle prices in a manner that protects the identity of those who report them and preserves the confidentiality of proprietary transactions. USDA has tried to preserve confidentiality by reporting data only if at least three reporting entities supply the information and no single entity is responsible for reporting 60 percent or more of the data. According to USDA, this resulted in the withholding of nearly 30 percent of the daily swine and cattle reports from publication, because of confidentiality, between April 2 and June 14, 2001. To reduce the amount of data being withheld, USDA recently announced a new confidentiality guideline; it believes that had this guideline been in place earlier, less than 2 percent of the daily swine and cattle reports would have been withheld from publication during that period. The panelists also offered general and specific comments about how the government can help address the issues they identified in table 7. Table 8 enumerates some of these comments. Appendix V presents excerpts of all the panelists' comments. The panelists expressed a range of views about what the federal government's primary role should be in addressing these data and modeling issues. Some panelists commented that the government should emphasize data collection, while others saw the need for more government analysis as well. Table 9 presents some of their specific comments. The expert panel we convened identified numerous data and modeling issues that need to be addressed if a more comprehensive analysis of the cattle and beef industry is to be conducted.
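USDA's original confidentiality screen, requiring at least three reporting entities with no single entity supplying 60 percent or more of the data, can be sketched as a simple filter. The entity volumes below are hypothetical:

```python
# The "3-entity / 60-percent" confidentiality screen described above:
# publish data only if at least 3 entities reported and no single
# entity accounts for 60 percent or more of the total.
# Entity volumes are hypothetical.

def publishable(volumes, min_entities=3, max_share=0.60):
    """Return True if the reported volumes pass the confidentiality screen."""
    reported = [v for v in volumes if v > 0]
    if len(reported) < min_entities:
        return False
    total = sum(reported)
    return all(v / total < max_share for v in reported)

print(publishable([500, 300, 200]))  # True: 3 entities, largest share is 50%
print(publishable([900, 100, 100]))  # False: largest entity has about 82%
print(publishable([600, 400]))       # False: only 2 reporting entities
```

A screen like this illustrates the tension USDA faced: tightening the thresholds protects reporters' identities but withholds more reports, which is why USDA revised the guideline.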
However, the panel emphasized the importance of carefully defining the questions to be answered before an ambitious data collection and modeling effort is undertaken. The majority of the panel believed that the federal government should take steps to improve the quantity and quality of the data available to researchers, thereby improving understanding of the factors that explain cattle prices and producers' incomes. To improve USDA's ability—and that of the research community as a whole—to answer questions about the current and future state of the cattle and beef industry, we recommend that the secretary of agriculture direct AMS, ERS, GIPSA, and NASS to (1) review the findings of our expert panel regarding important data and modeling issues and (2), in consultation with other government departments or agencies responsible for collecting relevant data, prepare a plan for addressing the most important data issues that the panel recommended for government action, considering the costs and benefits of such data improvements, including tradeoffs in departmental priorities and reporting burdens. See appendix VII.
|
Concerns have been raised that the economic models used by the U.S. Department of Agriculture (USDA) and the U.S. International Trade Commission do not account for all the factors that affect cattle prices and producer incomes. GAO reviewed USDA's livestock model to determine whether it incorporates imports, market concentration, marketing agreements, and forward contracts. Drawing on its review of modeling practices, GAO's expert panel concluded that domestic cattle demand and supply are the fundamental forces driving cattle prices and producer incomes. The panel identified the issues that must be addressed to develop a comprehensive modeling system that predicts cattle prices and producer incomes. The panel recommended the collection of better data to quantify several important factors omitted from the model. The panel also wanted to see a more complete characterization of the supply and demand relationships connecting the cattle producer to the final consumer. The panel's emphasis on a more complete characterization of the cattle and beef industry underscores the idea that the demand for cattle is ultimately driven by consumer demand for beef and by other demand and supply forces linking cattle producers to feedlots, meatpackers, and retailers.
|
As a result of a 1995 Defense Base Closure and Realignment Commission decision, Kelly Air Force Base, Texas, is to be realigned and the San Antonio Air Logistics Center, including the Air Force maintenance depot, is to be closed by 2001. Additionally, McClellan Air Force Base, California, and the Sacramento Air Logistics Center, including the Air Force maintenance depot, are to be closed by July 2001. To mitigate the impact of the closures on the local communities and center employees, the administration announced in 1995 its decision to maintain certain employment levels at these locations. Privatization-in-place was one initiative for meeting these employment goals. Since that decision, Congress and the administration have debated the process and procedures for deciding where and by whom the workloads at the closing depots should be performed. Central to this debate are concerns about the excess facility capacity at the Air Force's three remaining maintenance depots and the legislative requirement—10 U.S.C. 2469—that for workloads exceeding $3 million in value, a public-private competition must be held before the workloads can be moved from a public depot to a private sector company. Because of congressional concerns raised in 1996, the Air Force revised its privatization-in-place plans to provide for competitions between the public and private sectors as a means to decide where the depot maintenance workloads would be performed. The first competition was for the C-5 aircraft depot maintenance workload, which the Air Force awarded to the Warner Robins depot in Georgia on September 4, 1997. During 1997, Congress continued to oversee DOD's strategy for allocating workloads currently performed at the closing depots. The 1998 Defense Authorization Act required that we and DOD analyze various issues related to the competitions at the closing depots and report to Congress concerning several areas.
First, within 60 days of its enactment, the Defense Authorization Act requires us to review the C-5 aircraft workload competition and subsequent award to the Warner Robins Air Logistics Center and report to Congress on whether (1) the procedures used provided an equal opportunity for offerors without regard to performance location; (2) the procedures complied with applicable law and the FAR; and (3) the award resulted in the lowest total cost to DOD. Second, the act provides that a solicitation may be issued for a single contract for the performance of multiple depot-level maintenance or repair workloads. However, the Secretary of Defense must first (1) determine in writing that the individual workloads cannot as logically and economically be performed without combination by sources that are potentially qualified to submit an offer and to be awarded a contract to perform those individual workloads and (2) submit a report to Congress setting forth the reasons for the determination. Further, the Air Force cannot issue a solicitation for combined workloads until at least 60 days after the Secretary submits the required report. Third, the authorization act also provides special procedures for the public-private competitions for the San Antonio and Sacramento workloads. For example, total estimated direct and indirect cost and savings to DOD must be considered in any evaluation. Further, no offeror may be given preferential consideration for, or be limited to, performing the workload at a particular location. As previously stated, the act also requires that we review the solicitations and the competitions to determine if DOD has complied with the act and applicable law. We must provide a status report on the Sacramento and San Antonio competitions within 45 days after the Air Force issues the solicitations, and our evaluations of the completed competitions are due 45 days after the award for each workload.
Finally, the act requires that DOD report on the procedures established for the Sacramento and San Antonio competitions and on the Department’s planned allocation of workloads performed at the closing depots as of July 1, 1995. DOD issued these reports on February 3, 1998. The Air Force cannot issue final solicitations until at least 30 days after these reports are submitted and all other requirements of the act are completed. We have had problems in gaining access to information required to respond to reporting requirements under the 1998 National Defense Authorization Act. Our lack of access to information is seriously impairing our ability to carry out our reporting responsibilities under this act. We experienced this problem in doing our work for our recent report to Congress concerning DOD’s determination to combine individual workloads at the two closing logistics centers into a single solicitation. We originally requested access to and copies of contractor-prepared studies involving depot workloads at the Sacramento Air Logistics Center on December 18, 1997. The Air Force denied our request, citing concerns regarding the release of proprietary and competition-sensitive data. It was not until January 14, 1998, and only after we had sent a formal demand letter to the Secretary of Defense on January 8, 1998, that the Air Force agreed to allow us to review the studies. Even then, however, the Air Force limited our review to reading the documents in Air Force offices and required that, without further permission, no notes, copies, or other materials could leave those premises. The limited access provided came so late that we were unable to review the documents adequately and still meet our statutorily mandated reporting deadline of January 20. As of this date, we have been provided only heavily redacted pages from two studies. These pages do not contain the information we need.
Further, the Air Force did not provide us even limited access to the final phase of the studies, which were dated December 15, 1997. Although we were able, with difficulty, to complete our report, we simply cannot fulfill our responsibilities adequately and in a timely manner unless we receive full cooperation of the Department. To meet our remaining statutory requirements, we have requested several documents and other information related to the upcoming competitions for the closing depots’ workloads. Air Force officials said they would not provide this information until the competitions are completed. However, we will need to review solicitation, proposal, evaluation, and selection documents as they become available. For example, we will need such things as the acquisition and source selection plans, the proposals from each of the competing entities, and documents relating to the evaluation of the proposals and to the selection decision. Appendix I to this statement contains our letter to the Senate Armed Services Committee detailing our access problems. Our basic authority to access records is contained in 31 U.S.C. 716. This statute gives us a very broad right of access to agency records, including the procurement records that we are requiring here, for the purpose of conducting audits and evaluations. Moreover, the procurement integrity provision in 41 U.S.C. 423 that prohibits the disclosure of competition-sensitive information before the award of a government contract specifies at subsection (h) that it does not authorize withholding information from Congress or the Comptroller General. We have told the Air Force that we appreciate the sensitivity of agency procurement records and have established procedures for safeguarding them. As required by 31 U.S.C. 716(e)(1), we maintain the same level of confidentiality for a record as the head of the agency from which it is obtained. 
Further, our managers and employees, like all federal officers and employees, are precluded by 18 U.S.C. 1905 from disclosing proprietary or business-confidential information to the extent not authorized by law. Finally, we do not presume to have a role in the selection of the successful offeror. We recognize the need for Air Force officials to make their selection with minimal interference. Thus, we are prepared to discuss with the Air Force steps for safeguarding the information and facilitating the Air Force’s selection process while allowing us to meet statutory reporting responsibilities. In response to congressional concerns regarding the appropriateness of its plans to privatize-in-place the Sacramento and San Antonio maintenance depot workloads, the Air Force revised its strategy to allow the public depots to participate in public-private competitions for the workloads. In the 1998 Defense Authorization Act, Congress required us to review and report on the procedures and results of these competitions. The C-5 aircraft workload was the first such competition. We issued our required report evaluating the C-5 competition and award on January 20, 1998. After assessing the issues required under the act relating to the C-5 aircraft competition, we concluded that (1) the Air Force provided public and private offerors an equal opportunity to compete without regard to where work would be performed; (2) the procedures did not appear to deviate materially from applicable laws or the FAR; and (3) the award resulted in the lowest total cost to the government, based on Air Force assumptions and conditions at the time of award. Nonetheless, public and private offerors raised issues during and after the award regarding the fairness of the competition. First, the private sector participants noted that public and private depot competitions awarded on a fixed-price basis are inequitable because the government often pays from public funds for any cost overruns it incurs.
Private sector participants also questioned the public depot’s ability to accurately control costs for the C-5 workload. In our view, the procedures used in the C-5 competition reasonably addressed the issue of public sector cost accountability. Further, private sector participants viewed the $153-million overhead cost savings credit given to Warner Robins as unrealistically high and argued that the selection did not account for, or put a dollar value on, certain identified risks or weaknesses in the respective proposals. We found that the Air Force followed its evaluation scheme in making its overhead savings adjustment to the Warner Robins proposal and that the Air Force’s treatment of risk and weaknesses represented a reasonable exercise of its discretion under the solicitation. Although the public sector source was selected to perform the C-5 workload, it questioned some aspects of the competition. Warner Robins officials stated that they were not allowed to include private sector firms as part of their proposal. Additionally, the officials questioned the Air Force requirement to use a depreciation method that resulted in a higher charge than the depreciation method private sector participants were permitted to use. Finally, they questioned a $20-million downward adjustment to the depot’s overhead cost, contending that it was erroneous and might limit the Air Force’s ability to accurately measure the depot’s cost performance. While the issues raised by the Warner Robins depot did not have an impact on the award decision, the $20-million adjustment, if finalized, may cause the depot problems meeting its cost objectives in performing the contract. The Air Force maintains that the adjustment was necessary based on its interpretation of the Warner Robins proposal. Depot officials disagree. At this time, the Air Force has not made a final determination as to how to resolve this dispute.
DOD decided to issue a single solicitation combining multi-aircraft and commodity workloads at the Sacramento depot and a single solicitation for multi-engine workloads at the San Antonio depot. Under the 1998 Defense Authorization Act, DOD issued the required determinations that the workloads at these two depots “cannot as logically and economically be performed without combination by sources that are potentially qualified to submit an offer and to be awarded a contract to perform those individual workloads.” As required, we reviewed the DOD reports and supporting data and issued our report to Congress on January 20, 1998. We found that the accompanying DOD reports and supporting data do not provide adequate information supporting the determinations. First, the Air Force provided no analysis of the logic and economies associated with having the workload performed individually by potentially qualified offerors. Consequently, there was no support for the Department’s determination that the individual workloads cannot as logically and economically be performed without combination. Air Force officials stated that they were uncertain as to how they would do an analysis of performing the workloads on an individual basis. However, Air Force studies indicate that the information to make such an analysis is available. For example, in 1996 the Air Force performed six individual analyses of depot-level workloads performed by the Sacramento depot to identify industry capabilities and capacity. The workloads were hydraulics, software, electrical accessories, flight instruments, A-10 aircraft, and KC-135 aircraft. As a part of the analyses, the Air Force identified sufficient numbers of qualified contractors interested in various segments of the Sacramento workload to support a conclusion that it could rely on the private sector to handle these workloads. Second, the reports and available supporting data did not adequately support DOD’s determination.
For example, DOD’s determination relating to the Sacramento Air Logistics Center states that all competitors indicated throughout their workload studies that consolidating workloads offered the most logical and economical performance possibilities. This statement was based on studies performed by the offerors as part of the competition process. However, one offeror’s study states that the present competition format is not in the best interest of the government and recommends that the workload be separated into two competitive packages. On February 24, 1998, the Air Force provided additional information in support of the Department’s December 19, 1997, determination. This information included two documents: (1) a report containing the rationale for combining the San Antonio engine workloads into a single solicitation and (2) a white paper containing the rationale for combining the Sacramento aircraft and commodity workloads. These two papers supported the testimony provided by DOD before the Military Readiness Subcommittee of the House National Security Committee on February 25, 1998. During our February 24, 1998, testimony before the same subcommittee, we were asked to review the additional support provided by the Air Force. We are in the process of conducting that review. In this regard, we have several preliminary observations. First, the information contained within the two papers does provide supporting data for the logic and the economies of combining the workloads in the solicitations if the workloads are all to be performed at one location. While we are encouraged to see that the Air Force has provided a substantial amount of information supporting this position, we would have expected to see more analysis relating to the consideration of other feasible alternatives. Other alternatives that appear to be logical and potentially cost-effective were not considered or were considered only in a general manner.
For example: (1) solicitations with alternate offer schedules permitting the competitors to offer on any combination of workloads, from one to all, were not considered; (2) transferring some of the workloads to another public depot outside the competition process, an option that was discussed in at least one offeror’s study report, was not considered; and (3) dividing the Sacramento workload into two, rather than five separate work packages, as was done for the San Antonio acquisition strategy, was given only general consideration. Second, the papers stated that managing multiple source selections would lengthen the competition process and increase costs. However, the papers did not discuss the option of having program management teams at two different locations and different source selection teams managing each of the individual competitions. Using the two-package scenario previously mentioned may be a logical and cost-effective alternative. Also, the papers stated that some of the workloads are too small and sporadic to attract interested offerors unless such workloads are combined with more attractive work. The option of transferring these workloads outside the competition process was not considered, although their inclusion in the work package may increase the cost of other competition workloads. Third, regarding cost issues, the Air Force analysis projected an increased cost from issuing separate solicitations of $55.3 million to $130.7 million at Sacramento and $92.4 million to $259.6 million at San Antonio. However, all recurring cost elements were not considered. For example, the analysis did not consider the additional layer of cost associated with subcontracting under the combined work package scenario. Since these costs could be significant and could exceed the projected savings estimated by the Air Force from using combined workloads, it is important that they be considered.
Additionally, the Air Force analysis assumed that the cost of operations would be the same for each option, while the possibility of increased competition could reduce the costs for unbundled workloads. Lastly, Air Force Audit Agency officials informed us that they performed a management advisory service review of the papers. They stated that given the 2-day time frame available they did “a cursory review” of the source documents and a general assessment of the logic of the two alternatives discussed in the Air Force papers. This review assessed the logic of the two alternatives in each case but did not include an audit of the underlying data or a consideration of other feasible alternatives. As part of our mandated review of the solicitations and awards for the Sacramento and San Antonio engine workloads, we reviewed DOD reports to Congress in connection with the workloads, draft requests for proposals, and other competition-related information. Further, we discussed competition issues with potential public and private sector participants. These participants raised several concerns that they believe may affect the competitions. Much remains uncertain about these competitions, and we have not had the opportunity to evaluate these issues, but I will present them to the Subcommittee. The 1998 Defense Authorization Act modifies 10 U.S.C. 2466 to allow the services to use up to 50 percent of their depot maintenance and repair funds for private sector work. However, the act also provides for a new section (2460) in title 10 to establish a statutory definition of depot-level maintenance and repair work, including work done under interim and contractor logistic support arrangements and other contract depot maintenance work, and requires, under 10 U.S.C. 2466, that DOD report to Congress on its public and private sector workload allocations and that we review and evaluate DOD’s report.
These changes, which will affect the assessment of public and private sector mix, are in effect for the fiscal year 1998 workload comparison, and DOD must submit its report to Congress for that period by February 1, 1999. Determining the current and future public-private sector mix using the revised criteria is essential before awards are made for the Sacramento and San Antonio workloads. Preliminary data indicates that, using the revised criteria, about 47 to 49 percent of the Air Force’s depot maintenance workload is currently performed by the private sector. However, the Air Force is still in the process of analyzing workload data to determine how much additional workload can be contracted out without exceeding the 50 percent statutory ceiling. In December 1996, we reported that consolidating the Sacramento and San Antonio depot maintenance workloads with existing workloads in remaining Air Force depots could produce savings of as much as $182 million annually. Our estimate was based on a workload redistribution plan that would relocate 78 percent of the available depot maintenance work to Air Force depots. We recommended that DOD consider the savings potential achievable on existing workloads by transferring workload from closing depots to the remaining depots, thereby reducing overhead rates through more efficient use of the depots. The Air Force revised its planned acquisition strategy for privatizing the workloads in place and adopted competitive procedures that included incorporation of an overhead savings factor in the evaluation. During the recent C-5 workload competition evaluation, the Air Force included a $153-million overhead savings estimate for the impact that the added C-5 workload would have on reducing the cost of DOD workload already performed at the military depot’s facilities.
The overhead savings adjustment, which represented estimated savings over the 7-year contract performance period, was a material factor in the decision to award the C-5 workload to Warner Robins. The private sector offerors questioned the military depot’s ability to achieve these savings. In response to private sector concerns, the Air Force is considering limiting the credit given for overhead savings in the Sacramento and San Antonio competitions. For example, in the draft Sacramento depot workload solicitation, the Air Force states that “the first year savings, if reasonable, will be allowed. The second year savings, if supportable, will be allowed but discounted for risk. For three years and beyond, the savings, may be allowed if clearly appropriate, but will be considered under the best-value analysis.” Questions have been raised about the structure of the draft solicitations. One concerns the proposed use of best-value evaluation criteria. The draft solicitations contain selection criteria that differ from those used in the recent competition for the C-5 workload. They provide that a contract will be awarded to the public or private offeror whose proposal conforms to the solicitation and is judged to represent the best value to the government under the evaluation criteria. The evaluation scheme provides that the selection will be based on an integrated assessment of the cost and technical factors, including risk assessments. Thus, the selection may not be based on lowest total evaluated cost. For the C-5 solicitation, the public offeror would receive the workload if its offer conformed with the solicitation requirements and represented the lowest total evaluated cost. The questions concern the propriety of a selection between a public or private source on a basis other than cost. 
Other questions concern whether multiple workloads should be packaged in a single solicitation and whether the inclusion of multiple workloads could prevent some otherwise qualified sources from competing. As noted, the solicitations are still in draft form. As required by the 1998 act, we will evaluate the solicitations once issued, in the context of the views of the relevant parties, to determine whether they are in compliance with applicable laws and regulations. Mr. Chairman, we are working diligently to meet the Committee’s mandates and to safeguard sensitive Air Force information that is necessary to accomplish this work. We are prepared to discuss with the Air Force the steps that can be taken to safeguard the material and facilitate the source selection process while allowing us to carry out our statutory responsibility. However, we simply will be unable to meet our mandated reporting requirements unless we are provided timely access to this information. This concludes my prepared remarks. I will be happy to answer your questions at this time.
|
GAO discussed the public-private competitions for workloads at two maintenance depots identified for closure, focusing on: (1) the problems GAO is having in obtaining access to Department of Defense (DOD) information; (2) the recent competition for C-5 aircraft workload and GAO's assessment of it; (3) the adequacy of DOD's support for its determination that competing combined, rather than individual workloads of each maintenance depot is more logical and economical; and (4) concerns participants have raised about the upcoming competitions for the workloads at the air logistics centers in Sacramento, California, and San Antonio, Texas. GAO noted that: (1) its lack of access to information within DOD is seriously impairing its ability to carry out its reporting requirements; (2) GAO completed, with difficulty, its required report to Congress concerning DOD's determination to combine individual workloads at two closing logistics centers into a single solicitation at each location; (3) if DOD continues to delay and restrict GAO's access to information it needs to do its work, GAO will be unable to provide Congress timely and thorough responses regarding the competitions for the Sacramento and San Antonio depot maintenance workloads; (4) in assessing the competition for the C-5 aircraft workloads, GAO found that: (a) the Air Force provided public and private sources an equal opportunity to compete for the workloads without regard to where the work could be done; (b) the Air Force's procedures for competing the workloads did not appear to deviate materially from applicable laws or the Federal Acquisition Regulation; and (c) the award resulted in the lowest total cost to the government, based on Air Force assumptions at the time; (5) for the remaining workloads at Sacramento and San Antonio, DOD reports and other data do not support the Defense Secretary's determination that using a single contract with combined workloads is more cost-effective than using separate 
contracts for individual workloads; (6) much remains uncertain about the upcoming competitions for the Sacramento and San Antonio depot maintenance workloads; (7) potential participants have raised several concerns that they believe may affect the conduct of the competitions; (8) one concern is the impact of the statutory limit on the amount of depot maintenance work that can be done by non-DOD personnel; (9) the Air Force has not yet determined the current and projected public-private sector workload mix using criteria provided in the 1998 Defense Authorization Act, but is working on it; (10) nonetheless, preliminary data indicates there is little opportunity to contract out additional depot maintenance workloads to the private sector; (11) another concern is the Air Force's proposed change in the overhead savings the Department may factor into the cost evaluations; (12) for the C-5 workload competition, overhead savings were considered for the duration of the performance period; and (13) however, for the Sacramento and San Antonio competitions, the Air Force is considering limiting overhead savings to the first year and possibly reducing the savings for the second year.
|
Federal agencies need effective human capital systems to support enhanced performance, ensure accountability, and help achieve their missions. Through our past work, we have identified five key components of human capital systems.

Strategic workforce planning: The steps an agency takes to (1) align its human capital program with its current and emerging mission and programmatic goals and (2) develop long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals.

Training: Developing a strategic approach to establish training priorities and leveraging investment in training to achieve agency results; identifying specific training initiatives that improve individual and agency performance; ensuring effective and efficient delivery of training opportunities in an environment that supports learning and change; and demonstrating how training efforts contribute to improved performance and results.

Recruitment and hiring: Developing and implementing strategies to advertise positions and attract top candidates; assessing applicants’ relative competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified candidates; using a variety of candidate assessment tools, such as interviews, to make a selection; and coordinating the process of bringing a new hire on board.

Performance management: Planning work and setting individual performance expectations, monitoring performance throughout the year through ongoing feedback, developing individuals’ capacities to perform, and rating and rewarding individual performance.

Diversity management: A process intended to create and maintain a positive work environment where the similarities and differences of individuals are valued, so that all can reach their potential and maximize their contributions to an organization’s strategic goals and objectives.
An agency’s human resources organization, the agency’s leaders, and staff have roles to play in implementing these components. For example, the human resources organization, with the participation of agency management, develops performance management policies and procedures, but the agency’s leaders implement those policies and procedures. Additionally, the agency’s leaders identify a requirement to recruit and the skill set needed to fill a vacancy, but the human resources organization performs the mechanics of posting the vacancy and ensuring that agency procedures are followed in reviewing applications and making the selection. The agency’s staff may be involved in vetting new system proposals or participating in process improvement teams. In FAA, the Office of Human Resource Management has the primary responsibility for developing and implementing the agency’s human capital system. However, senior officials in each of FAA’s four major entities, called lines of business, oversee the implementation of human capital procedures within their respective organizations. In September 1993, the National Performance Review concluded that federal budget, procurement, and personnel rules prevented FAA from reacting quickly to the needs of the air traffic control system. In response to a congressional mandate resulting from this review, the Secretary of Transportation prepared a report in 1995 that concluded that the most effective reform would be to exempt FAA from most federal personnel rules and procedures contained in title 5 of the United States Code. On November 15, 1995, Congress exempted FAA from a number of title 5 rules and procedures (see table 1) and directed the FAA Administrator to develop and implement a new personnel management system.
The law also required that FAA’s new personnel management system address the unique demands of the agency’s workforce and, at a minimum, provide greater flexibility in the compensation, hiring, training, and location of personnel. Subsequent legislation added the requirement for FAA to negotiate with its labor unions any changes made to its personnel management system. On April 1, 1996, FAA introduced its new personnel management system and, over the next several years, initiated a number of efforts to address the following reform objectives:

FAA acquires, develops, and deploys required expertise (people) where and when needed.

Human resource systems support employees’ achievement of goals.

FAA has effective leadership and management.

FAA is perceived as a desirable place to work.

Human resource management systems are efficient and adaptable.

In 2003, we reported that FAA had implemented many of its reform initiatives, but changes to its compensation system, workforce planning initiatives, and implementation of a new performance management system remained incomplete. FAA had (1) developed a new broadbanded, performance-based pay structure aimed at providing a wider range of pay and greater managerial flexibility to recruit, retain, and motivate employees and (2) implemented the pay structure in varying forms for about 75 percent of its workforce. FAA had planned to cover the remainder of the workforce under the new compensation plan as it negotiated new labor agreements with unions representing those workforces. As of July 2009, 88 percent of FAA’s workforce was under one or more components of performance-based compensation.
However, FAA and the National Air Traffic Controllers Association (NATCA)—the union representing FAA’s air traffic controllers—agreed to a new contract in September 2009 that removed nearly 16,000 employees from performance-based compensation, thereby reducing the percentage of the workforce under performance-based compensation to about 55 percent. As part of its reform efforts, FAA also developed a new performance management system consisting of a narrative evaluation of employees’ performance against performance standards combined with feedback and coaching. FAA began developing its new performance management system in 1999 and first implemented it within segments of the agency in 2001. At the time of our 2003 report, about 20 percent of FAA’s workforce was under the new performance management system. As of July 2009, FAA had expanded the performance management system’s coverage to more than 95 percent of its workforce. Under FAA’s new contract with NATCA, air traffic controllers continue to be covered by the new performance management system. While the contract requires that controllers’ performance be evaluated against written standards, and that the supervisor and controller discuss the evaluation, the contract does not provide for interim or midpoint performance feedback during the performance appraisal cycle. FAA also sought, received, and implemented flexibilities in hiring, relocating, and training employees. However, at the time of our 2003 report, FAA had not completed efforts to incorporate these flexibilities into strategic workforce plans. Workforce planning should include developing strategies for integrating hiring, recruiting, training, and other human capital activities in a manner that meets the agency’s long-term objectives to ensure that appropriately skilled employees are available when and where they are needed to meet an agency’s mission.
We reported that FAA had established agencywide corporate policies and guidance for developing workforce plans for executives, managers, and supervisors and for specific occupations, but these initiatives were at varying degrees of implementation. As we discuss later in this report, FAA has developed workforce plans for its major organizational segments and for specific workforces. In 2003, we also noted that FAA lacked baseline data against which to measure the impact of its initiatives, and that FAA had not adequately linked initiative goals to the agency’s mission goals. While FAA’s human resource officials maintained that the new compensation system addressed recruitment and retention objectives, FAA could not support this assertion with data because the agency did not establish any baseline data from which to measure improvements. Consequently, we were unable to evaluate the effectiveness of FAA’s reform efforts. We recommended that FAA develop empirical data and establish performance measures, develop linkages between human capital reform initiatives and program goals, establish time frames for data collection and analysis, and hold managers accountable for the results of human capital management efforts. FAA has implemented these recommendations. In the coming years, FAA risks losing significant amounts of institutional knowledge as its employees retire. The dramatic shifts that are occurring in the nation’s population are reflected in FAA’s workforce, where more than three-fourths of its employees are age 40 or older. By fiscal year 2013, FAA projects that 38 percent of its employees who perform work that is critical to FAA’s mission will be eligible to retire (see table 2). 
For example, in the next 5 years, 42 percent of air traffic controllers, who ensure the safe and smooth movement of air traffic in the air and on the ground at airports; 31 percent of airway transportation system specialists, who install and maintain air traffic control systems; and 48 percent of aviation safety inspectors, who perform critical safety oversight of the aviation industry and air operators, are projected to be eligible to retire. However, the current economic downturn could affect FAA’s retention and recruitment. FAA’s losses from retirement and other causes, for all occupations, declined by 27 percent during the first 4 months of fiscal year 2009, compared with the corresponding months of the prior fiscal year. Additionally, a July 2009 vacancy announcement for controllers attracted nearly 9,000 applications and a recruitment fair for people with disabilities was successful. In addition to replacing institutional knowledge, FAA will need to obtain expertise to support the transition of the national airspace system to NextGen. NextGen envisions a fundamental change in air traffic management by moving to a structure that is adaptable to growth in operations as well as shifts in demand, and that is network-centric, meaning everyone using the system has easy access to the same information at the same time. NextGen will entail a number of transitions—from ground-based to satellite-based navigation and surveillance; from voice communications to digital data exchange; from a fragmented weather forecast delivery system to a system that uses a single, authoritative source; and from limited operations in poor visibility conditions to more operations that can proceed in adverse weather. While FAA plays a central role in these transitions, the Departments of Defense, Commerce, and Homeland Security; the National Aeronautics and Space Administration; and the White House Office of Science and Technology Policy also participate. 
In previous work, we questioned whether FAA had the required systems integration expertise, given the complexity of this effort. FAA responded by engaging the National Academy of Public Administration (NAPA) to identify (1) the skills needed to accomplish the transition to NextGen and (2) strategies for acquiring the necessary workforce competencies. NAPA assembled a panel of experts (NAPA Panel) to perform the study and reported on the panel’s findings in September 2008. The NAPA Panel found that acquisitions will be an important element in NextGen. FAA defines its acquisition workforce broadly, following the Office of Management and Budget’s guidance, to include program and project managers, researchers and engineers, business and financial analysts, contracting officers and specialists, contracting officer’s technical representatives, integrated logistics specialists, test and evaluation personnel, and other specialized support. Consequently, the acquisition workforce includes individuals who determine requirements; plan acquisition strategies; establish business relationships to obtain needed goods and services; and ensure that the government’s needs are met by testing, evaluating, and monitoring contractor performance. FAA’s acquisition workforce also includes experts in subject matter areas, such as finance, who support the business process. FAA recently estimated that it will need to hire several hundred additional staff to implement NextGen, and most of these staff will be part of the acquisition workforce. However, as FAA attempts to hire these staff, it will be competing with a projected governmentwide hiring surge. For example, the Partnership for Public Service’s survey, “Where the Jobs Are 2009,” indicates that from fiscal years 2010 through 2012, the federal government could hire more than 16,000 employees to fill needs in general administration/program management; engineering (electrical, aerospace, and computer); finance; and contracts. 
FAA is developing a 5-year acquisition workforce plan that the agency expects will identify, among other things, challenges and strategies for meeting workforce requirements. FAA expects to complete the plan in October 2009. FAA's human capital system employs many leading practices in strategic workforce planning, training, recruitment and hiring, and performance management. FAA has implemented few leading diversity management practices, but is developing a diversity outreach plan. The agency's consistent placement near the bottom in published best places to work rankings could pose challenges in recruiting, motivating, and retaining employees to replace those retiring and to meet current and future mission requirements. FAA's human capital system mirrors many leading human capital practices in strategic workforce planning, training, recruitment and hiring, and performance management, but FAA officials with responsibility for implementing human capital procedures in major segments of the agency and union representatives criticized FAA's practices in performance management. Union representatives also criticized FAA's training practices. Pending legislation to reauthorize FAA contains provisions related to a number of these practices. Strategic workforce planning focuses on developing long-term strategies for acquiring, developing, and retaining an organization's total workforce to meet the needs of the future. Such planning is essential to ensure that an agency's human capital program uses its workforce's strengths and addresses related challenges in a manner that is clearly linked to achieving missions and goals. Table 3 lists the key leading strategic workforce management practices that we identified in our past work and examples of FAA's activities. See appendix II for an expanded list of FAA's activities that align with key leading practices. FAA has made progress in strategic workforce planning since 2003, when we issued our previous report.
FAA has established an agencywide strategic human capital plan that lays out human capital goals, and strategies and initiatives under each goal. FAA annually updates its strategies and initiatives to maintain alignment with the strategic plan— the Flight Plan—and to reflect emerging or changing mission demands. FAA has also established workforce plans for each of its four major entities, called lines of business—the Air Traffic Organization (ATO), Aviation Safety, Airports, and Commercial Space Transportation—and has established plans to manage specific workforces, such as air traffic controllers, and technical operations. The House has passed and the Senate Committee on Commerce, Science, and Transportation has reported out FAA reauthorization bills that would require the agency to undertake additional workforce planning. For example, the bills would require FAA to (1) study and prepare a report to Congress on the frontline manager staffing requirements at air traffic control facilities, taking into account, among other things, the facility type and complexity of air traffic handled and managerial responsibilities, and (2) develop a staffing model for safety inspectors. Additionally, the bills would require FAA to make the appropriate arrangements with the National Academies of Science for studies of the assumptions and methods used to determine the staffing needs for controllers and airway transportation systems specialists. Training and development programs can assist the agency in achieving its mission and goals by improving individual and, ultimately, organizational performance. In our past work, we identified numerous leading training practices, which we have segmented into the four key components shown in table 4, along with examples of FAA’s activities that align with these key components. See appendix II for an expanded list of FAA’s activities that align with key leading practices. 
In 2005, we reviewed FAA’s technical training for aviation safety inspectors and found that the program followed some leading practices. For example, the training efforts are intended to support FAA’s goals for improving aviation safety, and FAA has established clear accountability for ensuring that inspectors have access to technical training. Additionally, the technical training program contains an evaluation component. However, we made several recommendations aimed at, among other things, improving the timeliness of training, improving FAA’s identification of gaps in inspectors’ technical knowledge, and developing measures of the impact of training on achieving organizational goals. FAA has implemented most of these recommendations. For example, FAA has taken steps to deliver training in a more timely manner, such as developing new Web-based courses that inspectors can complete when the training is needed. Additionally, in 2007, FAA conducted a feasibility analysis that explored different methods to measure training’s impact on achieving organizational goals. In August 2009, FAA was working to address the remaining recommendations by providing more guidance on when accepting training for in-kind services is appropriate and revising inspector guidance to clarify that free training does not preclude FAA from fulfilling its oversight and enforcement role. The Senate’s FAA reauthorization bill would require FAA to report on the training provided to safety inspectors. Union representatives provided comments regarding how FAA designs and implements training. 
The President of the Professional Aviation Safety Specialists (PASS)—the union representing, among others, FAA employees who maintain air traffic control equipment—said that the technical training for airway systems specialists is often unnecessarily time-consuming and costly, because FAA customizes the training to make it unique to FAA and then requires that employees travel to the FAA Academy to receive technical training. The union President said that FAA could streamline the process by allowing staff to take training courses from private vendors who offer the same training at locations closer to their home office. FAA noted that the academy is the only location that replicates every piece of equipment that aviation systems safety specialists use, and that equipment in air traffic control facilities cannot be taken off-line for training. However, FAA's Director of Technical Training and Development told us that FAA is exploring how to validate locally provided courses for equivalency to the training provided at the FAA Academy. Additionally, FAA is increasing the availability of Web-based training, simulations, and other alternative training methodologies that will allow greater opportunities for on-site training, according to another FAA training official. We are currently reviewing airway transportation system specialist training under another engagement. NATCA representatives criticized how FAA implements controller training. In their opinion, FAA relies too much on memorandums and other impersonal methods, rather than in-person training that would allow staff to directly pose questions to instructors and engage in open discussion on topics that require clarification. FAA's Director of Technical Training and Development believes this criticism to be an overstatement, and noted that FAA offers many specialized courses at the FAA Academy and fills the spaces allocated.
NATCA representatives also believe that controllers have fewer opportunities to attend training because there are not enough experienced controllers to handle air traffic while others attend training. A 2008 report from the Department of Transportation Inspector General questioned whether FAA had sufficient numbers of experienced controllers to train the large numbers of new controllers that it was hiring. The report noted that the hiring process was outpacing the capabilities of many air traffic facilities to efficiently process and train new hires. The Inspector General made a number of recommendations, and FAA agreed or partially agreed with most of them. FAA notes that the agency continues to monitor its hiring and training programs to achieve a balance between a facility's trainees and the facility's training capacity, and to help ensure that trainees progress through each stage of training while ensuring safety. The House and Senate FAA reauthorization bills would require that FAA study the adequacy of training provided to air traffic controllers. In a highly competitive job market, having an effective hiring process can help an agency compete for talented people who have the requisite knowledge and up-to-date skills to accomplish missions and achieve goals. Table 5 lists the key recruitment and hiring practices that we have identified in our past work and provides examples of FAA's activities. See appendix II for an expanded list of FAA's activities that align with key leading practices. In recent years, FAA has used its flexibilities mostly to hire, relocate, and retain air traffic controllers. However, FAA also has used flexibilities to hire expertise in other fields, such as program management, aerospace engineering, and environmental protection. The House and Senate FAA reauthorization bills would require FAA to increase the number of safety-related positions, such as safety inspectors, commensurate with available funding.
An effective performance management system can be a strategic tool to drive internal change and achieve desired results. Our work has identified numerous leading practices in performance management, which we summarize in table 6. The table also provides examples of FAA's activities that align with these key leading practices. See appendix II for an expanded list of FAA's activities that align with key leading practices. FAA officials responsible for implementing human capital procedures within the lines of business, as well as representatives of several unions, criticized components of FAA's performance management system. For example, because FAA's system calls for a performance rating of either "meets expectations" or "does not meet expectations," an official from FAA's largest line of business and union officials characterized the system as "pass/fail." Because nearly everyone is rated "meets expectations," they believe that the system does not make distinctions above the "meets expectations" level. For employees who have met expectations, FAA further distinguishes performance by distributing performance-based "superior contribution increases" (SCI). FAA distributes the SCI based on supervisors' summaries of employee performance, which draw on (1) personal observations of employees' performance and contributions and (2) the employees' self-assessments. The supervisor's summary focuses on three areas: collaboration, customer service, and impact on organizational success. A fourth area, management and leadership, applies only to managers. Up to 65 percent of employees who meet expectations can receive either a 1.8 percent or 0.6 percent SCI.
FAA’s practice of awarding pay increases based on employee contributions aligns with its reform objective that human resource systems support employees’ achievement of organizational goals, but FAA officials with responsibility for implementing human capital procedures in the ATO, Aviation Safety, and Airports lines of business, as well as union representatives, expressed fairness concerns about the impacts of these increases, similar to the concerns on which we reported in 2003. The FAA officials believe that the SCI is not large enough to motivate performance, and because more than one-half of the employees receive the increase, it creates morale problems among those employees whom officials believe are solid performers, but received no SCI. Union representatives said that, from the employee’s perspective, it is not clear how supervisory recommendations are translated into decisions about which employees receive these increases. Union representatives also expressed fairness concerns in saying that, in some cases, the same employees receive the increases every year and that, in other cases, increases appear to be given on a rotating basis, without respect to level of performance. All employees who meet expectations also receive an Organizational Success Increase (OSI), which FAA provides when the agency has met at he least 90 percent of its performance targets. The total funding pool for t OSI consists of the amount of the governmentwide General Schedule increase for a given year, plus an additional 1 percent. As with t OSI aligns with the reform objective of supporting employees’ achievement of organizational goals. However, FAA and union officials criticized the OSI component of performance pay because, in their opinion, it rewards or penalizes employees for organizational performanc that they cannot influence. 
For example, FAA may withhold a portion of the OSI because the agency did not achieve its target for reducing operational errors, even though there are many employees, such as contract specialists, whom FAA officials and union representatives believe have no influence over such activities. FAA has taken actions to improve the implementation of its performance management system. For example, in 2008, FAA issued a memorandum to senior leadership and managers emphasizing policy requirements for, and reiterating the importance of, midcycle progress reviews. Additionally, an FAA organization is developing guidance for managers on providing feedback and, according to FAA officials, managers receive automatic notifications that midpoint feedback and end-of-cycle reviews are due to help ensure that these activities take place. FAA also recently initiated a series of briefings for employees and managers regarding the performance management and pay for performance systems. Additionally, FAA is assessing the percentage of employees who receive midterm feedback to identify whether corrective action is needed. Recently, FAA developed an action plan aimed, in part, at creating a performance culture within the agency. The plan contains steps to improve managers' use of the performance management system. We discuss this action plan in more detail later in this report. Diversity management is a process intended to create and maintain a positive work environment where the similarities and differences of individuals are valued, so that all can reach their potential and maximize their contributions to an organization's strategic goals. The concept of managing diversity focuses on inclusion, which involves engaging the talents, beliefs, backgrounds, and capabilities of individuals and groups working toward common goals and therefore serves as a complement to equal employment opportunity (EEO).
Implementing effective diversity management helps an organization foster a work environment in which people are enabled and motivated to contribute to mission accomplishment and provides both accountability and fairness for all employees. Through our past work, we identified the following key practices that experts agree are leading diversity management practices. Develop a diversity strategy and plan that are aligned with the organization's strategic plan. Establish a recruitment program that attracts a supply of qualified, diverse applicants for employment. Use employee involvement to drive diversity throughout the organization. Ensure that top leadership provides a vision of diversity that it demonstrates and communicates throughout the organization. Conduct diversity training to inform and educate management and staff about diversity. Establish a set of quantitative and qualitative measures of the impact of various aspects of an overall diversity program. Establish a means to ensure that leaders are held accountable for diversity by linking their performance assessment and compensation to the progress of diversity initiatives. Include in succession planning an ongoing, strategic process for identifying and developing a diverse pool of talent for an organization's potential future leaders. Link diversity to performance by understanding that a more diverse and inclusive work environment can yield greater productivity and help improve individual and organizational performance. FAA's ranking near the bottom in Best Places to Work in the Federal Government in 2007 and 2009, as published by the Partnership for Public Service (the Partnership) and American University's Institute for the Study of Public Policy Implementation (ISPPI), could present a barrier to recruiting, motivating, and retaining the talented employees that FAA needs to meet future mission requirements. The Partnership and ISPPI develop their ranking on the basis of analysis of OPM's biannual Federal Human Capital Survey results.
The survey contains over 80 items that gauge employee satisfaction with pay, leadership, and collaboration, among other things. The Partnership and ISPPI ranked FAA 204th out of 222 agencies in 2007 and 214th out of 216 agencies in 2009. These published rankings are important to FAA because an agency’s reputation is a key factor in recruiting and hiring applicants. A recent Partnership report noted that a good reputation is the most frequently mentioned factor in choosing potential employers, and agencies with high satisfaction and engagement scores were seen as desirable by college graduates seeking employment. The Partnership report also noted that college students are rating some government agencies as ideal employers. Similarly, the Merit Systems Protection Board (MSPB) reported that employees’ willingness to recommend the federal government or their agency as a place to work can directly affect an agency’s recruitment efforts, the quality of the resulting applicant pool, and the acceptance of employment offers. In 2005, compared with 1989, MSPB found that considerably more federal employees would recommend the government as a place to work, and that more federal employees reported satisfaction with their pay. Moreover, the job security that federal employment offers is a major selling point in the current economic downturn. While FAA generally follows leading recruitment and hiring practices, FAA may be able to take only limited advantage of these favorable trends, since only about one-half of FAA employees’ responses to the OPM survey item, “I would recommend my organization as a good place to work,” were positive in 2008. About 65 percent of the employees in the rest of the federal government responded positively, putting FAA about 15 percentage points behind other agencies. Moreover, MSPB noted that prospective employees would rather work for an agency billed as one of the best places to work as opposed to an agency at the bottom of the list. 
Clearly, when Congress passed legislation allowing FAA to implement a new personnel management system, FAA recognized the importance of this point by establishing the reform objective that FAA be perceived as a desirable place to work. However, FAA’s low rankings in 2007 and 2009 would not indicate to prospective applicants that FAA is perceived as a desirable place to work. FAA employee responses to OPM’s 2008 Federal Human Capital Survey placed the agency well behind the rest of the federal government in overall job and organizational satisfaction, as well as satisfaction with their leaders and their leaders’ competencies in communications and building teamwork and cooperation. FAA is taking steps that could improve employee satisfaction, but has not established accountability for improvements. Compared with employees in the rest of the federal government, FAA employees indicated less satisfaction with key items in OPM’s 2008 Federal Human Capital Survey. FAA employees provided 59 percent positive responses regarding overall job satisfaction—9 percentage points lower than employees in the rest of the federal government, and 41 percent positive responses regarding overall satisfaction with their organization—17 percentage points lower than employees in the rest of the federal government. Moreover, FAA employees were less positive concerning many of the items that OPM has identified as indicators of an agency’s ability to recruit, motivate, and retain employees. On the basis of its analysis of its past two Federal Human Capital Surveys, OPM determined that responses to 16 items—called “impact items”—really make a difference in whether people want to come, stay, and contribute their fullest to an agency. FAA’s percentage of positive responses to these impact items and the difference between FAA’s percentage of positive responses and that of the rest of the federal government appear in figure 1. 
FAA responses to the top two items in figure 1 indicate that FAA employees said they like the work they do and derive considerable satisfaction from it; and FAA employees’ satisfaction on these topics was close to that of employees in the rest of the federal government. However, for the remainder of the items, FAA employees expressed less satisfaction than employees in the rest of the federal government. These lower levels of workplace satisfaction represent potential hurdles when competing for talent with other federal agencies. FAA’s strained labor-management relations could be contributing to the low percentages of positive responses. Several bargaining units have had contract negotiations stretch over many years with no settlement. For example, nearly 4,000 employees, represented by PASS, remain under the provisions of contracts that date back to 1988 and 1993, while new contracts are under negotiations. Additionally, in 2006, FAA encountered difficulties in negotiating a new labor contract with NATCA, which represents about one-third of FAA employees. In May 2009, FAA and NATCA began mediated bargaining to reach agreement on a new contract. In September 2009, FAA and NATCA signed a new 3-year contract. FAA and NATCA reached agreement on most contract items, but required binding decisions from the mediators for some items, including compensation. FAA views the new contract as a framework for helping to meet the challenges of implementing NextGen. While improvement in any of the impact items that OPM identified could help FAA improve its attractiveness as an employer of choice, the items for which FAA is farthest behind the rest of the federal government provide a focus for FAA to target its improvement efforts. Those items revolve around employee perceptions of their leaders and the leadership competencies of communication and building teamwork and cooperation. 
Research has shown that employees who are led by strong leaders are more satisfied, engaged, and loyal than employees with weak leaders. FAA employees provided 28 percent positive responses regarding satisfaction with the policies and practices of senior leaders, and 33 percent positive responses regarding having a high level of respect for senior leaders. These two items were the farthest behind the positive responses to impact items from the rest of the federal government. Additionally, positive responses to "How good a job do you feel is being done by your immediate supervisor or team leader?" were 10 percentage points behind the positive responses from the rest of the federal government. Consistent with FAA's reform objective, FAA has identified 16 leadership competencies grouped under 4 dimensions (see fig. 2). FAA assessed the competency levels of its leaders in 2005, 2007, 2008, and 2009. The results showed that the communications competency approached or slightly exceeded the agency target level each year. However, as figure 1 shows, employee perceptions of communications from managers remained 12 percentage points behind the rest of the federal government, suggesting that further work remains. The senior executives who assisted the NAPA Panel also perceived a need for FAA's leaders to improve communications about NextGen. They pointed out that FAA's leaders will need to better communicate a clear vision for NextGen, better define what it is, and get support and buy-in from staff at all levels of the organization. FAA officials believe that the NextGen Implementation Plan, issued in January 2009, addresses these concerns. In developing the plan, FAA's objective was to allow a broad audience to gain a common understanding of NextGen. The plan provides technical information in its appendixes, which officials said address the needs of specific stakeholders.
The way leaders communicate with their employees can impact employees' perceptions of leaders' honesty and integrity, which, in turn, can affect the level of employees' respect for their senior leaders—another impact item among those for which FAA's responses were farthest behind the rest of the federal government. The president of a local union of the American Federation of State, County, and Municipal Employees (AFSCME) provided an example that he believes demonstrates the relationship between these competencies. The union representative said that FAA provided little information to employees between 2003 and 2004 as FAA planned for and implemented a sweeping reorganization that created the 36,000-employee ATO. After the reorganization, FAA management characterized ATO as a model organization, but after the first Chief Operating Officer departed, FAA reorganized the responsibilities of ATO's Vice Presidents. The representative believes that FAA's lack of communications regarding these reorganizations likely contributed to a decline in trust of FAA management. FAA employees also provided fewer positive responses for impact items related to the supervisory competency, building teamwork and cooperation. The survey items related to teamwork and cooperation are "How satisfied are you with your involvement in decisions that affect your work?" and "Employees have a feeling of personal empowerment with respect to work processes." We and others have reported on this as an area of governmentwide concern under topics such as empowerment, collaboration, teamwork, and employee engagement. For example, we have reported that empowering employees plays a crucial role in establishing a results-oriented culture.
Additionally, as we previously noted in this report, a leading diversity management practice is to understand that a more diverse and inclusive workforce can yield greater productivity, and that diversity training could provide an awareness of how diverse perspectives can improve organizational performance. Moreover, MSPB has concluded that further engaging the federal workforce is critical as agencies attempt to improve their operations within budget constraints, and as they face increasing numbers of retirement-eligible employees in a labor market where there is intense competition for top talent. MSPB found that engaged employees have less intention to leave their current agency, use less sick leave, and work in agencies that produce better programmatic results. For the purposes of this report, we use collaboration to include teamwork, cooperation, employee engagement, and employee empowerment. Improving collaboration has implications for FAA's successful implementation of NextGen, because 98 percent of FAA's employees who are eligible to be members of a bargaining unit are represented by a union. The NAPA Panel noted that although FAA's labor-management relations had been strained for years, FAA had no clear strategy to engage the unions. In the past, FAA's failure to collaborate with the ultimate users early in a system's design contributed to cost growth and schedule delays. The panel concluded that FAA's success in leading the transition to NextGen will depend, in part, on its willingness to review its past efforts and learn from challenges and mistakes. Although controllers will be end users of NextGen's technology, NATCA representatives told us they perceive that FAA management has little interest in collaboration.
NATCA testified to Congress (1) that NextGen will only be successful if it is done with complete participation and agreement from government, labor, and industry groups and (2) that collaboration will help FAA to identify and address potential issues early in the process, thereby saving time, money, and resources and avoiding safety risks. In FAA’s view, the agency has always sought to include end users in developing NextGen technology and procedures. According to FAA, a dispute arose over whether FAA or NATCA would make the final determination concerning the specific union members who would serve as subject matter experts. Currently, FAA is using controllers as subject matter experts in testing and developing new technology and procedures, but NATCA has not endorsed their participation, according to a senior FAA official. Other union representatives discussed collaboration in more general terms. A PASS representative described a culture in which employees speak only when asked, and said that speaking up and suggesting solutions to problems is not encouraged. The representative also said that many supervisors feel frustrated because they would like to change this culture, but FAA does not provide any encouragement to do so. A representative of the American Federation of Government Employees said the union had been optimistic years ago about a 1996 contract that emphasized partnership, but nothing materialized in subsequent years. The following section of this report describes several actions that FAA is taking to improve elements of workplace satisfaction, including collaboration. Additionally, the Senate FAA reauthorization bill contains provisions aimed at increasing collaboration with employee groups. For example, the bill would require FAA to establish a process for collaborating with employees, who are selected by the bargaining unit, in planning and developing projects that would affect them. 
FAA is taking actions that could improve employee satisfaction with their leaders over time. In 2007, FAA established its Senior Leadership Development Program to provide a pipeline of senior managers qualified to fill executive-level vacancies. To identify emerging leaders among its nonsupervisory employees, FAA initiated the Program for Emerging Leaders in 2009. FAA evaluates applicants for these programs on the basis of their demonstration of the leadership competencies shown in figure 2. Over time, as more of FAA’s leaders graduate from these programs, perspectives of employees about their leaders could change for the better. Evaluating the outcome of FAA’s leadership development programs will help FAA assess its progress in meeting its reform objective to improve leadership and management. To evaluate the outcome of its leadership development programs, FAA collects evaluations of participants’ satisfaction with the training and the classroom experience and obtains progress reviews that focus on the demonstrated achievement of developmental objectives. FAA is also measuring overall programmatic outcomes tied to succession planning goals, such as the ratio of candidates in development to projected vacancies, number of graduates appearing on best-qualified lists, and graduates placed in leadership positions. However, because FAA has so far graduated 1 cohort of 16 participants from its leadership development programs, substantial data are not yet available. To measure the impact of these programs on leadership within the agency, FAA is beginning to track movement in the results of its periodic leadership competency assessments. FAA also has recently taken some steps to create a more collaborative climate. For example, in February 2009, FAA invited NATCA to collaborate on the implementation phase of the En Route Automation Modernization system—a key component of the NextGen transition. 
According to NATCA, FAA and the union have held several constructive negotiation sessions on the system’s implementation. In another example, FAA agreed, in collaboration with NATCA, to provide immunity from discipline for employees who report safety issues under certain conditions, with the program implemented by a joint union-management committee. FAA credits this agreement with allowing significant strides toward a safety management culture, with the support of organized labor. FAA also used a collaborative approach to reach agreement in 2008 on a contract covering headquarters employees represented by AFSCME. This was the first successful contract negotiation since the units were certified in 1999. A representative of PASS also noted some positive initiatives. He said the union recently attended a few meetings regarding development of ATO’s 5-year strategic plan, suggesting that FAA management is starting to embrace the idea that collaboration with union groups results in a better product. Additionally, FAA’s new Administrator is emphasizing employee engagement. FAA has established an Employee Engagement Steering Committee composed of FAA executives. According to FAA, these executives will engage, listen to, and act upon the issues, ideas, suggestions, and recommendations made by employees to improve the FAA work environment, management practices, and organizational culture, with the goal of becoming one of the best places in government to work. Moreover, the Administrator has expressed his commitment to employee engagement in a speech to one of the agency’s largest unions and sent an e-mail to all FAA employees describing a number of initiatives aimed at hearing what employees are thinking. According to FAA Human Resource Management officials, the Administrator also plans to provide monetary awards to individuals and teams that make significant contributions to improving workplace satisfaction. 
FAA’s actions, as we have previously described, align with governmentwide initiatives to improve workplace satisfaction. OPM, in collaboration with the Office of Management and Budget, has requested agencies to develop action plans to increase employee satisfaction in areas where the Federal Human Capital Survey indicated low satisfaction, and to include their plans as part of their fiscal year 2011 budget submissions. FAA has developed a Federal Human Capital Survey 2009-2010 Action Plan, which it intends to include with its 2011 budget submission. FAA’s action plan focuses on improving FAA’s positive response rates to selected survey items related to leadership and creating a performance culture. Some of these items are among those that OPM has identified as affecting an agency’s recruitment, motivation, and retention. FAA has included the activities envisioned for the Employee Engagement Steering Committee, as we have previously described, as part of its plan to improve leadership. These activities could help increase positive responses regarding employee empowerment and employees’ involvement in decisions that affect their work. FAA’s positive responses for these items were, respectively, 11 and 13 percentage points behind the rest of the federal government in the 2008 Federal Human Capital Survey. Additionally, the section of the plan focused on creating a performance culture contains actions aimed at improving the performance management system’s effectiveness, which FAA officials and union representatives criticized. For example, the plan includes providing training on effectively applying the performance management system, with the expectation of achieving a better understanding of how to use the system so that employees and managers establish clear performance expectations as a basis for ongoing performance feedback and coaching. 
The plan also includes improving FAA’s policies and evaluation process to better ensure that the performance management system is used effectively. These efforts represent a good start. MSPB noted that among the steps that agencies can take to improve collaboration is recruiting supervisors on the basis of their supervisory abilities, something that FAA has already begun to do. FAA is also striving to create a line of sight between an employee’s work and the goals of the agency, which is another step toward increased employee engagement, according to MSPB. Also, MSPB encourages agencies to ensure a good person-to-job fit. Based on survey responses, FAA appears to have achieved some success in this area. As figure 1 shows, the impact items “I like the kind of work I do,” “My work gives me a feeling of personal accomplishment,” and “My talents are used well in the workplace” received the three highest percentages of positive responses from FAA employees and were the closest to the percentage of positive responses from employees in the rest of the federal government. However, FAA has not established accountability for the plan’s success. Although the action plan sets a goal of a 7 percent improvement in positive response rates to the eight selected survey items, FAA has not made successful achievement of this goal a performance expectation for managers, according to FAA Human Resource Management officials. Because strong leadership is key to creating a performance culture, adding an expectation for improvement in survey responses could increase chances for success. Disclosing the plan and its actions, goals, and outcomes in publicly available reports to Congress, such as the annual performance and accountability report, would also help to ensure accountability. FAA has made good progress in developing a human capital system that addresses many of its reform objectives and has structures and processes that exhibit leading practices. 
FAA’s efforts to increase its workforce diversity by developing an expanded applicant pool are important, but they do not embrace the full range of leading diversity management practices. FAA could take steps to move beyond recruitment activities and toward diversity management by incorporating leading practices in future updates of its congressionally directed plans to increase diversity in the controller and aviation safety workforces. FAA employees’ dissatisfaction with their workplace, expressed in their responses to OPM’s Federal Human Capital Survey, suggests that FAA has not achieved the reform objective to be perceived as a desirable place to work, and could hinder FAA’s effort to recruit, motivate, and retain the workforce it needs for current and future missions. FAA’s Federal Human Capital Survey Action Plan represents a positive step to reach beyond the structures and processes of its human capital system and address the underlying causes of employee dissatisfaction with their workplace. Establishing accountability within FAA, and externally to Congress and the American people, represents the next step to increase the probability of the plan’s success. To ensure that FAA can hire, motivate, and retain the talented staff it needs to operate the national airspace system and implement the transition to NextGen, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following three actions: 1. ensure that key leading practices in diversity management are incorporated in future updates of FAA’s plans to increase diversity in the controller and aviation safety workforces; 2. hold its managers accountable for the outcomes of the Federal Human Capital Survey Action Plan by establishing a performance expectation that FAA managers will achieve the plan’s stated increases in positive responses to designated survey items; and 3. 
hold the agency accountable to Congress and the American people by disclosing the plan, actions, goals, and outcomes in publicly available reports to Congress, such as the annual performance and accountability report. In commenting on a draft of this report, the Department of Transportation generally agreed to consider GAO’s recommendations and provided technical corrections that GAO incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, the Administrator of the Federal Aviation Administration, and other parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Gerald L. Dillingham, Ph.D. Director, Physical Infrastructure Issues To determine how the components and practices of the Federal Aviation Administration’s (FAA) human capital system compare with those of leading organizations, we reviewed FAA documents and regulations—which detailed FAA policies and practices in the functional areas of workforce planning, training, recruiting and hiring, performance management, and diversity management. We also reviewed relevant studies by other organizations, including the National Academy of Public Administration. 
We discussed the structure and processes of FAA’s human capital system, and how the system is addressing FAA’s challenges, with officials from the Office of Human Resource Management and the Office of Civil Rights, and with FAA officials who have responsibility for implementing human capital procedures within each line of business—Commercial Space Transportation, Aviation Safety, Airports, and the Air Traffic Organization. We conducted our interviews with FAA officials at FAA headquarters in Washington, D.C., and, via teleconference, with officials in Oklahoma City, Oklahoma. Additionally, we obtained perspectives of organized labor on the human capital system through semistructured interviews with representatives of FAA’s four largest unions—the National Air Traffic Controllers Association (about 19,000 members); the Professional Aviation Safety Specialists (about 11,100 members); the American Federation of State, County, and Municipal Employees (about 2,200 members); and the American Federation of Government Employees (about 1,800 members)—which, collectively, represent about three-fourths of FAA’s workforce. We conducted a high-level comparison of FAA’s practices to leading practices. More detailed comparisons could disclose specific leading practices that FAA is not following, beyond those discussed in this report. We did not assess the effectiveness of FAA’s human capital system, because other factors—outside of FAA’s human capital system—may also affect FAA’s performance. To determine how FAA employees’ workplace satisfaction compares with that of other federal government employees, and what steps FAA is taking to improve workplace satisfaction, we reviewed FAA employee responses to the Office of Personnel Management’s (OPM) biennial Federal Human Capital Surveys for 2004, 2006, and 2008. We specifically analyzed the responses to 16 “impact items” that, according to OPM, make a difference in employee recruitment, motivation, and retention. 
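The comparison described above, in which each impact item's positive response rate for FAA is set against the rate for the rest of the federal government and the items are ranked by how far FAA trails, can be sketched in a few lines of Python. This is an illustrative sketch, not GAO's actual methodology, and all response rates below are hypothetical placeholders rather than actual survey results.

```python
# Illustrative sketch of the survey gap analysis: for each impact item,
# subtract FAA's percentage of positive responses from the rest of the
# federal government's, then rank items by how far FAA lags.
# All response rates below are hypothetical, not actual survey results.

faa_positive = {
    "involvement in decisions that affect my work": 40.0,
    "feeling of personal empowerment": 32.0,
    "I like the kind of work I do": 82.0,
}
gov_positive = {
    "involvement in decisions that affect my work": 53.0,
    "feeling of personal empowerment": 43.0,
    "I like the kind of work I do": 84.0,
}

# A positive gap means FAA is behind the rest of the government on that item.
gaps = {item: gov_positive[item] - faa_positive[item] for item in faa_positive}

# Items FAA trails most, largest gap first.
farthest_behind = sorted(gaps, key=gaps.get, reverse=True)

for item in farthest_behind:
    print(f"{item}: {gaps[item]:.0f} percentage points behind")
```

Ranking by gap rather than by raw response rate isolates the items where FAA lags its peers most, which is how the items discussed in this report were selected.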
Through document review and interviews with FAA officials, we determined FAA’s actions and plans to improve employee satisfaction for those items for which FAA employee satisfaction was the farthest behind the rest of the federal government. We conducted this performance audit from May 2008 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We also conducted reliability assessments of the data we obtained electronically and determined those data to be of sufficient quality to be used for the purposes of this report. Appendix II: Key Leading Practices and FAA’s Activities in Strategic Workforce Planning, Training, Recruitment and Hiring, and Performance Management Works with stakeholders in annually reviewing and updating the Federal Aviation Administration (FAA) Human Capital Plan. Has executives sign workforce planning documents for the workforces under their purview. Determine the critical skills and competencies that will be needed to achieve current and future programmatic results. Develops workforce plans for major organizational segments and for specific workforces, such as air traffic controllers; plans provide workforce demographics and strategies to address workforce challenges. Strives to achieve strategic alignment among people, goals, and mission accomplishments when annually updating workforce plans. Examines human capital challenges to determine the extent to which the current workforce, systems, and practices meet future business requirements and determines where challenges exist. Developed a 16-competency leadership success profile and revalidates it every 2 years through a managementwide survey. 
Mandates annual skill assessments for all managers and establishes recurring management training requirements. Uses a competency model for human resource specialists that includes numerous technical and general competencies. Enlisted the National Academy of Public Administration (NAPA) for assistance in determining the workforce competencies needed to lead and implement NextGen. Bases its human capital planning framework on guidance from the Office of Management and Budget, GAO, and the Office of Personnel Management (OPM). Developed an acquisition workforce plan that officials said will include acquisitions competencies, based in part on information from the NAPA Panel’s review. Partnered with agencies across the federal government to establish a federal certification program for program and project managers based on recognition of common, essential competencies. Develop strategies that are tailored to address gaps in the number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. Established a Senior Leadership Development Program to provide a pipeline of candidates to compete for leadership positions; uses leadership competencies to evaluate candidates for the program. Established a Program for Emerging Leaders to assess, develop, and demonstrate candidates’ management potential; provide structured training; and provide a corporate perspective of FAA; uses leadership competencies to evaluate candidates for the program. Participates in several governmentwide competency analysis efforts for mission critical workforces, such as information technologists, engineers, community planners, and human resource specialists. Identifies gaps in workforce competencies and formulates closure strategies. 
Established a Human Capital Planning Council to serve as an internal community of practice for workforce planning and to provide a focal point for sharing best practices and disseminating guidance. Engages in knowledge transfer by partnering with OPM, the Department of Transportation, the Chief Human Capital Officers Council, NAPA, GAO, and other government agencies; learning institutions; and the private sector to ensure that the best possible decisions are made through shared lessons learned, feedback, and expertise. Uses a six-stage process that includes aligning human capital policies, practices, and initiatives with the Flight Plan; scanning external trends to identify those that can affect the organization; scanning trends in workforce supply; and establishing and measuring progress against human capital goals. Established a system of accountability that holds the organization, managers, and human resource officers accountable for efficient and effective human resources management. Included accountability as a leadership competency. Established performance metrics in the Flight Plan for the time to fill vacancies, workplace injury rates, grievance processing time, and meeting staffing targets for air traffic controller and safety workforces. Uses an automated system to track performance. Administrator reviews progress on initiatives against performance targets and goals in monthly meetings. Posts performance results against the human capital goals on its Web site. Analyzes the results of employee surveys and takes action to improve specific areas. Establishes training programs that align with agency goals. Requires that training be based on a systematic analysis of the knowledge, skills, and abilities required to achieve the organization’s mission. Determines gaps between desired and current organizational performance, determines if training is the appropriate solution, and, if so, addresses those learning needs in annual business plans. 
Incorporates training strategies as a significant part of workforce planning efforts by, for example, establishing corporate employee training programs to build leadership competence within the FAA workforce, support professional development, and promote continuous learning. Developed criteria based on industry Instructional Design Standards to determine whether to design training programs internally or use an external source. FAA officials stated that these criteria also capture FAA’s processes for developing training internally. Uses a mixed approach to centralizing management of training programs. Conducts managerial and leadership training on an agencywide basis to enable consistent training across the agency. Conducts technical or professional training on a decentralized basis to accommodate subordinate organizations’ unique requirements. For example, officials from the Commercial Space line of business noted that, because much of its work is very technical and different from that of other lines of business, it designs and conducts much of its training internally. Conducts analyses to choose among different mixes of training delivery mechanisms. For example, FAA officials noted that FAA is moving to a more “blended approach” to training delivery, using computer-based training where possible, which can help ensure that an agencywide audience receives the same message. Implementation: Agency leaders communicate the importance of training and developing employees and foster an environment conducive to effective training and development. Encourages employees to work with their supervisors in creating individual development plans—identifying occupational performance requirements, job- and career-related learning needs, and learning strategies for meeting them—in conjunction with their annual performance plans. 
Communicates management team commitment to ensuring that adequate funds will be set aside for position-essential training and has noted that funding for training in support of employees’ career development will be provided when possible. Vests accountability for the enhanced performance of the workforce in the Office of Corporate Learning and Development and with line of business executives. The Office of Corporate Learning and Development is responsible for managerial/leadership training and is accountable for improvements in that area. The lines of business are responsible for developing technical and professional training and are accountable for employee technical performance. The Air Traffic Organization, for example, has several layers of training accountability for air traffic controller technical training, starting with the Vice President of Technical Training, who is responsible for Air Traffic technical training. Evaluation: Systematically plan for and evaluate the effectiveness of agency training and development efforts using the appropriate analytic approaches and performance data. Evaluates learning and development activities to determine how well these activities meet short- and long-range program needs. Establishes standards that address the quality of learning and development activities and delivery systems, achievement of learning objectives, impact on performance, accomplishment of organizational requirements and expectations, and written end-of-activity evaluation. Uses performance data from end-of-course evaluations to assess participant reaction, vendor and instructor performance, transfer of learning, learning outcomes, and the effectiveness of participatory learning techniques. Compiles training evaluation results for use in future planning. Assesses leadership skills every spring and uses the results of these assessments to refine the leadership curriculum for the following year. 
Requires lines of business and staff organizations to plan and justify their training needs as part of the regular budget process. Lines of business are responsible for maintaining records of all training activities, expenditures, and plans. Manages costs through electronic delivery of Web-based courses. Evaluate recruitment strategies to attract applicants. Monitors the effectiveness of the Air-Traffic Selection and Training tool—the screening test for air traffic controller applicants—and has commenced a study aimed at assessing the effectiveness of the tool over the long term. Uses the Chief Human Capital Officers Council Management Satisfaction survey to obtain hiring manager feedback on the hiring process and the quality of applicants. Evaluates the success of its strategies by setting performance targets and reporting on its success in meeting them in annual performance and accountability reports. Refines vacancy announcements so that they better incorporate key competencies and evaluates applicants against these competencies. Uses feedback collected from job applicants to redesign its systems to improve usability. Uses its Automated Staffing and Application system for most externally advertised vacancies to post vacancy announcements, obtain applications, rate and rank applicants, and complete the selection process. Uses on-the-spot hiring authority based on factors such as the number of applications received and the adequacy of job advertising efforts. Uses recruitment incentives when extreme difficulty has existed for a prolonged period of time in attracting an adequate number of candidates, or when necessary to attract a candidate with unique competencies critical to an important agency mission. Pays retention incentives when the unique qualifications of the employee or a special need for the employee’s services makes it essential to retain the employee and the employee would be likely to leave the federal service in the absence of a retention incentive. 
Use performance management system to improve performance by helping individuals see the connection between their daily activities and organizational goals and encouraging individuals to focus on their roles and responsibilities to help achieve these goals. Aligns individual performance expectations with organizational goals by requiring that each supervisor and employee develop a performance plan that describes employee tasks and responsibilities and provides a line of sight from those items to the goals in the Flight Plan. Clearly defines and widely communicates FAA’s performance expectations for the organization in FAA’s Flight Plan and business plans, and for individuals in employee performance plans. Ties performance-based pay increases to an employee’s “Impact on Organizational Success.” Rates employees on their ability to successfully set priorities and complete work that directly affects the ability of the organization to meet its performance objectives and deliver high-quality products and services. Use competencies to examine individual contributions to organizational results. Competencies, which define the skills and supporting behaviors that individuals are expected to exhibit to carry out their work effectively, can provide a fuller picture of an individual’s performance. Uses two types of performance standards to describe employee’s major responsibilities and expected outcomes. Uses “common” or “generic” performance standards when the work is repetitious in nature and is characterized by consistent processes, standard outcomes and expectations, and is task- or procedurally based. These plans contain standardized outcomes and expectations that apply to all employees doing the same or similar job(s). For example, all air traffic controllers are expected to recognize adverse and emergency situations and take timely corrective actions. 
Uses a customized plan that applies to specific individuals for their specific duties when work is programmatic in nature and is characterized by unique processes and diverse outcomes, and expectations are project-oriented. Tracks progress toward multiple agency performance targets, as outlined in the Flight Plan, and issues periodic performance reports. Reports the status quarterly on the agency Web site and annually in performance and accountability reports. Create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. A key aspect of implementing such a system is making meaningful distinctions in individual performance and appropriately rewarding those who perform at the highest level. Uses a two-phased process to appraise the performance of most employees and distribute performance-based pay increases. Uses a secondary pay decision process to make more meaningful performance distinctions in determining performance-based pay increases. Provides monetary awards for outstanding performance in categories such as ensuring safety, customer service, and leadership. Granted over $5 million in cash awards and over 64,000 hours of time off under the incentive awards program in fiscal year 2007. Established procedures to deal with unacceptable performers, which require that supervisors provide employees with an opportunity to improve before taking performance-based action. Employees are placed on a performance improvement plan, called an “Opportunity to Demonstrate Performance,” for a period of time as specified in the governing labor agreement and, if the employee remains on the plan at the end of the performance cycle, the employee does not receive a performance-based increase. Provide candid and constructive feedback to help individuals maximize their contribution and potential in understanding and realizing the goals and objectives of the organization. 
Requires that supervisors conduct meetings with their employees halfway through the performance cycle and provide ongoing, informal feedback on the employee’s progress against the performance plan and identify opportunities for improvement. Provides automatic notifications to managers that midpoint feedback and end-of-cycle reviews are due, to help ensure that these activities take place. Disseminated a broadcast message to all FAA managers to remind them of the midcycle review requirement, the due date, and the reasons for the review. Actively involve employees and stakeholders, such as unions or other employee associations, when developing results-oriented performance management systems in order to help improve employees’ confidence and belief in the fairness of the system and increase their understanding and ownership of organizational goals and objectives. Involved employees in Performance Management System design by conducting over 50 focus groups with more than 500 total participants across all lines of business, unions, and pay grades to verify employee survey results and to obtain employee suggestions for improving the system. Provided training for supervisors, managers, and employees when originally implementing its performance management system, by developing a variety of classroom training modules and interactive video sessions. Developed an instructional guide for employees on the performance management system. Developed a desk guide for managers on distributing superior contribution increases. Recently conducted a series of briefings regarding the pay for performance system. Provide adequate safeguards that help to ensure transparency, which can improve the credibility of the performance-based pay system by promoting fairness and trust. Established a process for handling disputes in its performance management system. The process is designed to be collaborative in nature, beginning with the jointly developed performance plans. 
FAA uses a grievance procedure or bargaining unit contracts, whichever apply, to resolve disputes and/or disagreements that are not resolved through the initial efforts. Publishes information for employees on internal Web sites about the results of performance pay decisions. In addition to the contact named above, Maria Edelstein, Assistant Director; Edmond Menoche, Senior Analyst; Sherwin Chapman; Peter DelToro; Jessica Evans; Colin Fallon; Cynthia Heckmann; Bert Japikse; Janice Latimer; Steven Lozano; Grant Mallie; Belva Martin; Sara Ann Moessbauer; Carol Petersen; Colleen Phillips; Mark Ramage; Beverly Ross; and Kiki Theodoropoulos made significant contributions to this report.
Aviation is critical to the nation's economic well-being, global competitiveness, and national security. The Federal Aviation Administration's (FAA) 48,000 employees guide aircraft, oversee safety, and maintain air traffic control equipment. FAA will need these skills and additional expertise to address evolving missions. As requested, GAO reviewed (1) how FAA's human capital system compares with practices of leading organizations and (2) how FAA employees' workplace satisfaction compares with that of other federal government employees. GAO reviewed documents and relevant studies, and interviewed FAA officials who implement human capital procedures and union representatives. GAO also reviewed survey data on workplace satisfaction. FAA's human capital system incorporates many practices used in leading organizations, but the agency's placement near the bottom in best places to work rankings, published by the Partnership for Public Service and American University's Institute for the Study of Public Policy Implementation, could pose challenges to employee recruitment, motivation, and retention. As part of strategic workforce planning, FAA determines the critical skills needed in its workforce and assesses individual worker skill levels. It also follows leading practices in performance management, but FAA officials and union representatives questioned the system's fairness, echoing concerns that they have raised in the past. FAA follows fewer leading practices in diversity management, but has an opportunity to strengthen its efforts as it updates diversity outreach plans. Despite these efforts, FAA ranked 214th out of 216 agencies in the 2009 Best Places to Work in the Federal Government rankings, similar to its ranking in 2007. These low rankings could pose obstacles to FAA's efforts to retain its existing workforce and recruit staff with the requisite skills needed to implement the Next Generation Air Transportation System.
By fiscal year 2013, FAA projects that 38 percent of its employees who perform work that is critical to FAA's mission will be eligible to retire. While FAA employee responses to governmentwide surveys indicate that they like their work, their responses are considerably less positive than those of the rest of the federal government regarding other factors that have an impact on employee recruitment, motivation, and retention. The percentages of FAA employees' positive responses regarding communications, involvement in decisions that affect their work, and respect for their leaders were up to 19 points below those of the rest of the federal government. FAA has developed an action plan to improve leadership and create a performance-based culture that could improve employees' workplace satisfaction. However, FAA has not established accountability for the plan's success.
Our work has repeatedly shown that mission fragmentation and program overlap are widespread in the federal government. In 1998 and 1999, we found that this situation existed in 12 federal mission areas, ranging from agriculture to natural resources and environment. We also identified, in 1998 and 1999, 8 new areas of program overlap, including 50 programs for the homeless that were administered by 8 federal agencies. These programs provided services for the homeless that appeared to be similar. For example, 23 programs operated by 4 agencies offered housing services, and 26 programs administered by 6 agencies offered food and nutrition services. Although our work indicates that the potential for inefficiency and waste exists, it also shows areas where the intentional participation by multiple agencies may be a reasonable response to a complex public problem. In either situation, implementation of federal crosscutting programs is often characterized by numerous individual agency efforts that are implemented with little apparent regard for the presence or efforts of related activities. In our past work, we have offered several possible approaches for better managing crosscutting programs—such as improved coordination, integration, and consolidation—to ensure that crosscutting goals are consistent; program efforts are mutually reinforcing; and, where appropriate, common or complementary performance measures are used as a basis for management. One of our oft-cited proposals is to consolidate the fragmented federal system to ensure the safety and quality of food. Perhaps most important, however, we have stated that the Results Act could provide the Office of Management and Budget (OMB), agencies, and Congress with a structured framework for addressing crosscutting program efforts. OMB, for example, could use the governmentwide performance plan, which is a key component of this framework, to integrate expected agency-level performance.
It could also be used to more clearly relate and address the contributions of alternative federal strategies. Agencies, in turn, could use the annual performance planning cycle and subsequent annual performance reports to highlight crosscutting program efforts and to provide evidence of the coordination of those efforts. OMB guidance to agencies on the Results Act states that, at a minimum, an agency’s annual plan should identify those programs or activities that are being undertaken with other agencies to achieve a common purpose or objective, that is, interagency and cross-cutting programs. This identification need cover only programs and activities that represent a significant agency effort. An agency should also review the fiscal year 2003 performance plans of other agencies participating with it in a crosscutting program or activity to ensure that related performance goals and indicators for a crosscutting program are consistent and harmonious. As appropriate, agencies should modify performance goals to bring about greater synergy and interagency support in achieving mutual goals. In April 2002, as part of its spring budget planning guidance to agencies for preparing the President’s fiscal year 2004 budget request, OMB stated that it is working to develop uniform evaluation metrics, or “common measures” for programs with similar goals. OMB asked agencies to work with OMB staff to develop evaluation metrics for several major crosscutting, governmentwide functions as part of their September budget submissions. According to OMB, such measures can help raise important questions and help inform decisions about how to direct funding and how to improve performance in specific programs. OMB’s common measures initiative initially focused on the following crosscutting program areas: job training and employment, health. 
We recently reported that one of the purposes of the Reports Consolidation Act of 2000 is to improve the quality of agency financial and performance data. We found that only 5 of the 24 CFO Act agencies’ fiscal year 2000 performance reports included assessments of the completeness and reliability of their performance data in their transmittal letters. The other 19 agencies discussed, at least to some degree, the quality of their performance data elsewhere in their performance reports. To address these objectives, we first defined the scope of each crosscutting program area as follows: Drug control focuses on major federal efforts to control the supply of illegal drugs through interdiction and seizure, eradication, and arrests. Family poverty focuses on major federal efforts to address the needs of families in poverty through programs aimed at enhancing family independence and well-being. We focused on agencies that provide income, health, and food support and assistance to poor families, as well as key support and transition tools. Financial institution regulation focuses on major federal efforts to supervise and regulate depository institutions. Supervision involves monitoring, inspecting, and examining depository institutions to assess their condition and their compliance with relevant laws and regulations. Regulation of depository institutions involves making and issuing specific regulations and guidelines governing the structure and conduct of banking. Public health systems focuses on major federal efforts to prevent and control infectious diseases within the United States. To identify the agencies involved in each area we relied on our previous work and confirmed the agencies involved by reviewing the fiscal year 2001 Results Act performance report and fiscal year 2003 Results Act performance plans for each agency identified as contributing to the crosscutting program area.
To address the remaining objectives, we reviewed the fiscal year 2001 performance reports and fiscal year 2003 performance plans and used criteria contained in the Reports Consolidation Act of 2000 and OMB guidance. The act requires that an agency’s performance report include a transmittal letter from the agency head containing, in addition to any other content, an assessment of the completeness and reliability of the performance and financial data used in the report. It also requires that the assessment describe any material inadequacies in the completeness and reliability of the data and the actions the agency can take and is taking to resolve such inadequacies. OMB guidance states that agency annual plans should include a description of how the agency intends to verify and validate the measured values of actual performance. The means used should be sufficiently credible and specific to support the general accuracy and reliability of the performance information that is recorded, collected, and reported. We did not include any changes or modifications the agencies may have made to the reports or plans after they were issued, except in cases in which agency comments provided information from a published update to a report or plan. Furthermore, because of the scope and timing of this review, information on the progress agencies may have made in addressing their management challenges during fiscal year 2002 was not yet available. We did not independently verify or assess the information we obtained from agency performance reports and plans. Also, the fact that an agency chose not to discuss its efforts to coordinate in these crosscutting areas in its performance reports or plans does not necessarily mean that the agency is not coordinating with the appropriate agencies. We conducted our review from September through November 2002, in accordance with generally accepted government auditing standards.
As shown in table 1, multiple agencies are involved in each of the crosscutting program areas we reviewed. The discussion of the crosscutting areas below summarizes detailed information contained in the tables that appear in appendixes I through IV. Fourteen million Americans use illegal drugs regularly, and drug-related illness, death, and crime cost the nation approximately $110 billion annually. From 1990 through 1997, there were more than 100,000 drug-induced deaths in the United States. Despite U.S. and Colombian efforts, the illegal narcotics threat from Colombia continues to grow and become more complex. From 1995 through 1999, coca cultivation and cocaine production in Colombia more than doubled and Colombia became a major supplier of the heroin consumed in the United States. Moreover, over time, the drug threat has become more difficult to address. ONDCP was established by the Anti-Drug Abuse Act of 1988 to set policies, priorities, and objectives for the nation's drug control program. The Director of ONDCP is charged with producing the National Drug Control Strategy, which directs the nation's antidrug efforts and establishes a budget and guidelines for cooperation among federal, state, and local entities. ONDCP’s 2001 Annual Report discussed two strategic goals that pertain to controlling the supply of drugs that enter the United States, including (1) “shielding U.S. borders from the drug threat” and (2) “reducing the supply of illegal drugs.” ONDCP reported two performance goals under the strategic goals—reduce the rate of illicit drug flow through transit zones and reduce the shipment rate of illicit drugs from arrival zones and supply zones. For fiscal year 2001, all the agencies we reviewed—Justice, State, Transportation, and Treasury—discussed coordination with other agencies in the area of drug control, although the level of detail varied.
For example, Transportation stated that the Coast Guard worked with ONDCP and Customs to finalize an interagency study of the deterrent effect that interdiction has on drug trafficking organizations. Also, Justice reported that it collaborated with Transportation to prosecute cases that relate to maritime drug smuggling. In contrast, State identified the lead and partner agencies it coordinated with to accomplish its goals, but it did not discuss specific coordination efforts. None of the agencies distinguished between coordination efforts that occurred in fiscal year 2001 and those that were planned for fiscal year 2003. None of the agencies reported having met all of their goals and measures relating to drug control in fiscal year 2001. Customs reported that it met eight of its nine measures for seizures of cocaine, marijuana, and heroin. Customs reported that it did not meet its target for number of marijuana seizures. State reported that it met the targets for its goal of increasing foreign governments’ effectiveness in dissolving major drug trafficking organizations and prosecuting and convicting major traffickers. For the other goal—increasing foreign governments’ effectiveness in reducing the cultivation of coca, opium poppy, and marijuana—State did not meet two of its four targets. For its two measures, Transportation reported that it did not establish a target for one, the amount of drugs that are seized or destroyed at sea, and it did not meet its target for the other, the seizure rate for cocaine that is shipped through the transit zone. Justice reported that it exceeded the target for one measure—number of priority drug trafficking organizations dismantled or disrupted by the Drug Enforcement Administration (DEA)—and did not meet one of two targets for the second measure—the number of drug trafficking organizations dismantled by the Federal Bureau of Investigation (FBI). 
The four agencies we reviewed—Justice, State, Transportation, and Treasury—provided explanations for not meeting their fiscal year 2001 goals that appeared reasonable. For example, Customs, which is under Treasury, stated that although it did not meet its target for the number of marijuana seizures, it seized more pounds of marijuana in fiscal year 2001 than in any other year. Customs stated that it believes that the number of seizures dropped because of an overall increase in sizes of marijuana loads. Furthermore, it stated that the heightened state of alert on the border following the events of September 11, 2001, might have deterred the entrance into the country of hundreds of smaller, personal-sized loads. However, none of the agencies discussed strategies for achieving the unmet goals and measures in the future. According to their fiscal year 2003 performance plans, the agencies we reviewed expected to make progress on goals similar to those established for fiscal year 2001. All of Treasury’s performance targets were adjusted to reflect higher anticipated levels of performance. Justice and State reflected a mixture of higher and lower anticipated levels of performance. Although its goals remained the same, Transportation had measures that differed from those reported in fiscal year 2001. Justice and Transportation provided strategies that appear reasonably linked to achieving their goals for fiscal year 2003. For the goal of reducing the supply and use of drugs in the United States, Justice stated that the nine Organized Crime Drug Enforcement Task Force (OCDETF) teams would coordinate to develop a national priority target list of the most significant drug and money laundering organizations. As drug organizations are dismantled and more organizations are identified, the OCDETF teams will monitor their progress and modify the target list.
To achieve its target for the amount of drugs seized or destroyed at sea, Transportation stated that the Coast Guard will (1) operate along maritime routes to deter attempts to smuggle drugs and (2) finalize an interagency study that focuses on the deterrent impact that interdiction has on drug trafficking organizations. Customs did not discuss any strategies for achieving its fiscal year 2003 goals. State provided only general statements about how it planned to achieve its fiscal year 2003 goals. Justice, Transportation, and Treasury each commented on the overall quality and reliability of its data. For example, in its combined report and plan, Justice states that to ensure that data contained in this document are reliable, each reporting component was surveyed to ensure that the data reported met the OMB standard for data reliability. Data that did not meet this standard were not included in the report and plan. These agencies also discussed the quality of specific performance data in their fiscal year 2001 performance reports to various degrees. In its fiscal year 2001 performance report, Justice provided a discussion of data verification and validation for each performance measure. For example, for the measure of drug trafficking organizations dismantled by the FBI, an FBI field manager reviewed and approved data that were entered into the system and the data were verified through the FBI’s inspection process. Transportation reported that it used data entry software to ensure data quality and consistency by employing selection lists and logic checks. Also, Transportation stated that internal analysis and review of published data by external parties helps identify errors. Furthermore, Customs reported on the completeness, reliability, and credibility of its performance data by discussing how it verifies the data for each performance measure. State did not report on the completeness, reliability, and credibility of its performance data.
While Justice, Transportation, and Treasury acknowledged shortcomings in their performance data, they did not report steps to resolve or minimize these shortcomings. Justice reported one shortcoming: the need to improve its reporting system for one measure—number of priority drug trafficking organizations dismantled or disrupted by DEA. Transportation stated that although data verification and validation occurs several times in the data reporting process, a potential limitation to the accuracy of its data could stem from data duplication and coding errors. Customs reported that while its data could be considered reliable, the data could be subject to input errors or duplicative reporting not identified by reviewers. State did not report on shortcomings in its performance data. Federal government agencies have major programs aimed at supporting families classified as poor. For example, HHS’s Temporary Assistance for Needy Families (TANF) program makes $16.8 billion in federal funds available to states each year. While TANF delegates wide discretion to the states to design and implement the program, it does specify four broad program goals that focus on children and families: providing assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; ending the dependence of needy parents on government benefits by promoting job preparation, work, and marriage; preventing and reducing the incidence of out-of-wedlock pregnancies; encouraging the formation and maintenance of two-parent families. In addition, Agriculture’s Food Stamp Program helps low-income individuals and families obtain a more nutritious diet by supplementing their incomes with food stamp benefits. Agriculture’s Food and Nutrition Service and the states jointly implement the Food Stamp Program, which provided about $15 billion in benefits to over 17 million low-income individuals in the United States during fiscal year 2000.
In 1998, Congress passed the Workforce Investment Act (WIA) to consolidate services of many employment and training programs, mandating that states and localities use a centralized service delivery structure—the one-stop center system—to provide most federally funded employment and training assistance. We previously reported that several challenges, including program differences between TANF and WIA and different information systems used by welfare and workforce agencies, inhibit state and local coordination efforts. For example, different program definitions, such as what constitutes work, as well as complex reporting requirements under TANF and WIA hamper state and local coordination efforts. Though some states and localities have found creative ways to work around these issues, the differences remain barriers to coordination for many others. For example, antiquated welfare and workforce information systems are often not equipped to share data with each other, and as a result, sometimes one-stop center staff members have to enter the same client data into two separate systems. Although HHS and Labor have each provided some assistance to the states on how to coordinate services, available guidance has not specifically addressed the challenges that many continue to face. Moreover, HHS and Labor have not addressed differences in program definitions and reporting requirements under TANF and WIA. To address the obstacles to coordination, we recommended that HHS and Labor work together to develop ways to jointly disseminate information on how some states and localities have taken advantage of the flexibility afforded to them under TANF and WIA to pursue coordination strategies to address some of these obstacles to coordination. We also recommended that HHS and Labor, either individually or jointly, promote research that would examine the role of coordinated service delivery on outcomes of TANF clients.
The agencies we reviewed generally discussed in their performance reports and plans their efforts to coordinate with other federal agencies on programs that address family poverty. Three major interagency task forces bring all of the agencies we reviewed, plus others, together to coordinate on such programs: (1) the Interagency Council on the Homeless, which includes such federal entities as HUD, HHS, Agriculture, Commerce, Education, Energy, Justice, Labor, Defense, Transportation, Veterans Affairs, the Social Security Administration, the Federal Emergency Management Agency, the General Services Administration, and the U.S. Postal Service, (2) OMB’s Workforce Investment Act Committee, which includes HUD, Labor, HHS, and Education, to address the nation’s employment issues, and (3) the Workforce Excellence Network, which comprises Education, HHS, and Labor and conducts two major national conferences each year, at which Labor is able to “showcase” its best WIA programs. In addition, three of the five agencies we reviewed identified individual coordination efforts outside these task forces and specified the programs on which they coordinated. For example, HHS’s ACF reported that it works with Labor in Welfare-to-Work (WtW) and WIA efforts, Transportation in their Access to Jobs program, Education in providing education and training services, and HUD in providing housing assistance. Agriculture and HHS’s CMS stated that they coordinated with other agencies, but did not specify the agencies or the types of coordination efforts. The agencies we reviewed reported varied progress in achieving their fiscal year 2001 goals and measures. For example, CMS reported meeting two goals, partially meeting one goal, and not meeting a fourth goal related to family poverty.
For its goal of promoting self-sufficiency and asset development, HUD reported meeting the targets for seven of its performance indicators, missing or expecting to miss six targets, not having enough data for one target, and establishing baselines for four of its 18 performance indicators. Incomplete data prevented Agriculture, ACF, and HUD from reporting on all of their measures. For example, ACF was unable to report on its progress for 18 of its 23 performance indicators related to three of its goals linked to family poverty due to the time lag in receiving and validating data from states, localities, and other program partners. However, ACF was able to report that it fell short in achieving its targets for the five performance indicators related to two of its goals: improving the quality of child care and the Head Start Health Status program. All of the agencies provided explanations that appeared reasonable for not meeting their goals. For example, ACF reported that two factors contributed to its failure to meet two of the three targets for its goal of improving Head Start Health Status: (1) a high student turnover rate hindered the students’ receipt of health care despite Head Start’s medical referrals and (2) Medicaid’s inability to cover dental and mental health treatment for Head Start students prevented them from receiving proper care. In addition, these agencies generally provided strategies that appeared reasonably linked to achieving the unmet goals in the future. For example, Labor outlined strategies to address its two unmet goals relating to higher wages for and retention of WtW participants in the workforce and increasing the number of child care apprenticeship programs and apprentices.
Specifically, Labor proposed making retention of WtW participants more attractive by increasing grantees’ use of tax credits and continuing the Pathways to Advancement pilot project, which subsidizes employers, upgrades and advances current TANF “alumni,” and validates data at the program level, among other strategies. For their fiscal year 2003 plan, the agencies we reviewed generally set goals similar to those established for fiscal year 2001, but increased the targets to reflect anticipated higher levels of performance. The exception to this consistency was HUD, which reported that the draft of its updated strategic plan for fiscal years 2000 through 2006 affected the fiscal year 2003 performance plan framework. The new framework introduced eight strategic goals, two of which addressed family poverty. Objectives included helping families in public and assisted housing make progress toward self-sufficiency and become homeowners, ending chronic homelessness in 10 years, and helping homeless individuals and families move to permanent housing. Four of the five agencies we reviewed— Agriculture, ACF, HUD, and Labor—provided reasonable strategies for achieving at least one of their fiscal year 2003 goals related to family poverty. For example, Labor lists departmentwide means and strategies for meeting all of its goals, most of which are to continue or improve preexisting efforts. Following the list, Labor describes eight significant new or enhanced efforts in fiscal year 2003. For its goal of having states develop a baseline and methodology for measuring the immunization of 2-year-old children under Medicaid, CMS discusses time frames for the development of each state’s baseline measure and reporting methodology, but it does not describe specific strategies for how it intends to achieve its targets for this area. All of the agencies we reviewed addressed data quality issues in some form, although the degree to which such issues were addressed varied.
Three of the five agencies—Agriculture, HUD, and Labor—included a broad statement at the beginning or end of their reports or plans stating that the reported data were generally reliable. Because all of the agencies we reviewed rely on data from the states and other grantees to report on performance for at least one of their goals, they reported on the difficulty of obtaining quality data in a timely manner. However, all of the agencies reported that they have methods for reviewing the performance data for consistency and completeness. For example, CMS stated that it had built-in quality assurance checks, technical consultants, and a review of data by CMS personnel. In addition, the agencies generally acknowledged shortcomings in the data and discussed steps they were taking to resolve or minimize the shortcomings. For example, HUD reported that it is discontinuing or updating the 18 performance indicators we reviewed in its fiscal year 2001 report because of its inability to address data reliability issues and because the connection between the indicators and the outcome measure was unknown, among other reasons. For the estimated data, HUD stated that accurate numbers would be reported in its fiscal year 2002 performance report if adjustments were necessary. Financial regulation of depository institutions in the United States is a highly complex system. Federal responsibilities for regulation and supervision are assigned to five federal regulators: FDIC, the Board of Governors of the Federal Reserve System, NCUA, OCC, and OTS. FDIC is the primary federal regulator and supervisor for federally insured state-chartered banks that are not members of the Federal Reserve System and for state savings banks whose deposits are federally insured. The Board is the federal regulator and supervisor for bank-holding companies and is the primary federal regulator for state-chartered banks that are members of the Federal Reserve System.
OCC is the primary regulator of federally chartered banks or national banks. OTS is the primary regulator of all federal and state-chartered thrifts whose deposits are federally insured and their holding companies. NCUA is the primary federal regulator for credit unions. A primary objective of federal depository institution regulators is to ensure the safe and sound practices and operations of individual depository institutions through regulation and supervision. Regulation of depository institutions involves making and issuing specific regulations and guidelines governing the structure and conduct of banking. Supervision involves the monitoring, inspecting, and examining of depository institutions to assess their condition and their compliance with relevant laws and regulations. Each federal depository regulator is responsible for its respective institutions; for example, the Board examines and regulates state member banks and OCC examines and regulates national banks. Although the Board, FDIC, OCC, OTS, and NCUA are responsible for specific depository institutions, all of the agencies have similar oversight responsibilities for developing and implementing regulations, conducting examinations and off-site monitoring, and taking enforcement actions for those institutions that are under their respective purview. To ensure that depository institutions are receiving consistent treatment in examinations, enforcement actions, and regulatory decisions, coordination among the regulators is essential. In 1979, Congress established the Federal Financial Institutions Examination Council (FFIEC) to promote uniformity in the supervision of depository institutions by the Board, FDIC, NCUA, OCC, and OTS. It is a formal interagency body empowered to prescribe uniform principles, standards, and report forms for the federal examination of financial institutions and to make recommendations to promote uniformity in the supervision of financial institutions. 
Generally, the performance reports and plans of the federal depository institution regulators discussed possible coordination on crosscutting goals. The performance reports and plans of FDIC, OCC, and OTS described the types of coordination that they conduct with the other regulators. The Board’s 2002-2003 plan includes a section entitled “Interagency Coordination of Crosscutting Issues,” which stated that the Board formally coordinates with the other federal depository institution regulators through the FFIEC and through its participation in the Results Act Financial Institutions Regulatory Working Group, a coordinating committee of the depository institution regulators that addresses and reports on issues of mutual concern. The performance report and plan of NCUA did not include any discussion of coordination efforts with the other federal depository institution regulators. In 2001 and 2002, the federal depository institution regulators jointly issued guidance or regulations on a number of occasions. For example, the regulators jointly issued guidance in areas such as the risks of brokered and other rate-sensitive deposits, temporary balance sheet growth, clarification on the accounting and reporting for loans held for sale, and consumer privacy. In addition, earlier this year, the federal depository institution regulators jointly issued proposed regulations to implement section 326 of the USA Patriot Act on customer identification. In 2001, they jointly issued guidelines on safeguarding confidential customer information. On the basis of their fiscal year 2001 performance reports, all the federal depository institution regulators reported they made progress in achieving their fiscal year 2001 goals for the supervision and regulation function. 
The Board, FDIC, and OCC each reported meeting all of their goals except for one related to the examinations of depository institutions that were due for a safety and soundness examination in 2001. However, each of the three agencies provided a reasonable explanation for not achieving the goal. FDIC was unable to examine 11 banks that were scheduled for an examination for the following reasons: some institutions merged or converted their charters, some institutions moved into or changed their capital categories, requiring a change in examination intervals, and one institution converted its information system. The Board did not meet its goal because it failed to complete 17 bank examinations that were required by statute or on the basis of the banks’ financial condition in 2001, but it provided an appropriate reason for the delay—scheduling problems with state bank regulatory agencies. The Board reported that it is implementing a new scheduling system that will partially resolve these problems. NCUA reported it generally met its performance goals, although for two of its four strategic goals it missed one of five outcome goals, and for another it was unable to report on most of the outcome goals. OTS reported meeting all of its goals. On the basis of their fiscal year 2003 performance plans, three of the five federal depository institution regulators designed strategies to achieve their performance goals that appear to be reasonable. Similar to the fiscal year 2001 performance reports, the performance goals focused on the scheduling of examinations under specific time frames, enforcement actions, and reviewing compliance with consumer protection statutes relating to consumer financial transactions. The Board’s performance plan outlined strategies that appeared reasonably linked to achieving its goals and objectives for promoting a safe, sound, competitive, and accessible banking system. 
For example, the Board’s plan proposed focusing on the areas of highest risk, promoting sound risk management practices, understanding and accommodating the effects of financial innovation and technology, improving international banking and supervisory practices, and refining and strengthening the foreign bank organizations program, among other strategies. The FDIC performance plan included a strategy for achieving its planned performance goals that also appeared reasonable. For example, FDIC plans to analyze examination-related data collected in the System of Uniform Reporting of Compliance and Community Reinvestment Act (CRA) Examination to determine whether it achieved targeted performance levels during the reporting period. In its performance plan, OCC discussed strategies for each of its strategic goals. OTS discussed general strategies, which were not clearly linked to particular performance goals. Of the five regulators, only the performance reports of OCC and OTS commented on the completeness, reliability, and credibility of the data for the supervision and regulation function. OCC’s performance report for fiscal year 2001 concluded the data were accurate for some of the performance measurements used in the report. In its fiscal year 2001 performance report, OTS concluded that the data for its performance measures met standards for accuracy and auditability. The performance reports issued by the Board, FDIC, and NCUA did not discuss whether the performance data for the supervision and regulation areas used in the reports were complete, reliable, and credible. None of their performance reports commented on the potential shortcomings of these data. Broadly speaking, federal involvement in the area of public health systems encompasses a mix of efforts to maintain the health of a diverse population, such as directly providing health services, regulating prescription drugs, or paying for medical services provided to the aged and the needy. 
In this report, we focused on one aspect of the public health system—federal efforts to prevent and control infectious diseases within the United States. The spread of infectious diseases is a public health problem once thought to be largely under control. However, outbreaks over the last decade illustrate that infectious diseases remain a serious public health threat. For example, foodborne disease in the United States annually causes an estimated 76 million illnesses, 325,000 hospitalizations, and about 5,000 deaths, according to the Centers for Disease Control and Prevention (CDC). The resurgence of some infectious diseases is particularly alarming because previously effective forms of control are breaking down. For example, some pathogens (disease-causing organisms) have become resistant to antibiotics used to bring them under control or have developed strains that no longer respond to the antibiotics. The need for concerted efforts to prevent such diseases is critical to reducing this threat to the public. We have previously reported on various aspects of protecting public health, such as ensuring the vaccination of children through the Vaccines for Children program and limitations in several of CDC’s foodborne disease surveillance systems. Agriculture and each of the five components of HHS we reviewed—CDC, CMS, FDA, HRSA, and NIH—discussed in their performance reports and performance plans coordination efforts with other agencies related to preventing infectious diseases. For example, CDC reported that it coordinated with (1) Agriculture and FDA on its food safety programs, (2) HRSA, CMS, FDA, and NIH, among others, on its immunization objectives, and (3) NIH and FDA on the development of new diagnostic and treatment tools and better vaccines for tuberculosis. Also, Agriculture reported that it coordinated with HHS and the Environmental Protection Agency regarding the goal to protect the public health by reducing the incidence of foodborne illnesses. 
However, none of the agencies discussed specific details about the coordination. According to its combined fiscal year 2001 performance report and fiscal year 2003 performance plan, NIH was the only agency that reported achieving its public health systems goal—to develop new or improved approaches for preventing or delaying the onset or progression of disease and disability. Agriculture, FDA, and CDC each reported missing some of its performance targets. In addition, CDC, CMS, and HRSA lacked data to report on some or all of their performance goals for fiscal year 2001. For example, HRSA indicated that the performance data for its goal—increase the proportion of the national AIDS education and training center (AETC) interventions provided to minority health care providers—will not be collected until February 2003. Three agencies—CDC, FDA, and Agriculture—provided explanations for not meeting a measure or goal that appeared reasonable. For example, FDA reported that it missed its target—inspect 90 percent of high-risk domestic food establishments each year—because the agency purposefully diverted resources for these inspections to focus on the even greater threat of bovine spongiform encephalopathy that was breaking out in Europe at the time. None of these agencies discussed strategies to achieve the unmet goals in the future. For fiscal year 2003, HHS’s CDC, CMS, FDA, and HRSA, and Agriculture, reported they expect to make progress on goals that were generally the same as those they reported on in fiscal year 2001. NIH developed two new subgoals for its goal of developing new or improved approaches to preventing or delaying the onset or progression of disease and disability, but did not indicate targets for the new goals. CDC developed a new goal of conducting research to identify and assess community-based prevention interventions. 
HRSA plans to drop one of its goals—“increase the number of minority health care and social service providers who receive training in AETCs”—because measuring the percentage of training interventions provided to minority health providers was determined to be a more accurate and appropriate method to measure the program’s progress in training health care providers. CMS and HRSA reported that they expected to achieve higher levels of performance for all of their targets. CDC, FDA, and Agriculture planned for a mixture of higher and lower levels of performance in fiscal year 2003. Agriculture and three of the five HHS components we reviewed discussed strategies that appeared reasonably linked to achieving their fiscal year 2003 goals. For example, Agriculture reported that its performance goal—create a coordinated national and international food safety risk management system to ensure safety of U.S. meat and poultry—has a set of specifically outlined strategies to follow in order to accomplish the goal, including (1) develop national performance standards for ready-to-eat meat and poultry items, (2) ensure food safety requirements are followed by monitoring slaughter and process plants, and (3) increase reviews of foreign inspection systems to ensure the safety of imported meat, poultry, and egg products. In contrast, NIH and HRSA did not discuss strategies for achieving their fiscal year 2003 goals. Agriculture and NIH commented on the overall quality and reliability of the performance data in their fiscal year 2001 performance reports. For example, NIH’s progress toward meeting its goals was assessed by its GPRA Assessment Working Group, which reviewed the performance data. In addition, CDC, CMS, and Agriculture discussed aspects of data quality for each of their performance measures. For example, CDC’s combined report and plan addresses data verification and validation for each data source corresponding to each goal. 
FDA and HRSA discussed narrow aspects of data quality for certain measures. FDA and HRSA acknowledged shortcomings in their performance data and reported steps to resolve or minimize those shortcomings. For example, FDA stated that existing public health data systems are not adequate to provide accurate and comprehensive baseline data needed to draw direct relationships between FDA’s regulatory activities and changes in the number and types of foodborne illnesses that occur annually in the United States. Therefore, through coordination with CDC and Agriculture, FDA reported developing an improved food safety surveillance program called FoodNet. HRSA reported limitations related to its HIV/AIDS data collection efforts. For example, the reporting system that holds the data contains duplicate data about individuals that prevents accurate conclusions from being made. To minimize the limitations, HRSA reported it allows grantees the option of participating in a client-level reporting system. CDC and CMS acknowledged shortcomings in their data but did not discuss steps to minimize the shortcomings. NIH and Agriculture did not discuss any limitations to their performance data in the area of public health systems. We have previously stated that the Results Act could provide OMB, agencies, and Congress with a structured framework for addressing crosscutting program efforts. In its guidance, OMB clearly encourages agencies to use their performance plans as a tool to communicate and coordinate with other agencies on programs being undertaken for common purposes to ensure that related performance goals and indicators are consistent and harmonious. We have also stated that the Results Act could also be used as a vehicle to more clearly relate and address the contributions of alternative federal strategies. 
The President’s common measures initiative, by developing metrics that can be used to compare the performance of different agencies contributing to common objectives, appears to be a step in this direction. Some of the agencies we reviewed appear to be using their performance reports and plans as a vehicle to assist in collaborating and coordinating program areas that are crosscutting in nature. Those that provided more detailed information on the nature of their coordination provided greater confidence that they are working in concert with other agencies to achieve common objectives. Other agencies do not appear to be using their plans and reports to the extent they could to describe their coordination efforts to Congress, citizens, and other agencies. Furthermore, the quality of the performance information reported—how agencies explain unmet goals and discuss strategies for achieving performance goals in the future, and overall descriptions of the completeness, reliability, and credibility of the performance information reported—varied considerably. Although we found a number of agencies that provided detailed information about how they verify and validate individual measures, only 5 of the 10 agencies we reviewed for all the crosscutting areas commented on the overall quality and reliability of the data in their performance reports consistent with the requirements of the Reports Consolidation Act. Without such statements, performance information lacks the credibility needed to provide transparency of government operations so that Congress, program managers, and other decision makers can use the information. We sent drafts of this report to the respective agencies for comments. We received comments from Agriculture, the Board, FDIC, HHS, HUD, Labor, and Treasury, including OCC and OTS. The agencies generally agreed with our findings. The comments we received were mostly technical, and we have incorporated them where appropriate. 
Regarding drug control, Justice, through its Office of Legal Policy, commented that, as of November 2002, it had formalized increased cooperation with ONDCP on drug policy and operations. Regarding public health systems, the NIH component of HHS commented that the prevention goal GAO looked at is one of five goals that together give a comprehensive picture of the performance of NIH’s research program. Furthermore, NIH commented that there are many formal and informal ways in which it coordinates its work in the prevention arena that are not reflected in its performance plan. For example, NIH cites the Next-Generation Smallpox Vaccine Initiative, an intradepartmental task force consisting of representatives from the Office of Public Health Policy, CDC, FDA, and NIH. We acknowledge this limitation in the scope and methodology section of the report. Regarding family poverty, HUD commented that, although GAO’s review focused on two of HUD’s eight goals, it believes all of its goals and many of its indicators have an impact on family poverty. We do not dispute HUD’s assertion that many of its goals address family poverty broadly. However, we focused on the goals that appeared to be most directly related to the scope we defined in our scope and methodology section. Regarding financial institution regulation, FDIC commented that a lack of specific reference in the performance report regarding the completeness, reliability, and credibility of the data should not lead to a negative inference. We are sending copies of this report to the President, the Director of the Office of Management and Budget, the congressional leadership, other Members of Congress, and the heads of major departments and agencies. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Elizabeth Curda at (202) 512-6806 or [email protected]. 
Major contributors to this report are listed in appendix V. In addition to the individual named above, the following individuals made significant contributions to this report: Steven J. Berke, Lisa M. Brown, Amy M. Choi, Peter J. Del Toro, Nancy M. Eibeck, and Debra L. Johnson. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
GAO's work has repeatedly shown that mission fragmentation and program overlap are widespread in the federal government. Implementation of federal crosscutting programs is often characterized by numerous individual agency efforts that are implemented with little apparent regard for the presence and efforts of related activities. GAO has in the past offered possible approaches for managing crosscutting programs, and has stated that the Government Performance and Results Act could provide a framework for addressing crosscutting efforts. GAO was asked to examine the actions and plans agencies reported in addressing the crosscutting issues of drug control, family poverty, financial institution regulation, and public health systems. GAO reviewed the fiscal year 2001 performance reports and fiscal year 2003 performance plans for the major agencies involved in these issues. GAO did not independently verify or assess the information it obtained from agency performance reports and plans. On the basis of the reports and plans, GAO found the following: (1) Most agencies involved in the crosscutting issues discussed coordination with other agencies in their performance reports and plans, although the extent of coordination and level of detail provided varied considerably; (2) Most of the agencies we reviewed reported mixed progress in achieving their fiscal year 2001 goals--meeting some goals, missing others, or not reporting on progress. Some of the agencies that did not meet their goals provided reasonable explanations and/or strategies that appeared reasonably linked to meeting the goals in the future; and (3) The agencies GAO reviewed generally planned to pursue goals in fiscal year 2003 similar to those in 2001, although some agencies added new goals or dropped existing goals altogether. Many agencies discussed strategies that appeared to be reasonably linked to achieving their fiscal year 2003 goals.
Subject to the authority, direction, and control of the Secretary of Defense, each military service (Army, Navy, Marine Corps, and Air Force) has the responsibility to recruit and train a force to conduct military operations. In fiscal year 2006, DOD committed over $1.5 billion to its recruiting effort. Each service, in turn, has established a recruiting command responsible for that service’s recruiting mission and functions. The services’ recruiting commands are similarly organized, in general, to accomplish the recruiting mission. Figure 1 illustrates the organization of the recruiting commands from the senior headquarters level through the recruiting station where frontline recruiters work to contact prospective applicants and sell them on military service. Each service has at least two levels of command between the senior headquarters and the recruiting station where frontline recruiters work to contact prospective applicants for military service. The Army Brigades, Navy and Marine Corps Regions, and Air Force Groups are subordinate commands of their service recruiting command and have responsibility for recruiting operations in large portions of the country. The Navy and Marine Corps organize their servicewide recruiting commands into Eastern and Western Regions that more or less divide responsibilities east and west of the Mississippi River. The Army, in comparison, has five Brigades and the Air Force has four Groups based regionally across the country that are responsible for their recruiting operations. These commands are further divided into local levels responsible for coordinating the frontline recruiting efforts. These 41 Army Battalions, 26 Navy and 6 Marine Corps Districts, and 28 Air Force Squadrons are generally organized around market demographics, including population density and geographic location. 
Finally, the 1,200 to 2,000 recruiting stations per service—or, in the case of the Marine Corps, the substations—represent that part of the recruiting organization with which the general public is most familiar. Of the approximately 22,000 total military recruiters in fiscal year 2006, almost 14,000 are frontline recruiters who are assigned a monthly recruiting goal. The recruiter’s monthly goal varies by service, but is generally 2 recruits per month. The remaining recruiters—roughly 8,000—hold supervisory and staff positions throughout the services’ recruiting commands. Table 1 provides a summary of the average number of recruiters by service for fiscal years 2002 through 2006 broken out by total number of recruiters and frontline recruiters who have a monthly recruiting goal. A typical frontline military recruiter is generally a midlevel enlisted noncommissioned officer in the rank of Army and Marine Corps Sergeant (E-5) or Staff Sergeant (E-6), Navy Petty Officer Second Class (E-5) or First Class (E-6), and Air Force Staff Sergeant (E-5) or Technical Sergeant (E-6), who is between the ages of 25 and 30 years old and has between 5 and 10 years of military service. While some frontline recruiters volunteer for recruiting as a career enhancement, others are selected from among those the services have identified as their best performers in their primary military specialties. All services have comprehensive selection processes in place and specific eligibility criteria for recruiting duty. For example, recruiters must meet service appearance standards, have a stable family situation, be able to speak without any impairment, and be financially responsible. The services screen all prospective recruiters by interviewing and conducting personality assessments and ensuring the prospective recruiters meet all criteria. 
To augment its uniformed recruiters, the Army also uses contract civilian recruiters, and has been doing so under legislative authority since fiscal year 2001. This pilot program, which authorizes the Army to use civilian contractors, will run through fiscal year 2007. The goal of the program is to test the effectiveness of civilian recruiters. If civilian recruiters prove effective, this would allow the Army to retain more noncommissioned officers in their primary military specialties within the warfighting force. Currently, the Army is using almost 370 contract civilian recruiters, representing approximately 3 percent of the Army’s total recruiting force. In general, training for frontline recruiters is similar in all services and has focused on ethics and salesmanship, with a growing emphasis placed on leadership and mentoring skills to attract today’s applicant. Each service conducts specialized training for approximately 6 weeks for noncommissioned officers assigned as recruiters. The number of hours of training time specifically devoted to ethics training as a component of the recruiter training curriculum ranges from 5 hours in the Navy to 34 hours of instruction in the Army. After recruiters successfully convince applicants of the benefits of joining the military, they complete a prescreening of the applicant, which includes an initial background review and a physical and moral assessment of the applicant’s eligibility for military service. After the recruiter’s prescreening, the military pays for the applicant to travel to 1 of 65 military entrance processing stations (MEPS) located throughout the country. 
At the processing stations, which are under the direction of DOD’s Military Entrance Processing Command, processing station staff administer the Armed Services Vocational Aptitude Battery, a test to determine whether the applicant is qualified for enlistment and a military job specialty, and conduct a medical examination to determine whether the applicant meets physical entrance standards. After the processing station staff determine that an applicant is qualified, the applicant signs an enlistment contract and is sworn into the service and enters the delayed entry program. When an applicant enters the delayed entry program, he or she becomes a member of the Individual Ready Reserve, in an unpaid status, until reporting for basic training. An individual may remain in the delayed entry program from 1 day up to 1 year. Just before reporting for basic training, the applicant returns to the processing station, undergoes a brief physical examination, and is sworn into the military. Figure 2, in general, illustrates the recruiting process from a recruiter’s initial contact with a prospective applicant to the applicant’s successful graduation from the service’s initial training school, commonly referred to as basic training. DOD and the services have limited visibility to determine the extent to which recruiter irregularities are occurring. The Office of the Under Secretary of Defense (OUSD) for Personnel and Readiness has the responsibility for overseeing the recruiting program. However, OUSD has not established a framework to conduct oversight of recruiter irregularities and provide guidance requiring the services to maintain data on recruiter wrongdoing. Although not required by OUSD to do so, the services require their recruiting commands to maintain data for 2 years; the Army Recruiting Command maintains data for 3 years and can retrieve case files back to fiscal year 1998. 
Furthermore, OUSD has not established criteria for the services to characterize recruiter irregularities or developed common terminology for irregularities. Accordingly, the services use different terminology, which makes it difficult to compare and analyze data across the services. Moreover, each of the services uses multiple systems that are not integrated to maintain data, as well as decentralized processes to identify and track allegations and service-identified incidents of recruiter irregularities. Perhaps most significantly, none of the services accounts for all allegations or incidents of recruiter irregularities. Therefore, service data likely underestimate the true number of recruiter irregularities. Nevertheless, our analysis of service data suggests that most allegations are not substantiated. Effective federal managers continually assess and evaluate their programs to provide accountability and to assure that they are well designed and operated, appropriately updated to meet changing conditions, and achieving program objectives. Specifically, managers need to examine internal control to determine how well it is performing, how it may be improved, and the degree to which it helps identify and address major risks for fraud, waste, abuse, and mismanagement. According to the mission statement for the Office of the Under Secretary of Defense for Personnel and Readiness, its responsibilities include reviewing and evaluating plans and programs to ensure adherence to approved policies and standards, including DOD’s recruitment program. OUSD officials stated that they review service recruiter irregularity issues infrequently, usually in response to a congressional inquiry, and they do not perform oversight of recruiter irregularities. OUSD has not issued guidance requiring the services to maintain data on recruiter irregularities. 
Nevertheless, the services require their recruiting commands to maintain data on recruiter irregularities for 2 years; the Army Recruiting Command maintains data for 3 years and can retrieve case files dating back to fiscal year 1998. Moreover, OUSD has not established or provided criteria to the services for how they should characterize various recruiter irregularities and has not developed common terminology because it responds to individual inquiries and, in general, uses the terminology of the service in question. Accordingly, the services use different terminology to refer to recruiter irregularities. How the services categorize the irregularity affects how they maintain data on recruiter irregularities. For example, the Army uses the term impropriety while the Navy, Marine Corps, and Air Force use the term malpractice to characterize the intentional enlistment of an unqualified applicant. Only the Army uses the term recruiter error to describe those irregularities not resulting from malicious intent or gross negligence. Consequently, if DOD were to require services to report on recruiter wrongdoing, the Army might not include its recruiter error category because these cases are not willful violations of recruiting policies and procedures and the Army does not identify such cases as substantiated or unsubstantiated in its data system. The Air Force uses the term procedural error to refer to an irregularity occurring as a result of an administrative error by the recruiter due to lack of knowledge or inattention to detail. If DOD were to require services to report on recruiter wrongdoing, the Air Force might not include its procedural error category because these cases are not intentional acts to facilitate the recruiting process for an ineligible applicant. 
In both cases, however, wasted taxpayer dollars result; unintentional recruiter errors can have the same effect as intentional recruiter irregularities because both result in inefficiencies in the recruiting process. DOD’s need for oversight may become more critical if the department decides to rely more heavily on civilian contract recruiters in the future. As we previously stated, the civilian recruiter pilot program currently authorizes the Army to use civilian recruiters, through fiscal year 2007, to test their effectiveness. Future reliance on civilian recruiters, in any service, would allow a service to retain more noncommissioned officers in their primary military specialties. However, OUSD would also need to be in a position to assure that this type of change is well designed and operated, and that its recruiting programs are appropriately updated to reflect a change in recruiting operations. None of the services can readily provide a comprehensive and consolidated report on recruiter irregularities within their own service because they use multiple systems that are not integrated. Currently, the services use systems that range from electronic databases to hard-copy paper files to track recruiter irregularities and do not have a central database dedicated to compiling, monitoring, and archiving information about recruiter irregularities. When we asked officials in each of the services for a comprehensive report of recruiter irregularities that occurred within their own service, they were unable to readily provide these data. Officials had to query and compile data from separate systems. For example, the Navy Recruiting Command had to access paper files for allegations of recruiter irregularities, while the Air Force Judge Advocate provided information from an electronic database from which we were able to extract cases specifically related to recruiter irregularities. 
Furthermore, the services cannot assure the reliability of their data because the services lack standardized procedures for recording data, their multiple systems use different formats for maintaining data, and in some instances the services do not conduct quality reviews or edit checks of the data. The services used the following systems to maintain data on recruiter irregularities at the time of our review:

Army: The Army maintains three separate data systems that contain information about recruiter irregularities. The Army Recruiting Command's Enlistment Standards Division has a database that houses recruiting irregularities that pertain to applicant eligibility. The Army Recruiting Command Inspector General maintains a separate database that houses other irregularities, including recruiter misconduct that may result in nonjudicial punishment. The Judge Advocate maintains hard-copy case files for recruiter irregularities that are criminal violations of the recruiting process that may result in judicial punishment.

Navy: The Navy maintains four separate data systems that contain information about recruiter irregularities. The Naval Inspector General, the Navy Bureau of Personnel Inspector General, and the Navy Recruiting Command Inspector General all maintain some data on allegations of recruiter irregularities. The Naval Criminal Investigative Service investigates and maintains data on Navy criminal recruiting violations.

Marine Corps: The Marine Corps Recruiting Command maintains two systems that track information on recruiting irregularities, one that captures reported allegations and another that only tracks the disposition of allegations and service-identified incidents that a commander or recruiting official at some level in the recruiting command structure determined to merit an inquiry or investigation. The Naval Criminal Investigative Service investigates and maintains data on Marine Corps criminal recruiting violations. 
Air Force: The Air Force maintains three separate databases with information about recruiter irregularities. The Air Force Recruiting Service Inspector General maintains a database that houses data on allegations of recruiter irregularities. The liaison from the Air Force Recruiting Service, located at the Air Force basic training site, maintains data within a separate electronic system on allegations of recruiter irregularities that applicants raise about their recruiters when they report to basic training. The Air Force Judge Advocate maintains a database containing criminal violations of recruiting practices and procedures.

At the time of our review, Navy officials told us they believe there is value in having servicewide visibility over the recruiting process and they plan to improve their systems for maintaining data on recruiter irregularities. Navy officials stated that the Navy Bureau of Personnel Inspector General is working with the Navy Recruiting Command Inspector General and the Naval Education and Training Command to develop a system that maintains recruiting and training data that will include allegations and service-identified incidents of recruiter irregularities. Marine Corps officials told us they are in the process of improving their systems for maintaining data on recruiter irregularities by merging all data on allegations and service-identified incidents of recruiter irregularities into one database that can be accessed at all command levels of the Marine Corps Recruiting Command. An Air Force official told us that as a result of our review, the Air Force modified its system for capturing allegations and service-identified incidents surfacing at basic training by improving its ability to query the system for information on the type of allegation or incident and whether or not it was a substantiated case of recruiter wrongdoing.

Where and how an irregularity is identified will often determine where and how it will be resolved. 
The services identify an allegation or incident of recruiter wrongdoing in a number of ways. These include input from service hotlines, internal inspections, congressional inquiries, and data collected by DOD's Military Entrance Processing Command. The services' recruiting command headquarters typically handle allegations and service-identified incidents of recruiter irregularities that surface through any of these means during the recruiting process. At other times, allegations surface in the recruiting process at command levels below the service recruiting command headquarters, and commanders at the Army Battalion, Navy and Marine Corps District, and Air Force Squadron level handle allegations that typically surface during supervisory reviews at the recruiting stations and substations. We were unable to determine the extent of these allegations, however, because the service recruiting commands do not maintain complete data. For example, Military Entrance Processing Command officials, responsible for assessing an applicant's moral, mental, and physical eligibility for military service, stated that they forward all allegations and service-identified incidents of recruiter irregularities that surface during the screening process at the military entrance processing station to the services' recruiting commanders. However, officials also stated that the services' recruiting commanders do not provide feedback to them regarding the disposition of these cases. In fact, the services' recruiting command headquarters data did not show records of allegations and service-identified incidents of recruiter irregularities received from the Military Entrance Processing Command. Additionally, each service provides applicants an opportunity to disclose any special circumstances relating to their enlistment process, including allegations of recruiter wrongdoing, when they enter basic training. 
Army and Air Force officials told us that they record all allegations of recruiter irregularities made by applicants at basic training. Army Recruiting Command officials stated that liaison officers at each of the basic training installations forward all allegations received from applicants to the Army Recruiting Command Enlisted Standards Division to record in its database. The Air Force implemented a new database in fiscal year 2005 specifically to record and resolve all allegations and service-identified incidents of recruiter wrongdoing that surface at basic training. The Navy and Marine Corps, on the other hand, do not record all allegations of recruiter irregularities made by applicants at basic training.

Navy: The Navy gives applicants a final opportunity to disclose any irregularity that they believe occurred in their recruiting process when they arrive at basic training. The Recruiting Command Inspector General has the authority to investigate allegations or service-identified incidents of recruiter wrongdoing and uses its Navy Recruit Quality Assurance Team to conduct the final Navy recruiting quality assurance check before applicants begin basic training. In turn, the Assurance Team generates reports on allegations raised by applicants who claim they were misled during the recruiting process and submits its reports to the Navy Recruiting Command Inspector General. Navy recruiting command officials explained that the Inspector General investigates those allegations that the Assurance Team, based on the professional judgment and experience of its team members, recommends for further investigation. The Navy Recruiting Command Inspector General, however, does not maintain data on allegations that it does not investigate. The Assurance Team also sends its reports to the Navy Recruiting District Commanders who are responsible for overseeing the recruiters who appear on the reports. 
The District Commanders use the Assurance Team's reports to monitor recruiter wrongdoing. Again, however, the District Commanders do not provide feedback to the Assurance Team as to how they resolve these allegations, nor do they report this information to the Navy Recruiting Command Inspector General unless they deem the case to merit further investigation or judicial processing. Moreover, the Assurance Team members do not record allegations of wrongdoing as a recruiter irregularity in those cases where they can easily resolve the discrepancy by granting an applicant an enlistment waiver to begin basic training. Assurance Team officials told us that they believe that some recruiters encourage applicants to conceal potentially disqualifying information until they arrive at basic training because the recruiters perceive that it is relatively easy to process a waiver at basic training. In addition, these same officials told us that this behavior saves recruiters the burden of collecting supporting documentation and expedites the time it takes a recruiter to sign a contract with an applicant and complete the recruiting process.

Marine Corps: The Marine Corps also gives applicants a final opportunity to disclose any irregularity that they believe occurred in their recruiting process prior to beginning basic training. However, the Marine Corps' Eastern and Western Recruiting Region staff use different criteria to handle allegations of recruiter irregularities that they cannot corroborate. Recruiting staff at the Eastern Region basic training site in Parris Island, South Carolina, enter all allegations applicants make against recruiters, while recruiting staff at the Western Region basic training site in San Diego, California, only enter those allegations that a third party can verify. 
A Marine Corps Recruiting Command official told us that, as a result of our review, Marine Corps officials discussed accounting procedures for allegations of recruiter irregularities at the command's national operations conference held in May 2006. The official further stated that the Marine Corps Recruiting Command's goal is to standardize procedures to account for all allegations of recruiter irregularities. Existing data suggest that substantiated cases of recruiter wrongdoing make up a small percentage of all allegations and service-identified incidents, although, for reasons previously cited, we believe the service data likely underestimate the true number of recruiter irregularities. Substantiated cases of recruiter irregularities are those cases in which the services determined a recruiter violated recruiting policies or procedures based on a review of the facts of the case. (The procedures in place to address substantiated cases of recruiter irregularities are discussed in more detail later in this report.) While the services cannot assure that they have a complete accounting of recruiter irregularities, the data that they reported to us are instructive in that they show the number of allegations, substantiated cases, and criminal violations increased overall from fiscal year 2004 to fiscal year 2005. At the same time, the number of accessions into the military decreased from just under 250,000 in fiscal year 2004 to about 215,000 in fiscal year 2005. Table 2 shows that, DOD-wide, the services substantiated about 10 percent of all allegations and service-identified incidents of recruiter irregularities. The services categorized cases as substantiated when the preponderance of the evidence supported the allegation of wrongdoing against a recruiter. Similarly, the services categorized cases as unsubstantiated when the preponderance of the evidence did not support the allegation against a recruiter. 
Table 3 shows the number of recruiter irregularities that were criminal violations of the recruiting process and addressed by the services’ Judge Advocate or criminal investigative service. The number of criminal violations in the recruiting process increased in fiscal year 2005; however, in both fiscal years, this number represented approximately 1 percent of all allegations and service-identified incidents of recruiter irregularities. The large increase in the number of Navy cases in fiscal year 2005 is likely a result of a special investigation where four cases led to nine additional cases of criminal wrongdoing. Table 4 shows that on average, the percentage of substantiated cases of recruiter wrongdoing compared to the number of actual accessions was under 1 percent in each service during the past 2 fiscal years. Table 5 shows that when we compared the number of substantiated cases of recruiter wrongdoing to the number of frontline recruiters, 4.7 percent of recruiters would have had a substantiated case against them in fiscal year 2005 if each recruiter who committed an irregularity had committed only one. (However, this is not to say that 4.7 percent of frontline recruiters committed an irregularity, given that some recruiters may have committed more than one irregularity). Without an oversight framework to provide complete and reliable data, DOD and the services are not in a position to gauge the extent of recruiter irregularities or when corrective action is needed, nor is the department in a sound position to give Congress and the general public assurance that recruiter irregularities are being addressed. A number of factors within the current recruiting environment may contribute to recruiting irregularities. Such factors include the economy, ongoing hostilities in Iraq, and fewer applicants who can meet military entrance standards. 
These factors, coupled with the typical difficulties of the job and pressure to meet monthly recruiting goals, challenge the recruiter and can lead to recruiter irregularities in the recruiting process. Data show that as the end of the monthly recruiting cycle draws near, the number of recruiter irregularities may increase. Among a number of factors that contribute to a challenging recruiting environment are the current economic situation and the ongoing hostilities in Iraq. Service recruiting officials told us that the state of the economy, specifically the low unemployment rate, has had the single largest effect recently on meeting recruiting goals. These officials stated DOD must compete harder for qualified talent to join the military when the economy is strong. According to U.S. Department of Labor, Bureau of Labor Statistics data, the national unemployment rate fell each year between 2003 (when it was at 6 percent) and 2005 (when it was 5.1 percent). In fiscal year 2005, three of the eight active and reserve components we reviewed—the Army, Army Reserve, and Navy Reserve—failed to meet their recruiting goals. Recruiters also believe that the ongoing hostilities in Iraq have made their job harder. Results of a DOD internal survey show that almost three-quarters of active duty recruiters agreed with the statement that current military operations made it hard for them to achieve recruiting goals and missions. Recruiters we interviewed expressed the same opinion. DOD has found that the public's perceptions about military enlistment have changed because youth and their parents believe that deployment to a hostile environment is very likely for servicemembers with some types of military specialties. Officials further stated that adults who influence a prospective applicant's decision about whether to join the military are increasingly fearful of the possibility of death or serious injury to the applicant. 
Recruiters also must overcome specific factors that routinely make their job hard. Recruiters told us that their work hours were dictated by the schedules of prospective high school applicants, which meant working most evenings and weekends. Almost three-quarters of active duty recruiters who responded to DOD's survey stated that they worked more than 60 hours a week on recruiting or recruiting-related duties. Other factors that affect the recruiting environment include a recruiter's location and access to eligible applicants. For example, service officials stated that it was easier to recruit in or near locations with a military presence. Recruiters also have difficulty finding eligible applicants. DOD researchers have estimated that over half of U.S. youth aged 16 to 21 are ineligible to join the military because they cannot meet DOD or service entry standards. DOD officials stated that the inability to meet medical and physical requirements accounts for much of the reason youth are ineligible for military service. Additionally, many youth are ineligible because they cannot meet service standards for education, as indicated by DOD's preference for recruits with a high school diploma; mental aptitude, as indicated by receipt of an acceptable score on the armed forces vocational aptitude test; and moral character, as indicated by few or no criminal convictions or antisocial behavior. All of these factors contribute to a difficult recruiting environment in which it is challenging for recruiters to succeed. Pressure to meet monthly goals contributes to recruiter dissatisfaction. Over 50 percent of active duty military recruiters responding to the 2005 internal DOD survey stated that they were dissatisfied with their jobs. Approximately two-thirds of Army recruiters reported that they were dissatisfied with recruiting, while over a third of Air Force recruiters stated they were dissatisfied. 
The Navy and Marine Corps rates of recruiter dissatisfaction fell within these extremes, with just under half of Navy and Marine Corps recruiters reporting that they were dissatisfied with their jobs. When asked in this same survey if they would select another assignment if they had the freedom to do so, over three-quarters of active duty DOD recruiters said they would not remain in recruiting. On the one hand, the services expect recruiters to recruit fully qualified personnel; while on the other hand, the services primarily evaluate recruiters’ performance on the number of contracts they write, which corresponds to the number of applicants who enter the delayed entry program each month. In 2005, over two-thirds of those active duty recruiters responding to the internal DOD survey believed that their success in making their monthly quota for enlistment contracts had a make-or-break effect on their military career. Over 80 percent of Marine Corps recruiters held that opinion, as did almost two-thirds of Army and over half of Air Force recruiters. Navy officials stated that individual recruiters are not tasked with a monthly goal; rather, the goal belongs to the recruiting station as a whole. Still, approximately two-thirds of Navy recruiters responding to DOD’s survey indicated they felt their careers were affected by their success in making their individual recruiting goal. The recruiters who we interviewed also believed their careers were affected by how successful they were in achieving monthly recruiting goals. Recruiters, like all servicemembers, receive performance evaluations at least once a year. Our review of service performance evaluations and conversations with the services’ recruiting command officials show that Army, Navy, and Air Force recruiter evaluations are not directly linked to an applicant successfully completing his or her service’s basic training course. 
Instead, we found that the Army, Navy, and Air Force generally evaluate recruiters on their ability to achieve their monthly goal to write contracts to bring applicants into the delayed entry program. The Army’s civilian contractor recruiters, for example, receive approximately 75 percent of their monetary compensation for recruiting an applicant when that applicant enters the delayed entry program and the remaining 25 percent of their compensation when the applicant begins basic training. The Army’s contract, therefore, does not tie compensation to the applicant’s successful completion of basic training and joining the Army. Even though Navy officials told us that recruiters do not have individual goals because the monthly mission is assigned to the recruiting station, Navy performance metrics include data on the number of contracts written. However, the Navy does not hold recruiters directly accountable for attrition rates from either the delayed entry program or basic training. Marine Corps recruiters, unlike recruiters in the other services, are held accountable when an applicant does not complete basic training and remain responsible for recruiting an additional applicant to replace the former basic trainee. Marine Corps recruiter evaluation performance standards measure both the number of contracts written each month as well as attrition rates of applicants from the delayed entry program and basic training. Marine Corps Recruiting Command officials stated that they believe their practice of holding recruiters accountable for attrition rates helps to limit irregularities because recruiters are likely to perform more rigorous prescreening of applicants to ensure that a recruit is likely to complete Marine Corps basic training. 
In fact, Military Entrance Processing Command data show that Marine Corps recruiters have been the most consistently successful of all service recruiters at prescreening and processing applicants through their initial physical assessments, subsequently maintaining applicants’ physical eligibility while in the delayed entry program, and finally ensuring that applicants pass the final physical assessment and enter basic training. Table 6 shows the low medical disqualification rate of the Marine Corps in comparison with the other services. In addition to performance evaluations, the services provide awards to recruiters that are generally based on the number of contracts that a recruiter writes, rather than on the number of applicants that graduate from basic training and join the military. We reported in 1998 that only the Marine Corps and the Navy used recruits’ basic training graduation rates as key criteria when evaluating recruiters for awards. Recruiters in some services and other service recruiting command officials stated their belief that recruiters who write large numbers of contracts over and above their monthly quota are almost always rewarded. Such rewards can include medals and trophies for recruiter of the month, quarter, or year; preferential duty stations for their next assignment; incentives such as paid vacations; and meritorious promotion to the next rank. When unqualified applicants are recruited or when applicants who lack eligibility documentation are processed through the military entrance processing station in the effort to satisfy end-of-month recruiting cycle goals, wasted taxpayer dollars result. For example, the Army spends approximately $17,000 to recruit and process one applicant, and as much as $57,500 to recruit and train that applicant through basic training. We continue to believe our 1997 and 1998 recommendations to the Secretary of Defense have merit. 
Specifically, we recommended that the Secretary of Defense require all the services to review and revise their recruiter performance evaluation and award systems to strengthen incentives for recruiters to thoroughly prescreen applicants and to more closely link recruiting quotas to applicants’ successful completion of basic training. The department concurred with our recommendations in order to enhance recruiter success and help recruiters focus on DOD’s strategic retention goal, and it indicated that the Secretary of Defense would instruct the services to link recruiter awards more closely to recruits’ successful completion of basic training. Our review shows that the Army, Navy, and Air Force have not implemented this recommendation. DOD Military Entrance Processing Command officials told us that they believe data from the Chicago military entrance processing station for the first 6 months of fiscal year 2006 indicate that it may be possible to anticipate when irregularities may occur. While service data show that the numbers of irregularities that occur in the recruiting process are relatively small when compared with the total number of applicants that access into the military, the Chicago station data suggest that recruiter irregularities increase as the end of the monthly recruiting cycle nears and recruiting goals are tallied. The end-of-month recruiting cycle for the Army occurs midmonth and data from DOD’s Chicago processing station show that irregularities peaked at the midmonth point. Figure 3 illustrates the increase in recruiter irregularities that occurred at the Chicago station at the end of the Army’s monthly recruiting cycle. We present Army data because the Chicago station processes more applicants for the Army than it does for the other services. However, Chicago station data show similar results for the Navy, Marines, and Air Force. When we asked U.S. 
Military Entrance Processing Command officials for data from the other stations, they said that the other stations did not maintain these data and that this data collection effort was the initiative of the Chicago station commander. We believe these data can be instructive and inform recruiting command officials whether monthly goals have an adverse effect on recruiter behavior, and if so, whether actions to address increases in irregularities near the end of the monthly recruiting cycle may be necessary. The services have standard procedures in place, provided in the Uniform Code of Military Justice and service regulations, to investigate allegations and service-identified incidents of recruiter irregularities and to prosecute and discipline recruiters found guilty of violating recruiting policies and procedures. Each service recruiting command has a designated investigative authority to handle allegations of irregularities, and the services' respective Judge Advocates have primary responsibility for adjudicating criminal violations of the recruitment process. Moreover, each service has mechanisms by which to update its recruiter training as a result of information on recruiter irregularities. As previously discussed, the services identify allegations and service-identified incidents of recruiter wrongdoing in a number of ways. Allegations made or discovered at the Army Battalion, Navy and Marine Corps District, and Air Force Squadron command level are generally resolved by that commander using administrative actions and nonjudicial punishment under authority granted by the Uniform Code of Military Justice. 
The commander forwards allegations and service-identified incidents of recruiter irregularities arising at that level that he or she deems sufficiently egregious to require further investigation, or as service regulations require, to the service recruiting command or to the Judge Advocate for judicial processing of possible criminal violations in the recruitment process. Commanders in the service recruiting commands, like all commanders throughout the military, exercise discretion in deciding whether a servicemember should be charged with an offense, just as prosecutors do in the civilian justice system. Army Battalion, Navy and Marine Corps District, and Air Force Squadron commanders initiate a preliminary inquiry into allegations of wrongdoing against recruiters after receiving a report of a possible recruiter irregularity. When the preliminary inquiry is complete, the commander must make a decision on how to resolve the case. The commander can decide that no action is warranted or take administrative action, such as a reprimand or counseling. The commander can also decide to pursue nonjudicial punishment under Article 15 of the Uniform Code of Military Justice, or refer the case to trial and decide what charges will be brought against the recruiter. Limitations in data we previously discussed prevent a thorough review of how services discipline recruiters found guilty of violating recruiting policies and procedures. In addition, we found that in some cases, the services did not document the disciplinary action a commander took against a recruiter. Even though service data are not complete, data the Army provided allow us to illustrate the range of disciplinary actions commanders may take to resolve cases of recruiter irregularities. These actions range from counseling a recruiter for an irregularity up to discharge from the Army. 
For example, in fiscal year 2005, Army data show that commanders imposed disciplinary actions ranging from a verbal reprimand to court-martial for recruiters who concealed an applicant's medical information. Service recruiting officials stated that the range of possible disciplinary actions a commander may impose is mitigated by the circumstances of each case, including the recruiter's overall service record, duty performance, and number of irregularities the recruiter may have previously committed. Table 7 summarizes disciplinary actions taken against Army recruiters in the past 2 fiscal years for specific kinds of irregularities. All of the services have mechanisms for updating their recruiter training as a result of information on recruiter irregularities. These mechanisms include internal inspection programs and routine recruiter discipline reports. The services also take action to restore public confidence in the recruiting process when specific incidents or reports of recruiter irregularities become widely known. Each service recruiting command assesses and evaluates how recruiting policies and procedures are being followed and uses the results to focus training at the Army Battalion, Navy and Marine Corps District, and Air Force Squadron command level. For example, the Navy Recruiting Command's National Inspection Team conducts unannounced inspections at the Navy recruiting districts and forwards the results of the inspection to the Navy Recruiting Command headquarters. The Navy Recruiting Command's National Training Team follows up by conducting refresher training at the recruiting station locations or in the subject areas where the National Training Team identified discrepancies. The Marine Corps' National Training Team also conducts periodic inspections and training based on the results of its inspections. Additionally, the Marine Corps National Training Team provides input and guidance to the Marine Corps recruiter school course curriculum. 
The Air Force Recruiting Command Judge Advocate distributes quarterly recruiter discipline reports to heighten awareness of wrongdoing and encourage proper recruiter behavior. In addition, these reports are used to show examples of wrongdoing during new recruiter training. The Army Recruiting Command conducted commandwide refresher training on May 20, 2005, in response to a series of press reports of recruiters using inappropriate tactics in their attempts to enlist new servicemembers. The Army stated that the training goal was to reinforce that recruiting operations must be conducted within the rules and regulations and in accordance with Army values. Military recruiters represent the first point of contact between potential servicemembers and those who influence them—their parents, coaches, teachers, and other family members. Consequently, a recruiter's actions can be far-reaching. Although existing data suggest that the overwhelming majority of recruiters are not committing irregularities and that irregularities are not widespread, even one incident of recruiter wrongdoing can erode public confidence in DOD's recruiting process. Existing data show, in fact, that allegations and service-identified incidents of recruiter wrongdoing increased between fiscal years 2004 and 2005. DOD, however, is not in a position to answer questions about these allegations and service-identified incidents because it does not know the true extent to which the services are tracking recruiter irregularities or addressing them. Moreover, DOD is unable to compile a comprehensive and consolidated report because the services do not use consistent terminology regarding recruiter irregularities. Individual service systems are not integrated, processes are decentralized, and many allegations are undocumented. 
Although DOD officials can point to external factors, such as a strong economy and current military operations in Iraq, as recruiting challenges, data suggest that internal requirements to meet monthly recruiting goals may also contribute to recruiter irregularities. Having readily available, complete, and consistent data from the services would place DOD in a better position to know the nature and extent of recruiter irregularities and identify when corrective action is needed. To improve DOD’s visibility over recruiter irregularities, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to establish an oversight framework to assess recruiter irregularities and provide overall guidance to the services. To assist in developing its oversight framework, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following three actions:
- Establish criteria and common definitions across the services for maintaining data on allegations of recruiter irregularities.
- Establish a reporting requirement across the services to help ensure a full accounting of all allegations and service-identified incidents of recruiter irregularities.
- Direct the services to develop internal systems and processes that better capture and integrate data on allegations and service-identified incidents of recruiter irregularities.
To assist DOD in developing a complete accounting of recruiter irregularities, we further recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to direct the commander of DOD’s Military Entrance Processing Command to track and report allegations and service-identified incidents of recruiter irregularities to the Office of the Under Secretary of Defense for Personnel and Readiness. 
Such analysis would include irregularities by service and the time during the monthly recruiting cycle when the irregularities occur. In written comments on a draft of this report, DOD concurred with three of our recommendations that address the need for an effective oversight management framework to improve DOD's visibility over recruiter irregularities. While DOD partially concurred with our recommendation to establish a reporting requirement across the services and did not concur with our recommendation for the Military Entrance Processing Command to provide OSD with data on recruiter irregularities, the department did not disagree with the substance of these recommendations. Rather, DOD indicated that it would implement these recommendations if it determined such requirements were necessary. DOD's comments are included in this report as appendix II. DOD concurred with our recommendations to establish an oversight framework to assess recruiter irregularities and provide overall guidance to the services; to establish criteria and common definitions across the services for maintaining data on recruiter irregularities; and for the services to develop internal systems and processes that better capture and integrate data on recruiter irregularities. DOD partially concurred with our recommendation to establish a reporting requirement across the services to help ensure a full accounting of recruiter irregularities, but agreed that some type of reporting requirement should be established. The department believes that implementing this recommendation may be premature until it has established an overarching management framework that uses consistent terms for recruiter irregularities, and that the reporting requirement and its frequency should be left to the judgment of the Office of the Under Secretary of Defense for Personnel and Readiness. 
DOD stated its intent to establish an initial reporting requirement to ensure the processes it develops are functioning as planned and to use this period to assess the severity of recruiter irregularity issues. DOD further stated that regardless of whether it establishes a fixed reporting requirement, the services will be required to maintain data on recruiter irregularities in a format that would facilitate timely and accurate reports upon request. We do not believe it would be premature to establish a reporting requirement at this time. As we stated in our report, data that the services reported to us show that the number of allegations, substantiated cases, and criminal violations all increased from fiscal year 2004 to fiscal year 2005. Without a reporting requirement, we believe it would be difficult for OUSD to identify trends in recruiter irregularities and determine if corrective action is needed. Accordingly, we continue to believe that a reporting requirement for the services would help the Office of the Under Secretary of Defense for Personnel and Readiness carry out its responsibilities to review DOD's recruitment program to ensure adherence to approved policies and standards. The department did not concur with our recommendation for DOD's Military Entrance Processing Command to track and report allegations and incidents of recruiter irregularities to OUSD because it believed this reporting would duplicate service reporting, and added that we had stated that recruiter irregularities are not widespread. However, DOD acknowledged, as our report points out, that even one incident of recruiter wrongdoing can erode public confidence in the recruiting process and agreed to consider this recommendation at a later date if it determines that recruiter irregularities are a significant problem and further analyses are required. 
While we did conclude from the data the services provided to us that recruiter wrongdoing did not appear to be widespread, we also stated our belief that service data likely underestimate the true number of recruiter irregularities, and further concluded that DOD is not in a position to answer questions about these allegations and service-identified incidents because it does not know the full extent to which the services are tracking recruiter irregularities or addressing them. We believe, therefore, that the significance of recruiter irregularities is not fully understood, and that addressing this recommendation should not be delayed. As we reported, Military Entrance Processing Command officials told us that they forward all allegations and service-identified incidents of recruiter irregularities that surface during the screening process at the military entrance processing stations to the services’ recruiting commands. We found, however, that the services’ recruiting command headquarters data do not show records of allegations and service-identified incidents of recruiter irregularities received from the Military Entrance Processing Command. Data currently captured by the Military Entrance Processing Command would be instructive, particularly because these data show an increase in irregularities as Army recruiters approach the end of their monthly recruiting cycle, and we believe that these data would further inform DOD about the effectiveness of the oversight management framework it has agreed to establish. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to interested congressional members; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions regarding this report, please contact me at (202) 512-5559 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. To conduct our work, we examined Department of Defense (DOD) and military services’ policies, regulations, orders, and instructions that govern the recruitment of military servicemembers and the investigation and resolution of allegations and service-identified incidents of recruiter wrongdoing. We also reviewed recruiting-related reports issued by GAO, DOD, and the services. We analyzed data on allegations and service-identified incidents of recruiter irregularities from the active and reserve components of the Army, Navy, Marine Corps, and Air Force databases, reports, and individual paper files. Additionally, we interviewed individuals at several DOD and service offices and recruiters in each service, and visited a number of recruiting and recruiting-related commands. In the course of our work, we contacted and visited the organizations and offices listed in table 8. To assess the extent to which DOD and the services have visibility over recruiter irregularities, we examined DOD and service policies, procedures, regulations, and instructions related to recruiting. In addition, we interviewed officials in the Office of the Under Secretary of Defense for Personnel and Readiness and the services’ recruiting officials and Inspectors General to obtain an understanding of various aspects of the data DOD and the services collect on allegations and service-identified incidents of recruiting irregularities. 
We obtained data on recruiter irregularities from service recruiting commands’ Inspectors General or other designated recruiting command offices, the Headquarters Air Force Recruiting Service Basic Training Inspector General Liaison, the Naval Criminal Investigative Service, and the recruiting commands’ Staff Judge Advocates. Specifically, within each service, we analyzed fiscal years 2004 and 2005 data. For the Army, we obtained data on allegations and service-identified incidents of recruiter irregularities from its Recruiting Improprieties All Years database. We also obtained data on recruiting irregularities that were processed as criminal violations from the Army Recruiting Command Judge Advocate’s paper files. For the Navy, we obtained data on allegations and service-identified incidents of recruiter irregularities from the Naval Inspector General’s Case Management Information System, the Navy Bureau of Personnel Inspector General, the Navy Recruiting Command Inspector General’s paper files, and the Navy Recruiting Quality Assurance Team. We also obtained data on Navy recruiter criminal violations from the Naval Criminal Investigative Service. For the Marine Corps, we obtained data on allegations and service-identified incidents of recruiter irregularities from its Marine Corps Recruiting Information Support System. We also obtained data on recruiter criminal violations from the Naval Criminal Investigative Service data system. For the Air Force, we obtained data on allegations and service-identified incidents of recruiter irregularities from its Automated Case Tracking System and Trainee Tracking System, and data on criminal violations from its Automated Military Justice Administrative Management System. We also obtained data from the Air Force Reserve Command Recruiting Service’s Headquarters Queries database. 
To identify the factors within the current recruiting environment that may contribute to recruiting irregularities, we reviewed prior GAO work, Congressional Research Service reports addressing the recruiting environment, and the 2005 DOD Recruiter Quality of Life Survey Topline Report. We reviewed the sampling and estimation documentation for this survey and determined that it conforms to commonly accepted statistical methods for probability samples; the response rate for the DOD internal survey was 46 percent. Because DOD did not conduct a nonresponse bias analysis, we cannot determine whether estimates from this survey may be affected by nonresponse bias. Such bias might arise if nonrespondents’ answers to survey items would have been systematically different from those of respondents. We reviewed service policies and processes governing recruiter selection, training, and performance evaluation, and interviewed key service officials about the types of challenges that exist in the recruiting environment and the methods used to evaluate recruiter performance. Additionally, we gathered and analyzed statistical information from the Department of Labor and reviewed Military Entrance Processing Command data on the frequency and occurrence of applicant disqualifications by service and reports on recruiter irregularities. Finally, we interviewed officials at the U.S. Military Entrance Processing Command and two military entrance processing stations regarding recruiter irregularities. To identify what procedures DOD and the services have in place to address individuals involved in recruiting irregularities, we examined service case data and spoke with service recruiting command officials to determine how services imposed disciplinary action and what, if any, other actions they took to mitigate wrongdoing in the recruiting process. 
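The nonresponse-bias caveat above can be illustrated with a short numeric sketch. The 46 percent response rate is the figure reported for the DOD survey; the agreement shares below are hypothetical values chosen only to show how a respondent-only estimate can diverge from the true population value when nonrespondents would have answered differently.

```python
# Illustrative sketch of nonresponse bias. The 46 percent response rate matches
# the rate reported for the DOD survey; the agreement shares are hypothetical,
# not actual survey results.

def true_population_value(p_resp, p_nonresp, response_rate):
    # Weighted average of respondents' and nonrespondents' answers.
    return response_rate * p_resp + (1 - response_rate) * p_nonresp

response_rate = 0.46   # reported DOD survey response rate
p_resp = 0.75          # hypothetical: 75% of respondents agree with an item
p_nonresp = 0.55       # hypothetical: only 55% of nonrespondents would agree

estimate = p_resp      # a respondent-only estimate ignores nonrespondents
truth = true_population_value(p_resp, p_nonresp, response_rate)

print(f"respondent-only estimate: {estimate:.3f}")           # 0.750
print(f"true population value:    {truth:.3f}")              # 0.642
print(f"nonresponse bias:         {estimate - truth:+.3f}")  # +0.108
```

Under these assumed values, the respondent-only estimate overstates the population share by about 11 percentage points; a nonresponse bias analysis, which DOD did not conduct, is what would quantify whether any such distortion exists in the actual survey.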
For each service, we obtained data on disciplinary actions imposed for cases of recruiter irregularities but specifically examined and analyzed Army data as they appeared to be the most comprehensive. We present these data for fiscal years 2004 and 2005. We also reviewed service regulations and the Uniform Code of Military Justice to understand departmentwide standards and the authorities that are granted to commanders to administer military justice. Finally, we reviewed service training materials and spoke with service recruiting command officials to identify other ways services use information on recruiter wrongdoing to try to mitigate errors and irregularities in the recruiting process. To assess the reliability of the services’ data on allegations and service-identified incidents of recruiter irregularities, we interviewed officials about the processes used to capture data on recruiter irregularities, the controls over those processes, and the data systems used; and we reviewed documentation related to those systems. Based on responses to our questions, follow-up discussions, and the documentation we reviewed, we found limitations in many service data systems, including reliance on paper files; databases that cannot be fully queried, if they can be queried at all; and in some cases, lack of edit checks and data quality reviews. Although we identified weaknesses in the available data, we determined, for the purposes of this report, that the data were reliable for providing limited information on recruiter irregularities. In addition to those named above, David E. Moser, Assistant Director, Grace A. Coleman, Tanya Cruz, Nicole Gore, Gregg J. Justice III, Mitchell B. Karpman, Warren Lowman, Julia C. Matta, Charles W. Purdue, and Shana Wallace made key contributions to this report.
The viability of the All Volunteer Force depends, in large measure, on the Department of Defense's (DOD) ability to recruit several hundred thousand individuals each year. Since the involvement of U.S. military forces in Iraq in March 2003, several DOD components have been challenged in meeting their recruiting goals. In fiscal year 2005 alone, three of the eight active and reserve components missed their goals. Some recruiters reportedly have resorted to overly aggressive tactics, which can adversely affect DOD's ability to recruit and erode public confidence in the recruiting process. GAO was asked to address the extent to which DOD and the services have visibility over recruiter irregularities; what factors may contribute to recruiter irregularities; and what procedures are in place to address them. GAO performed its work primarily at the service recruiting commands and DOD's Military Entrance Processing Command; examined recruiting policies, regulations, and directives; and analyzed service data on recruiter irregularities. DOD and the services have limited visibility over the extent to which recruiter irregularities are occurring. DOD, for example, has not established an oversight framework that includes guidance requiring the services to maintain and report data on recruiter irregularities and criteria for characterizing irregularities and establishing common terminology. The absence of guidance and criteria makes it difficult to compare and analyze data across services and limits DOD's ability to determine when corrective action is needed. Effective federal managers continually assess and evaluate their programs to provide accountability and assurance that program objectives are being achieved. Additionally, the services do not track all allegations of recruiter wrongdoing. Accordingly, service data likely underestimate the true number of recruiter irregularities. 
Nevertheless, available service data show that between fiscal years 2004 and 2005, allegations and service-identified incidents of recruiter wrongdoing increased, collectively, from 4,400 cases to 6,500 cases; substantiated cases increased from just over 400 to almost 630 cases; and criminal violations more than doubled from just over 30 to almost 70 cases. The department, however, is not in a sound position to assure Congress and the general public that it knows the full extent to which recruiter irregularities are occurring. A number of factors within the recruiting environment may contribute to irregularities. Service recruiting officials stated that the economy has been the most important factor affecting recruiting success. Almost three-quarters of active duty recruiters responding to DOD's internal survey also believed that ongoing hostilities in Iraq made it hard to achieve their goals. These factors, in addition to the typical challenges of the job, such as demanding work hours and pressure to meet monthly goals, may lead to recruiter irregularities. The recruiters' performance evaluation and reward systems are generally based on the number of contracts they write for applicants to enter the military. The Marine Corps is the only service that uses basic training attrition rates as a key component of the recruiter's evaluation. GAO previously recommended that the services link recruiter awards and incentives more closely to applicants' successful completion of basic training. DOD concurred with GAO's recommendation, but has not made this a requirement across the services. The services have standard procedures in place, provided in the Uniform Code of Military Justice and service regulations, to investigate allegations of recruiter irregularities and to prosecute and discipline recruiters found guilty of violating recruiting policies and procedures. 
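The magnitude of these year-over-year increases can be made explicit with a quick calculation on the rounded counts cited above (a sketch using the approximate figures as reported, not exact case data):

```python
# Year-over-year change in the rounded fiscal year 2004 -> 2005 counts
# cited in this report (approximate figures, not exact case data).
counts = {
    "allegations and service-identified incidents": (4400, 6500),
    "substantiated cases": (400, 630),
    "criminal violations": (30, 70),
}

for label, (fy04, fy05) in counts.items():
    pct_change = (fy05 - fy04) / fy04 * 100
    print(f"{label}: {fy04} -> {fy05} ({pct_change:+.1f}%)")
```

Seen this way, allegations and service-identified incidents rose roughly 48 percent, substantiated cases roughly 58 percent, and criminal violations more than doubled, consistent with the report's characterization of the trend.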
In addition, to help recruiters better understand the nature and consequences of committing irregularities in the recruitment process, all services use available information on recruiter wrongdoing to update their training.
USDA must ensure that its programs are being implemented efficiently and services are being delivered effectively. To do so, USDA must review the progress it has made in achieving program goals and developing strategies to address any gaps in performance and accountability. To help USDA, and other agencies, address the challenge of improving the performance and accountability of their programs, GPRAMA creates several new leadership structures and responsibilities aimed at sustaining attention on improvement efforts. For example, the act designates the deputy head of each agency as Chief Operating Officer (COO), who has overall responsibility for improving the performance and management of the agency. The act also requires each agency to designate a senior executive as Performance Improvement Officer (PIO) to support the COO. USDA, along with other agencies, is to continue to develop annual performance goals that will lead to the accomplishment of its strategic goals. In addition, the head of each agency must now identify priority goals. These goals must (1) reflect the priorities of the agency and be informed by the federal government’s priority goals and consultations with Congress; (2) have ambitious targets that can be achieved within 2 years; (3) have a goal leader responsible for achieving each goal; and (4) have quarterly performance targets and milestones. In addition, at least quarterly, the agency head, COO, and PIO are to coordinate with relevant personnel who contribute to achieving the goal, from within and outside the agency; assess whether relevant organizations, program activities, regulations, policies, and other activities are contributing as planned to achieving the goal; categorize goals by their risk of not being achieved; and, for those at greatest risk, identify strategies to improve performance. 
Our recent work has identified challenges to be met in improving program performance and accountability in the Forest Service and domestic food assistance programs. Forest Service. In March 2011, we testified that the Forest Service had not fully resolved performance accountability concerns that we raised in a 2009 testimony. As we noted, the agency’s long-standing performance accountability problems include an inability to link planning, budgeting, and results reporting. In other words, the Forest Service could not meaningfully compare its cost information with its performance measures. We also testified that while the Forest Service, along with Interior agencies that have responsibilities for fighting wildland fires, had taken steps to help contain wildland fire costs, they had not yet clearly defined their cost-containment goals or developed a strategy for achieving these goals—steps we first recommended in 2007. Agency officials identified several agency documents that they stated clearly define goals and objectives and that make up their strategy to contain costs. However, these documents lacked the clarity and specificity needed by officials in the field to help manage and contain wildland fire costs. We therefore continue to believe that the Forest Service will be challenged in managing its cost containment efforts and in improving its ability to contain wildland fire costs until the agency clearly defines its cost-containment goals and strategy for achieving them. Domestic food assistance programs. Our work on domestic food assistance programs—an area where three federal agencies administer 18 programs, consisting of more than $90 billion in spending in fiscal year 2010—suggests not enough is known about the effectiveness of these programs. 
Research we reviewed suggests that participation in seven of the USDA food assistance programs we examined, including four of the five largest—Special Supplemental Nutrition Program for Women, Infants, and Children; the National School Lunch Program; the School Breakfast Program; and the Supplemental Nutrition Assistance Program—is associated with positive health and nutrition outcomes consistent with the programs’ goals. These goals include raising the level of nutrition among low-income households, safeguarding the health and well-being of the nation’s children, and strengthening the agriculture economy. However, little is known about the effectiveness of the remaining 11 programs—9 of which are USDA programs— because they have not been well studied. GAO suggested that USDA consider which of the lesser-studied programs need further research, and USDA agreed to consider the value of examining potential inefficiencies and overlap among smaller programs. USDA must effectively coordinate with many groups within and outside of the agency to achieve its missions. GPRAMA establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving performance—within agencies and across the federal government. At the governmentwide level, the act requires the Director of the Office of Management and Budget (OMB), in coordination with executive branch agencies, to develop—every 4 years—long-term, outcome-oriented goals for a limited number of crosscutting policy areas. On an annual basis, the Director of OMB is to provide information on how these long-term goals will be achieved, and agencies are to describe how they are working with each other to achieve the crosscutting goals. Additional GPRAMA requirements could lead to improved coordination and collaboration for achieving agency-level goals as well. 
For example, the act requires each agency to identify the various organizations and program activities—within and external to the agency—that contribute to each of its goals. Also, as described earlier, GPRAMA requires top leadership and program officials to be involved in quarterly reviews, and to assess whether these organizations and program activities are contributing as planned to the agency’s priority goals. Based on our prior work, we have identified the following examples that illustrate how improving coordination within USDA or across agencies has contributed or could contribute to the improved performance of USDA programs. Farm program agencies. In September 2005, we reported on the need for improved coordination, including information-sharing and communication, between the Risk Management Agency (RMA) and Farm Service Agency (FSA). Under USDA guidance, RMA is to provide FSA with a list of farmers who have had anomalous crop insurance losses or who are suspected of poor farming practices. Staff in FSA county offices review these cases for potential fraud, waste, and abuse by inspecting the farmers’ fields and then referring the results of these inspections to RMA. However, we found FSA conducted about 64 percent of the inspections RMA requested, and FSA offices in nine states did not conduct any of the field inspections RMA requested in 1 or more of the years in our review. We also found that FSA may not be as effective as possible in conducting field inspections because RMA does not share with FSA information on the nature of anomalous crop insurance losses and suspected poor farming practices, or the results of follow-up inspections. In addition, FSA state officials told us that inspectors are reluctant to conduct field inspections because they believe RMA and insurance companies that administer the crop insurance program do not use the information to deny claims for farmers who do not employ good farming practices. 
In view of these weaknesses, we made a number of recommendations to RMA and FSA to improve the effectiveness of field inspections. In response, RMA implemented most of our recommendations, but FSA stated that it does not have sufficient resources to complete all field inspections. We expect to report in fiscal year 2012 on the results of our work currently under way in this area examining whether coordination between the agencies has improved. Veterinarian workforce. Our past work has indicated that problems with USDA’s management of its veterinarian workforce have contributed to competition among USDA agencies for these staff. Veterinarians play a vital role in the defense against animal diseases— whether naturally or intentionally introduced—and these diseases can have serious repercussions for the health of animals and humans, and for the nation’s economy. However, there is a growing shortage of veterinarians nationwide—particularly those veterinarians who care for animals raised for food, serve in rural communities, and are trained in public health. We reported in February 2009 that this shortage has the potential to place human health, the economy, and the nation’s food supply at risk. Specifically, we found that USDA had not assessed the sufficiency of its veterinarian workforce departmentwide, despite the fact that its agencies that employed mission-critical veterinarians were currently experiencing shortages or anticipating shortages in the future. As a result, USDA agencies competed against one another for veterinarians instead of following a departmentwide strategy to balance the needs of these agencies. In particular, the Animal and Plant Health Inspection Service (APHIS) was attracting veterinarians away from the Food Safety Inspection Service because the work at APHIS was more appealing, opportunities for advancement were greater, and the salaries were higher. 
Moreover, USDA was not fully aware of the status of its veterinarian workforce at its agencies and, therefore, could not strategically plan for future veterinarian needs. We recommended, among other things, that USDA conduct an assessment of its veterinarian workforce to identify current and future workforce needs while also taking into consideration training and employee development needs and that a governmentwide approach be considered to address shortcomings. In response, the Office of Personnel Management—whose mission is to ensure the federal government has an effective civilian workforce—and relevant federal agencies, including USDA, created an interagency forum and developed a strategic workforce plan to obtain a governmentwide understanding of the current status and future needs of the federal veterinarian workforce. This is a positive step, but more work remains. For example, USDA still needs to complete a departmentwide assessment of its veterinarian workforce and create shared solutions to agency problems, which according to a senior agency official, it plans to do by the end of July 2011. Moreover, steps are still necessary to understand the veterinarian workforce needed during a potential catastrophic event—whether a pandemic or an attack on the food supply. Rural economic development. Our past work indicates that in failing to find ways to collaborate more, USDA and the Small Business Administration (SBA) are missing opportunities to leverage each other’s unique strengths to more effectively promote rural economic development and that they may fail to use taxpayer dollars in the most efficient manner. For example, we reported in September 2008 that the main causes for limited agency collaboration between these agencies include few incentives to collaborate and an absence of reliable guidance on consistent and effective collaboration. 
We found that SBA and USDA appear to have taken actions to implement some collaborative practices, such as defining and articulating common outcomes, for some of their related programs. However, the agencies have offered little evidence so far that they have taken steps to develop compatible policies or procedures or to search for opportunities to leverage physical and administrative resources with their federal partners. Moreover, we found that most of the collaborative efforts performed by program staff in the field that we have been able to assess to date have occurred only on a case-by-case basis. As a result, it appears that USDA and SBA do not consistently monitor or evaluate these collaborative efforts in a way that allows them to identify areas for improvement. Genetically engineered (GE) crops. GE crops—crops that are engineered to resist pests or tolerate herbicides—are widespread in the United States and around the world. USDA, the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA) regulate GE crops to ensure that they are safe. However, critics of GE crops want them to be labeled as GE crops and kept separate from non-GE crops. Unauthorized releases of GE crops into food, animal feed, or the environment beyond farm fields have occurred, and it is likely that such incidents will occur again. As we reported in November 2008, USDA, EPA, and FDA routinely coordinate their oversight and regulation of GE crops in many respects but could improve their efforts. For example, the agencies do not have a coordinated program for monitoring the use of marketed GE crops to determine whether the spread of genetic traits is causing undesirable effects on the environment, non-GE segments of agriculture, or food safety—actions that the National Research Council and others have recommended. 
To help ensure that unintended consequences arising from the marketing of GE crops are detected and minimized, we recommended in 2008 that the agencies develop a coordinated strategy for monitoring marketed GE crops and use the results to inform their oversight of these crops. Such a strategy should adopt a risk-based approach to identify the types of marketed GE crops that warrant monitoring, such as those with the greatest potential for affecting the environment or non-GE segments of agriculture or those that might threaten food safety through the unintentional introduction of pharmaceutical or industrial compounds into the food supply. The strategy should also identify criteria for determining when monitoring is no longer needed. To date, the agencies have not implemented this recommendation. USDA must have sufficient internal management capacity to effectively and efficiently fulfill its multiple missions. As part of the new governmentwide framework created by GPRAMA, the Director of OMB is required to develop long-term goals to improve management functions across the government in various areas, and agencies, including USDA, are required to describe how their efforts contribute to these goals. Among these areas are (1) financial management, (2) human capital management, and (3) information technology. The following are examples, drawn from our work, of USDA programs where improvements are needed in these areas: Financial management. We reported in March 2011 that improper payment estimates for USDA have increased by about $1 billion—from approximately $4 billion to a little more than $5 billion from 2009 to 2010. Some USDA programs or activities experienced increases in improper payments, while others decreased. For example, the level of estimated improper payments associated with the Federal Crop Insurance Corporation Program Fund more than doubled during this time, from $205 million in fiscal year 2009 to $525 million in fiscal year 2010. 
On the other hand, in April 2011, we noted that USDA reported a decrease in estimated improper payments for the Marketing Assistance Loan program from $85 million to $35 million. USDA reported that corrective actions taken to reduce improper payments included providing additional training and instruction on improper payment control procedures, and integrating employees’ individual performance results related to reducing improper payments into their annual performance ratings. Our past work has also highlighted other areas where USDA needs to strengthen management controls to prevent improper payments. For example, we reported in October 2008 that USDA provided farm program payments to thousands of individuals with incomes exceeding income eligibility caps. We recommended that USDA work with the Internal Revenue Service to develop a system for verifying the income eligibility for all recipients of farm program payments, which the agencies subsequently did. We also reported in July 2007 that USDA paid $1.1 billion in such payments to more than 170,000 deceased individuals during the period 1999 through 2005. Because USDA generally was unaware that these individuals were deceased, it did not have assurance that these payments were proper. We made recommendations to address this problem and, in response, USDA revised and strengthened guidance to its field offices for reviewing the eligibility of these individuals’ estates to continue to receive payments. The agency also completed implementation of a data-matching and review process between its payment files and the Social Security Administration’s master file of deceased individuals to identify program payment recipients who are deceased. Human capital. We have also reported on issues related to problems in USDA’s civil rights program. For decades, there have been allegations of discrimination in USDA programs and in its workforce. 
Numerous federal reports have described serious weaknesses in USDA’s civil rights program—particularly in resolving discrimination complaints and in providing minority farmers with access to programs. In 2002, Congress authorized the position of Assistant Secretary for Civil Rights at USDA to provide leadership for resolving these long-standing problems. In October 2008, we reported that the Office of the Assistant Secretary for Civil Rights had not achieved its goal of preventing backlogs of complaints and that this goal was undermined by the office’s faulty reporting of, and disparities in, its data. Also, some steps the office took to speed up its work may have adversely affected the quality of that work. Because of these concerns, we recommended that the Secretary of Agriculture implement plans to improve how USDA resolves discrimination complaints and ensure the reliability of the office’s databases on customer and employee complaints. We also recommended that USDA obtain an independent legal examination of a sample of USDA’s prior investigations and decisions on civil rights complaints. In addition, we reported that the office’s strategic planning does not address key steps needed to ensure USDA provides fair and equitable services to all customers and upholds the civil rights of its employees. We further recommended that the Secretary of Agriculture develop a strategic plan for civil rights at USDA that unifies USDA’s departmental approach with that of the Office of the Assistant Secretary for Civil Rights and that is transparent about USDA’s efforts to address the concerns of stakeholders. In April 2009, we reported that difficulties persisted in the Office of the Assistant Secretary for Civil Rights in resolving discrimination complaints. As recently as March 2011, an extensive assessment of civil rights at USDA raised issues related to many of our 2008 recommendations and made recommendations consistent with them. 
This assessment included a total of 234 recommendations to help USDA improve its performance on civil rights issues. The administration has committed to giving priority attention to USDA’s civil rights problems, and the agency has pointed to progress made in recent years. However, USDA has been addressing allegations of discrimination for decades and receiving recommendations for improving its civil rights functions without achieving fundamental improvements. Information technology. In recent reports, we have raised concerns about the overall security of USDA’s computerized information systems. USDA relies on these systems to carry out its financial and mission-related operations. Effective information security controls are required to ensure that financial and sensitive information is adequately protected from inadvertent or deliberate misuse; fraudulent use; and improper disclosure, modification, or destruction. Ineffective controls can also impair the accuracy, completeness, and timeliness of information used by management. Our analysis of our reports and USDA’s Office of Inspector General (OIG) reports regarding the security of these systems shows that USDA has not consistently implemented effective controls, such as those intended to prevent, limit, and detect unauthorized access to its systems or manage the configuration of network devices to prevent unauthorized access and ensure system integrity. For example, in March and November 2010, we reported on the need for federal agencies, including USDA, to improve implementation of information security controls such as those for configuring desktop computers and wireless communication devices. The OIG identified information technology security as a significant management challenge for fiscal year 2010. 
The need for effective information security is further underscored by the evolving and growing cyber threats to federal systems and the dramatic increase in the number of security incidents reported by federal agencies, including USDA. From fiscal year 2007 to fiscal year 2010, the number of incidents reported by USDA to the U.S. Computer Emergency Readiness Team, based in the Department of Homeland Security, increased by more than 330 percent. In summary, as deliberations on reauthorizing the Farm Bill begin, GPRAMA provides USDA and Congress with new tools to identify and oversee the most important performance issues and program areas warranting review. In light of the nation’s long-term fiscal challenge, this reexamination of the contributions that federal programs, including USDA programs, make to achieving outcomes for the American people is critical. GAO stands ready to help Congress in its application of GPRAMA tools to its oversight of USDA programs and its deliberations on Farm Bill reauthorization. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this statement, please contact Lisa Shames, Director, Natural Resources and Environment, (202) 512-3841 or [email protected]. Key contributors to this statement were James R. Jones, Jr., Assistant Director; Kevin Bray; Gary Brown; Mallory Barg Bulman; Ross Campbell; Tom Cook; Larry Crosland; Mary Denigan-Macauley; Andrew Finkel; Steve Gaty; Sandra Kerr; Carol Kolarik; Kathy Larin; Benjamin Licht; Paula Moore; Ken Rupar; Linda Sanders; Carol Herrnstadt Shulman; Nico Sloss; Kiki Theodoropoulus; and Lisa Turner. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The current fiscal environment, ongoing deliberations for the next Farm Bill, and the public's expectations for a high-performing and efficient government underscore the need for the U.S. Department of Agriculture (USDA) to focus on program results and customer needs, work across organizational lines to help minimize any overlap and duplication, and build its internal capacity. USDA comprises 15 agencies in seven mission areas that are responsible for, among other things, assisting farmers and rural communities, overseeing meat and poultry safety, providing access to nutritious food for low-income families, and protecting the nation's forests. For fiscal year 2010, USDA estimated that its 15 agencies would have total outlays of $129 billion. This statement highlights examples from GAO's previous work that illustrate how USDA can address challenges it faces in three key areas: (1) the performance and accountability of USDA programs, (2) coordination within USDA and between USDA and other agencies to minimize duplication and overlap, and (3) the sufficiency of USDA management capacity. This statement is based on GAO's extensive body of work on USDA programs authorized under the Farm Bill and issued from September 2005 through May 2011. USDA must ensure that its programs are being implemented efficiently and services are being delivered effectively, which requires it to review the progress it and its agencies have made in achieving program goals and developing strategies to improve performance and accountability. GAO's work notes cases in which USDA programs have either met or fallen short of meeting program goals. In April 2010, GAO reported on domestic food assistance programs--an area where three federal agencies administered 18 programs consisting of more than $90 billion in spending in fiscal year 2010. GAO suggested that not enough is known about the effectiveness of these programs. 
Research GAO reviewed suggested that participation in seven USDA food assistance programs it examined, including four of the five largest, is associated with positive health and nutrition outcomes consistent with the programs' goals; these goals include raising the level of nutrition among low-income households, safeguarding the health and well-being of the nation's children, and strengthening the agriculture economy. Little, however, is known about the effectiveness of the remaining 11 programs--9 of which are USDA programs--because they have not been well studied. GAO suggested that USDA consider which of the lesser-studied programs need further research. To achieve its missions, USDA must effectively coordinate with many groups both within and outside the agency. GAO's work provides instances of where improving coordination within USDA or across agencies has contributed or could contribute to improved performance of USDA programs. For example, in September 2005, GAO reported on USDA's need to improve coordination, including information-sharing and communication, between its Risk Management Agency (RMA) and Farm Service Agency (FSA) on potential fraud, waste, and abuse in the federal crop insurance program. For example, FSA offices in nine states did not conduct any of the field inspections RMA requested of farmers' fields in cases of anomalous crop insurance losses or when farmers were suspected of poor farming practices in 1 or more of the years in GAO's review. Also, RMA did not share with FSA information on the nature of the suspected poor farming practices or the results of follow-up inspections. GAO recommended actions to both agencies to more effectively conduct field inspections. USDA must have sufficient internal management capacity in the areas of financial management, human capital management, and information technology to effectively and efficiently fulfill its multiple missions. 
GAO has reported on USDA programs where improvements are needed in these areas. For example, GAO reported in October 2008 that USDA provided farm program payments to thousands of individuals with incomes exceeding income eligibility caps. GAO recommended that USDA work with the Internal Revenue Service to develop a system for verifying the income eligibility for recipients of all farm program payments, which the agencies subsequently did.
According to DHS, the limitations in its human resources environment, which includes fragmented systems and duplicative and paper-based processes, were compromising the department’s ability to effectively and efficiently carry out its mission. For example, according to DHS, the department does not have information on all of its employees, which reduces its abilities to strategically manage its workforce and best deploy people in support of homeland security missions. Additionally, according to DHS, reporting and analyzing enterprise human capital data are currently time-consuming, labor-intensive, and challenging because the department’s data management largely consists of disconnected, standalone systems, with multiple data sources for the same content. To address these issues, in 2003, DHS initiated the HRIT investment, which is intended to consolidate, integrate, and modernize the department’s and its components’ human resources IT infrastructure. These components include U.S. Customs and Border Protection, the Federal Emergency Management Agency, the Federal Law Enforcement Training Center, U.S. Immigration and Customs Enforcement, the Transportation Security Administration, U.S. Citizenship and Immigration Services, the U.S. Coast Guard, and the U.S. Secret Service. HRIT is managed by DHS’s Human Capital Business Systems unit, which is within the Office of the Chief Human Capital Officer and has overall responsibility for HRIT. Additionally, the Office of the Chief Information Officer plays a key supporting role in the implementation of HRIT by reviewing headquarters’ and components’ human resources investments, identifying redundancies and efficiencies, and delivering and maintaining enterprise IT systems. From 2003 to 2010, DHS made limited progress on the HRIT investment, as reported by DHS’s Inspector General. This was due to, among other things, limited coordination with and commitment from DHS’s components. 
To address this problem, in 2010 the DHS Deputy Secretary issued a memorandum emphasizing that DHS’s wide variety of human resources processes and IT systems inhibited the ability to unify DHS and negatively impacted operating costs. Accordingly, the Deputy Secretary memorandum prohibited component spending on enhancements to existing human resources systems or acquisitions of new solutions, unless those expenditures were approved by the Offices of the Chief Human Capital Officer or Chief Information Officer. The memorandum also directed these offices to develop a department-wide human resources architecture. In 2011, in response to the Deputy Secretary’s direction, the department developed a strategic planning document referred to as the Human Capital Segment Architecture blueprint, which redefined the HRIT investment’s scope and implementation time frames. As part of this effort, DHS conducted a system inventory and determined that it had 422 human resources systems and applications, many of which were single-use solutions developed to respond to a small need or links to enable disparate systems to work together. DHS reported that these numerous, antiquated, and fragmented systems inhibited its ability to perform basic workforce management functions necessary to support mission critical programs. To address this issue, the blueprint articulated that HRIT would comprise 15 strategic improvement opportunity areas (e.g., enabling seamless, efficient, and transparent end-to-end hiring) and outlined 77 associated projects (e.g., deploying a department-wide hiring system) to implement these 15 opportunities. HRIT’s only ongoing program is called PALMS and is intended to fully address the Performance Management strategic improvement opportunity area and its three associated projects. 
PALMS is attempting to implement a commercial off-the-shelf software product that is to be provided as a service in order to enable, among other things, comprehensive enterprise-wide tracking, reporting, and analysis of employee learning and performance for DHS headquarters and its eight components. Specifically, PALMS is expected to deliver the following capabilities: Learning management. The learning management capabilities are intended to manage the life cycle of learning activities for all DHS employees and contractors. It is intended to, among other things, act as a gateway for accessing training at DHS and record training information when a user has completed a course. Additionally, it is expected to replace nine disparate learning management systems with one unified system. Performance management. The performance management capabilities are intended to move DHS’s existing primarily paper-based performance management processes into an electronic environment and capture performance-related information throughout the performance cycle (e.g., recording performance expectations discussed at the beginning of the rating period and performance ratings at the end of it). Each component is responsible for its own PALMS implementation project, and is expected to issue a task order using a blanket purchase agreement that was established in May 2013 with an estimated value of $95 million. The headquarters PALMS program management office is responsible for overseeing the implementation projects across the department. Additionally, the Office of the Chief Information Officer is the Component Acquisition Executive responsible for overseeing PALMS. In addition, according to DHS officials, as of September 2014, PALMS was expected to address part of our High Risk Series on strengthening DHS’s management functions. 
Specifically, PALMS is intended to address challenges in integrating employee training management across all the components, including centralizing training and consolidating training data into one system. DHS has made very limited progress in addressing the 15 strategic improvement opportunities and the 77 associated projects included in HRIT. According to the Human Capital Segment Architecture Blueprint, DHS planned to implement 14 of the 15 strategic improvement opportunities and 68 of the 77 associated projects by June 2015; and the remaining improvement opportunity and 9 associated projects by December 2016. However, as of November 2015, DHS had fully implemented only 1 of the strategic improvement opportunities, which included 2 associated projects. Table 1 summarizes the implementation status and planned completion dates of the strategic improvement opportunities—listed in the order of DHS’s assigned priority—as of November 2015. DHS has partially implemented five of the other strategic improvement opportunities, but it is unknown when they will be fully addressed. Further, HRIT officials stated that DHS has not yet started to work on the remaining nine improvement opportunities, and the officials did not know when they would be addressed. Additionally, DHS developed an HRIT strategic plan for fiscal years 2012 through 2016 that outlined the investment’s key goals and objectives, including reducing duplication and improving efficiencies in the department’s human resources processes and systems. The strategic plan identified, among other things, two performance metrics that were focused on reductions in the number of component-specific human resources IT services provided and increases in the number of department-wide HRIT services provided by the end of fiscal year 2016. However, DHS has also made limited progress in achieving these two performance targets. 
Figure 1 provides a summary of HRIT’s progress towards achieving its service delivery performance targets. Key causes for DHS’s lack of progress in implementing HRIT and its associated strategic improvement opportunities include unplanned resource changes and the lack of involvement of the HRIT executive steering committee. These causes are discussed in detail below: Unplanned resource changes. DHS elected to dedicate the vast majority of HRIT’s resources to implementing PALMS and addressing its problems, rather than initiating additional HRIT strategic improvement opportunities. Specifically, PALMS—which began in July 2012—experienced programmatic and technical challenges that led to years-long schedule delays. For example, while the PALMS system for headquarters was originally planned to be delivered by a vendor in December 2013, as of November 2015, the expected delivery date was delayed until the end of February 2016—a delay of more than 2 years. HRIT officials explained that the decision to focus primarily on PALMS was due, in part, to the investment’s declining funding. However, in doing so, attention was concentrated on the immediate issues affecting PALMS and diverted from the longer-term HRIT mission. Lack of involvement of the HRIT executive steering committee. The HRIT executive steering committee—which is chaired by the department’s Under Secretary for Management and co-chaired by the Chief Information Officer and Chief Human Capital Officer—is intended to be the core oversight and advisory body for all DHS-wide matters related to human capital IT investments, expenditures, projects, and initiatives. In addition, according to the committee’s charter, the committee is to approve and provide guidance on the department’s mission, vision, and strategies for the HRIT program. However, the executive steering committee met only once from September 2013 through June 2015—in July 2014—and was minimally involved with HRIT for that almost 2-year period. 
It is important to note that DHS replaced its Chief Information Officer (the executive steering committee’s co-chair) in December 2013—during this gap in oversight. Also during this time period HRIT’s only ongoing program—PALMS—was experiencing significant problems, including schedule slippages and frequent turnover in its program manager position (i.e., PALMS had five different program managers during the time that the HRIT executive steering committee was minimally involved). As a result of the executive steering committee not meeting, key governance activities were not completed on HRIT. For example, the committee did not approve HRIT’s notional operational plan for fiscal years 2014 through 2019. Officials from the Offices of the Chief Human Capital Officer and Chief Information Officer attributed the lack of HRIT executive steering committee meetings and committee involvement in HRIT to the investment’s focus being only on the PALMS program to address its issues, as discussed earlier. However, by not regularly meeting and providing oversight during a time when a new co-chair for the executive steering committee assumed responsibility and PALMS was experiencing such problems, the committee’s guidance to the troubled program was limited. More recently, the HRIT executive steering committee met in June and October 2015, and officials from the Offices of the Chief Human Capital Officer and Chief Information Officer stated that the committee planned to meet quarterly going forward. However, while the committee’s charter specified that it meet on at least a monthly basis for the first year, the charter does not specify the frequency of meetings following that year. Furthermore, the committee’s charter has not been updated to reflect the increased frequency of these meetings. 
As a result of the limited progress in implementing HRIT, DHS is unaware of when critical weaknesses in the department’s human capital environment will be addressed, which is, among other things, impacting DHS’s ability to carry out its mission. For example, the end-to-end hiring strategic improvement opportunity (which has an unknown implementation date) was intended to streamline numerous systems and multiple hand-offs in order to more efficiently and effectively hire appropriately skilled personnel, thus enabling a quicker response to emergencies, catastrophic events, and threats. We recommended in our report that DHS’s Under Secretary for Management update the HRIT executive steering committee charter to establish the frequency with which the committee meetings are to be held, and ensure that the committee is consistently involved in overseeing and advising HRIT. DHS agreed with both of these recommendations and stated that the executive steering committee charter would be updated accordingly by the end of February 2016; and that by April 30, 2016, the Under Secretary plans to ensure that the committee is consistently involved in overseeing and advising HRIT. According to the GAO Schedule Assessment Guide, a key activity in effectively managing a program and ensuring progress is establishing and maintaining a schedule estimate. Specifically, a well-maintained schedule enables programs to gauge progress, identify and resolve potential problems, and forecast dates for program activities and completion of the program. In August 2011, DHS established initiation and completion dates for each of the 15 strategic improvement opportunities within the Human Capital Segment Architecture Blueprint. Additionally, HRIT developed a slightly more detailed schedule for fiscal years 2014 through 2021 that updated planned completion dates for aspects of some strategic improvement opportunities, but not all. 
However, DHS did not update and maintain either schedule after they were developed. Specifically, neither schedule was updated to reflect that DHS did not implement 13 of the 15 improvement opportunities by their planned completion dates—several of which should have been implemented over 3 years ago. HRIT officials attributed the lack of schedule updates to the investment’s focus shifting to the PALMS program when it started experiencing significant schedule delays. Without developing and maintaining a current schedule showing when DHS plans to implement the strategic improvement opportunities, DHS and Congress will be limited in their ability to oversee and ensure DHS’s progress in implementing HRIT. We recommended that the department update and maintain a schedule estimate for when DHS plans to implement each of the strategic improvement opportunities. In response, DHS concurred with our recommendation and stated that, by April 30, 2016, the DHS Chief Information Officer will update and maintain a schedule estimate for each of the strategic improvement opportunities. The Office of Management and Budget (OMB) requires that agencies prepare total estimated life-cycle costs for IT investments. Program management best practices also stress that key activities in planning and managing a program include establishing a life-cycle cost estimate and tracking costs expended. A life-cycle cost estimate supports budgetary decisions and key decision points, and should include all costs for planning, procurement, and operations and maintenance of a program. Officials from the Office of the Chief Human Capital Officer stated that a draft life-cycle cost estimate for HRIT was developed, but that it was not completed or finalized because detailed project plans for the associated projects had not been developed or approved. 
According to the Human Capital Segment Architecture blueprint, the Office of the Chief Human Capital Officer roughly estimated that implementing all of the projects could cost up to $120 million. However, the blueprint specified that this figure did not represent the life-cycle cost estimate; rather, it was intended to be a preliminary estimate to initiate projects. Without a life-cycle cost estimate, DHS has limited information about how much it will cost to implement HRIT, which hinders the department’s ability to, among other things, make budgetary decisions and informed milestone review decisions. Accordingly, we recommended that DHS develop a complete life-cycle cost estimate for the implementation of the HRIT investment. DHS agreed with our recommendation and stated that, by June 30, 2016, the DHS Chief Information Officer will direct development of a complete life-cycle cost estimate for the implementation of HRIT’s strategic improvement opportunities. According to CMMI-ACQ and the PMBOK® Guide, programs should track program costs in order to effectively manage the program and make resource adjustments accordingly. In particular, tracking and monitoring costs enables a program to recognize variances from the plan in order to take corrective action and minimize risk. However, DHS has not tracked the total actual costs incurred in implementing HRIT across the enterprise to date. Specifically, while the investment received line item appropriations totaling at least $180 million for fiscal years 2005 through 2015, DHS was unable to provide complete cost information on HRIT activities since the investment began in 2003. In particular, DHS could not account for all government-related activities and component costs financed through the working capital fund, which, according to DHS officials from multiple offices, were funded separately from the at least $180 million appropriated specifically to HRIT. 
Officials from the Office of the Chief Human Capital Officer attributed the lack of cost tracking to, among other things, the investment’s early reliance on contractors to track costs, and said that cost records were neither well maintained nor centrally tracked and included incomplete component-provided cost information. The components were also unable to provide us with complete information. Consequently, we recommended that the department document and track all costs, including components’ costs, associated with HRIT. DHS concurred and stated that, by October 31, 2016, the DHS Chief Information Officer will direct the HRIT investment to document and track all costs associated with HRIT. According to the HRIT executive steering committee’s charter, the Under Secretary for Management (as the chair of the committee) is to ensure that the department’s human resources IT business needs are met, as outlined in the blueprint. Additionally, according to the GPRA (Government Performance and Results Act) Modernization Act of 2010, agency strategic plans should be updated at least every 4 years. While this is a legal requirement for agency strategic plans (the Human Capital Segment Architecture blueprint does not fall under the category of an “agency strategic plan”), it is considered a best practice for other strategic planning documents, such as the blueprint. However, the department issued the blueprint in August 2011 (approximately 4.5 years ago) and has not updated it since. As a result, the department does not know whether the remaining 14 strategic improvement opportunities and associated projects that it has not fully implemented are still valid and reflective of DHS’s current priorities, and are appropriately prioritized based on current mission and business needs. Additionally, DHS does not know whether new or emerging opportunities or business needs must be addressed. 
Officials stated that the department is still committed to implementing the blueprint, but agreed that it should be re-evaluated. To this end, following a meeting we had with DHS’s Under Secretary for Management in October 2015, in which we expressed concern about HRIT’s lack of progress, officials from the Offices of the Chief Human Capital Officer and Chief Information Officer stated that HRIT was asked by the Deputy Under Secretary for Management in late October 2015 to re-evaluate the blueprint’s strategic improvement opportunities and to determine the way forward for those improvement opportunities and the HRIT investment. However, officials did not know when this re-evaluation and a determination for how to move forward with HRIT would occur or be completed. Further, according to officials from the Office of the Chief Information Officer, DHS has not updated its complete systems inventory since it was originally developed as part of the blueprint effort, in response to a 2010 Office of Inspector General report that stated that DHS had not identified all human resource systems at the components. This report also emphasized that without an accurate inventory of human resource systems, DHS cannot determine whether components are using redundant systems. Moreover, the officials from the Office of the Chief Information Officer were unable to identify whether and how the department’s inventory of human resources systems had changed. Until DHS establishes time frames for re-evaluating the blueprint to reflect DHS’s current HRIT priorities and updates its human resources system inventory, the department will be limited in addressing the inefficient human resources environment that has plagued the department since it was first created.
As a result, we recommended that DHS establish time frames for re-evaluating the strategic improvement opportunities and associated projects in the blueprint and determining how to move forward with HRIT; evaluate the opportunities and projects to determine whether the goals of the blueprint are still valid and update the blueprint accordingly; and update and maintain the system inventory. DHS agreed with these recommendations and expects to address them by February 2016, April 2016, and October 2016, respectively. As previously mentioned, PALMS is intended to provide an enterprise-wide system that offers performance management capabilities, as well as learning management capabilities to headquarters and each of its components. As such, DHS’s headquarters PALMS program management office and the components estimate that, if fully implemented across DHS, PALMS’s learning management capabilities would be used by approximately 309,360 users, and its performance management capabilities would be used by at least 217,758 users. However, there is uncertainty about whether the PALMS system will be used enterprise-wide to accomplish these goals. Specifically, as of November 2015, of the eight components and headquarters, five are planning to implement both PALMS’s learning and performance management capabilities (three of which have already implemented the learning management capabilities—discussed later), two are planning to implement only the learning management capabilities, and two components are not currently planning to implement either of these PALMS capabilities, as illustrated in figure 2. Officials from the Federal Emergency Management Agency, U.S. Immigration and Customs Enforcement, the Transportation Security Administration, and the U.S. Coast Guard cited various reasons why their components were not currently planning to fully implement PALMS. Federal Emergency Management Agency and U.S.
Immigration and Customs Enforcement officials stated that they were not currently planning to implement the performance management capabilities because the program had experienced critical deficiencies in meeting the performance management-related requirements. Federal Emergency Management Agency officials stated that they do not plan to decide whether to implement these performance management capabilities until the vendor can demonstrate that the system meets the Agency’s needs; as such, these officials were unable to specify a date for when they plan to make that decision. U.S. Immigration and Customs Enforcement officials also stated that they do not plan to implement the performance management capabilities of PALMS until the vendor can demonstrate that all requirements have been met. PALMS headquarters officials expected all requirements to be met by the vendor by the end of February 2016. Transportation Security Administration officials stated that they were waiting on the results of their fit-gap assessment of PALMS before determining whether, from a cost and technical perspective, the Administration could commit to implementing the learning and performance management capabilities of PALMS. Administration officials expected the fit-gap assessment to be completed by the end of March 2016. U.S. Coast Guard officials stated that, based on the PALMS schedule delays experienced to date, they have little confidence that the PALMS vendor could meet the component’s unique business requirements prior to the 2018 expiration of the vendor’s blanket purchase agreement. Additionally, these officials stated that the system would not meet all of the Coast Guard’s learning management requirements, and likely would not fully meet the performance management requirements for all of its military components.
Due to the component’s uncertainty, the officials were unable to specify when they plan to ultimately decide on whether they will implement one or both aspects of PALMS. As a result, it is unlikely that the department will meet its goal of PALMS being an enterprise-wide system. Specifically, as of November 2015, the components estimate that 179,360 users will use the learning management capabilities of PALMS (not the 309,360 expected, if fully implemented), and 123,200 users will use the performance management capabilities of PALMS (not the 217,758 expected, if fully implemented). Of the seven components and headquarters that are currently planning to implement the learning and/or performance management aspects of PALMS, as of December 2015, three have completed their implementation of the learning management capabilities and deployed these capabilities to users (deployed to U.S. Customs and Border Protection in July 2015, headquarters in October 2015, and the Federal Law Enforcement Training Center in December 2015); two have initiated their implementation efforts on one or both aspects, but not completed them; and two have not yet initiated any implementation efforts. As a result, PALMS’s current trajectory is putting the department at risk of not meeting its goals to perform efficient, accurate, and comprehensive tracking and reporting of training and performance management data across the enterprise; and consolidating its nine learning management systems down to one. Accordingly, until the Federal Emergency Management Agency decides whether it will implement the performance management capabilities of PALMS and the Coast Guard decides whether it will implement the learning and/or performance management capabilities of PALMS, the department is at risk of implementing a solution that does not fully address its problems. Moreover, until DHS determines an alternative approach if one or both aspects of PALMS is deemed not feasible for U.S.
Immigration and Customs Enforcement, the Transportation Security Administration, the Federal Emergency Management Agency, or the Coast Guard, the department is at risk of not meeting its goal to enable enterprise-wide tracking and reporting of employee learning and performance management. We recommended that the department establish a time frame for deciding whether PALMS will be fully deployed at the Federal Emergency Management Agency and the Coast Guard, and determine an alternative approach if the learning and/or performance management capabilities of PALMS are deemed not feasible for the Federal Emergency Management Agency, U.S. Immigration and Customs Enforcement, the Transportation Security Administration, or the Coast Guard. DHS concurred with our recommendation and stated that the PALMS program office will establish a time frame for a deployment decision of PALMS for these components. According to GAO’s Cost Estimating and Assessment Guide, having a complete life-cycle cost estimate is a critical element in the budgeting process that helps decision makers to evaluate resource requirements at milestones and other important decision points. Additionally, a comprehensive cost estimate should include both government and contractor costs of the program over its full life cycle, from inception of the program through design, development, deployment, and operation and maintenance to retirement of the program. However, according to PALMS program management office officials, they did not develop a life-cycle cost estimate for PALMS. In 2012, DHS developed an independent government cost estimate to determine the contractor-related costs to implement the PALMS system across the department (estimated to be approximately $95 million); however, this estimate was not comprehensive because it did not include government-related costs.
PALMS program office officials stated that PALMS did not develop a life-cycle cost estimate because the program is a Level 3 acquisition program and DHS does not require such an estimate for a Level 3 program. However, while DHS acquisition policy does not require a life-cycle cost estimate for a program of this size, we maintain that such an estimate should be prepared because of the program’s risk and troubled history. Without developing a comprehensive life-cycle cost estimate, DHS is limited in making future budget decisions related to PALMS. Accordingly, we recommended that the department develop a comprehensive life-cycle cost estimate, including all government and contractor costs, for the PALMS program. DHS concurred with our recommendation and stated that, by May 30, 2016, the PALMS program office will update the program’s cost estimate to include all government and contractor costs. As described in GAO’s Schedule Assessment Guide, a program’s integrated master schedule is a comprehensive plan of all government and contractor work that must be performed to successfully complete the program. Additionally, such a schedule helps manage program schedule dependencies. Best practices for developing and maintaining this schedule include, among other things, capturing all activities needed to do the work and reviewing the schedule after each update to ensure the schedule is complete and accurate. While DHS had developed an integrated master schedule with the PALMS vendor, it did not appropriately maintain this schedule. Specifically, the program’s schedule was incomplete and inaccurate. For example, while DHS’s original August 2012 schedule planned to fully deploy both the learning and performance management capabilities in one release at each component by March 2015, the program’s September 2015 schedule did not reflect the significant change in PALMS’s deployment strategy and time frames. 
Specifically, the program now plans to deploy the learning management capabilities first and the performance management capabilities separately and incrementally to headquarters and the components. However, the September 2015 schedule reflected the deployment-related milestones (per component) for only the learning management capabilities and did not include the deployment-related milestones for the performance management capabilities. In September 2015, PALMS officials stated that the deployments related to performance management were not reflected in the program’s schedule because the components had not yet determined when they would deploy these capabilities. Since then, two components have determined their planned dates for deploying these capabilities, but the dates for the remaining seven (including headquarters) remain unknown. As a result, the program does not know when PALMS will be fully implemented at all components with all capabilities. Moreover, the schedule did not include all government-specific activities, including tasks for employee union activities (such as notifying employee unions and bargaining with them, where necessary) related to the proposed implementation of the performance management capabilities. Without developing and maintaining a single comprehensive schedule that fully integrates all government and contractor activities, and includes all planned deployment milestones related to performance management, DHS is limited in monitoring and overseeing the implementation of PALMS, and managing the dependencies between program tasks and milestones to ensure that it delivers capabilities when expected. Consequently, we recommended that DHS develop and maintain a single comprehensive schedule.
DHS agreed and stated that, by May 30, 2016, the PALMS program office will develop and maintain a single, comprehensive schedule that includes all government and contractor activities, and all planned milestones related to deploying the PALMS system’s performance management capabilities. According to CMMI-ACQ and the PMBOK® Guide, a key activity for tracking a program’s performance is monitoring the project’s costs by comparing actual costs to the cost estimate. The PALMS program management office—which is responsible for overseeing the PALMS implementation projects across DHS, including all of its components—monitored task order expenditures on a monthly basis. As of December 2015, DHS officials reported that they had issued approximately $18 million in task orders to the vendor. However, the program management office officials stated that they were not monitoring the government-related costs associated with each of the PALMS implementations. The officials stated that they were not tracking government-related implementation costs at headquarters because many of the headquarters program officials concurrently work on other acquisition projects and these officials are not required to track the amount of time spent working specifically on PALMS. The officials also said that they were not monitoring the government-related costs for each of the component PALMS implementation projects because it would be difficult to obtain and verify the cost data provided by the components. We acknowledge the department’s difficulties associated with obtaining and verifying component cost data; however, monitoring the program’s costs is essential to keeping costs on track and alerting management of potential cost overruns. As such, we recommended that DHS track and monitor all costs associated with the PALMS program.
DHS concurred with our recommendation and stated that it plans to have the PALMS program office track and monitor all costs associated with the PALMS program by March 30, 2016. In summary, although the HRIT investment was initiated about 12 years ago with the intent to consolidate, integrate, and modernize the department’s human resources IT infrastructure, DHS has made very limited progress in achieving these goals. The executive steering committee’s minimal involvement during a period when significant problems were occurring was a key factor in the lack of progress. Moreover, DHS’s failure to apply program management best practices to HRIT and PALMS also contributed to the investment’s neglect. Implementing our recommendations is critical to addressing the fragmented and duplicative human resources environment that is hindering the department’s ability to efficiently and effectively perform its mission. Chairman Perry, Ranking Member Watson Coleman, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. If you have any questions concerning this statement, please contact Carol Cha, Director, Information Technology Acquisition Management Issues, at (202) 512-4456 or [email protected]. Other individuals who made key contributions include Rebecca Gambler, Director; Shannin O’Neill, Assistant Director; Christopher Businsky; Rebecca Eyler; Javier Irizarry; Emily Kuhn; and David Lysy. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DHS's human resources information technology environment includes fragmented systems, duplicative and paper-based processes, and little uniformity of data management practices, which according to DHS, are compromising the department's ability to effectively carry out its mission. DHS initiated HRIT in 2003 to consolidate, integrate, and modernize DHS's human resources information technology infrastructure. In 2011, DHS redefined HRIT's scope and implementation time frames. This statement summarizes GAO's report that is being released at today's hearing (GAO-16-253) on, among other objectives, the progress DHS has made in implementing the HRIT investment and how effectively it managed the investment. The Department of Homeland Security (DHS) has made very little progress in implementing its Human Resources Information Technology (HRIT) investment over the last several years. This investment includes 15 improvement areas; as of November 2015, DHS had fully implemented only 1. HRIT's limited progress was due in part to the lack of involvement of its executive steering committee—the investment's core oversight and advisory body. Specifically, this committee was minimally involved with HRIT, such as meeting only once during a nearly 2-year period when major problems were occurring, including schedule delays and the lack of a life-cycle cost estimate. As a result, key governance activities, such as approval of HRIT's operational plan, were not completed. Officials acknowledge that HRIT should be re-evaluated. They have met to discuss it; however, specific actions and time frames have not yet been determined. Until DHS takes key actions to manage this neglected investment, it is unknown when its human capital management weaknesses will be addressed. In its report that is being released today, GAO made 14 recommendations to DHS to, among other things, address HRIT's poor progress and ineffective management. 
For example, GAO recommended that the HRIT executive steering committee be consistently involved in overseeing and advising the investment, and that DHS establish time frames for re-evaluating HRIT and develop a complete life-cycle cost estimate for the investment. DHS concurred with the 14 recommendations and provided estimated completion dates for implementing each of them.
Ninety percent of all natural disasters in the United States involve flooding. Although homeowner insurance policies typically cover damage and losses from fire or theft and often from wind-driven rain, they do not cover flood damage because private insurance companies are largely unwilling to bear the economic risks associated with the potentially catastrophic impact of flooding. To provide some insurance protection for flood victims, as well as incentives for communities to adopt and enforce floodplain management regulations to reduce future flood damage, and to reduce the amount of federal disaster assistance payments, federal law established the NFIP in 1968. The legislative history of the National Flood Insurance Act recognized that insurance for existing buildings constructed before the NFIP was established would be extremely expensive because most of them were flood prone and did not comply with NFIP floodplain management standards that went into effect after they were built. The authorizing legislation included provisions for subsidized insurance rates to be made available for policies covering certain structures to encourage communities to join the program. Under the NFIP, the properties are generally referred to as Pre-FIRM (Flood Insurance Rate Map) buildings. As shown in figure 1, the NFIP has grown from about 1.5 million policies in 1978 to 5.1 million policies in July 2006. More than 20,100 communities nationwide participate in the NFIP. To participate in the program, communities agree to enforce regulations for land use and new construction in high-risk flood zones. In exchange, the NFIP studies and maps flood risks and makes federally backed flood insurance available to homeowners and other property owners. The maps identify special high-risk flood hazard areas, also known as the 100-year floodplain. 
These areas have a 1 percent chance of being flooded in any given year or at least a 26 percent chance of being flooded over the 30-year life of a typical home mortgage. Property owners in the special high-risk flood hazard areas whose communities participate in the NFIP and who have mortgages from federally regulated lenders are required to purchase flood insurance on their dwellings for at least the outstanding amount of their mortgages up to the maximum policy limit of $250,000. Optional lower-cost coverage is also available under the NFIP to protect homes in areas of low to moderate risk. To insure furniture and other personal property items against flood damage, homeowners may purchase separate NFIP personal property coverage. Maximum coverage amounts under the NFIP are $250,000 for dwellings and $100,000 for personal property. Accurate flood maps that identify the areas at greatest risk of flooding are the foundation of the NFIP. Flood maps must be periodically updated to assess and map changes in the boundaries of floodplains that result from community growth, development, erosion, and other factors that affect the boundaries of areas at risk of flooding. FEMA is in the midst of a multiyear effort to update the nation’s flood maps at a cost in excess of $1 billion. The maps are principally used by (1) more than 20,100 communities participating in the NFIP to adopt and enforce the program’s minimum building standards for new construction within the maps’ identified floodplains, (2) FEMA to develop flood insurance policy rates based on flood risk, and (3) federally regulated mortgage lenders to identify those property owners who are required to purchase federal flood insurance.
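The roughly 26 percent figure cited above follows from compounding the 1 percent annual chance over a 30-year mortgage. The short sketch below is purely illustrative (the function name is ours, and the calculation is basic probability, not part of FEMA’s flood mapping methodology):

```python
# Chance of at least one flood over a period, given a 1 percent chance
# of flooding in any single year (the "100-year floodplain" definition).
def cumulative_flood_probability(years: int, annual_p: float = 0.01) -> float:
    # P(at least one flood) = 1 - P(no flood in any of the years)
    return 1 - (1 - annual_p) ** years

# Over the 30-year life of a typical home mortgage:
print(f"{cumulative_flood_probability(30):.0%}")  # 26%
```

The same compounding explains why even "low" annual risks become substantial over the life of a mortgage.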
The work of selling, servicing, and adjusting NFIP claims is carried out by thousands of private sector insurance agents and adjusters who work independently or are employed by insurance companies, adjusting firms, or designated subcontractors under the oversight and management of FEMA within the Department of Homeland Security. According to FEMA, about 95 percent of the NFIP policies in force are written by insurance agents who represent 88 private insurance companies that are paid fees for performing administrative services for the NFIP but do not have exposure for claims losses. The companies, called write-your-own companies, receive an expense allowance from FEMA of about one-third of the premium amounts for their services and are required to remit premium income in excess of this allowance to the National Flood Insurance Fund. The write-your-own companies also receive a percentage fee—about 3.3 percent of the incurred loss—for adjusting and settling claims. To settle claims, including those from Hurricanes Katrina and Rita, insurance companies work with certified flood adjusters. When flood losses are reported, the write-your-own companies assign a flood adjuster to assess damages. Flood adjusters may be independent or employed by an insurance or adjusting company. These adjusters are responsible for assessing damage, estimating losses, and submitting required reports, work sheets, and photographs to the insurance company, where the claim is reviewed and, if approved, processed for payment. Adjusters determine the price for repairs by reviewing estimates of costs prepared by policyholders and their contractors, consulting pricing software, and checking local prices for materials. Adjusters are paid for their services according to a standard fee schedule that is paid in addition to the fees paid to the insurance companies. 
Adjusters who work for an adjusting company share the fees with the company in exchange for adjusting assignments and administrative support. For example, for the average claims settlement amount for Hurricane Katrina, $94,803, the NFIP fee schedule authorizes payment of 3 percent of the claim amount, or $2,844, for adjusting services. For claims adjusted under the expedited claims processing procedures that were introduced after Hurricane Katrina, FEMA authorized payment of $750 for each claim plus an additional $400 if a site visit was required later in the claims adjustment process. Among the requirements for certification as a claims adjuster for the NFIP are at least 4 consecutive years of full-time property loss adjusting experience, attendance each year at an NFIP adjuster workshop, and demonstration of knowledge of the standard flood insurance policy by passing a written examination. In 2002, FEMA modified the minimum experience requirement to allow adjusters who do not have the requisite experience to work with a seasoned flood adjuster until the write-your-own company determines that the adjuster is able to work independently. Claimants who have questions or concerns about actions taken to resolve their claims have several avenues of recourse. Claims amounts may be adjusted after the initial settlement is paid if claimants submit documentation that some costs to repair or replace damaged items were higher than estimated. If a claimant is not satisfied with the adjuster’s answers or does not agree with decisions, the claimant or the write-your-own company can ask FEMA’s program contractor for assistance in reaching a resolution through a special assistance reinspection of the claim. Also, under provisions of the Flood Insurance Reform Act of 2004, claimants may contact FEMA directly to resolve concerns that were not addressed through the other channels. Finally, claimants may bring a claim in federal district court against the insurer.
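The adjuster fee arithmetic described earlier in this section can be reproduced directly. In the sketch below, only the 3 percent rate, the $750 expedited flat fee, and the $400 site-visit add-on come from the text; the function names are illustrative:

```python
# Standard NFIP adjuster fee: a percentage of the claim amount
# (3 percent at the average Katrina settlement level, per the fee schedule).
def standard_adjuster_fee(claim_amount: float, rate: float = 0.03) -> int:
    return round(claim_amount * rate)

# Expedited claims processing introduced after Katrina: a flat $750 fee,
# plus $400 if a site visit is required later in the adjustment process.
def expedited_adjuster_fee(site_visit_required: bool) -> int:
    return 750 + (400 if site_visit_required else 0)

print(standard_adjuster_fee(94_803))  # 2844 -- the $2,844 cited for the average Katrina claim
print(expedited_adjuster_fee(True))   # 1150
```

Note that under the standard schedule the fee scales with the claim, while the expedited procedure pays a fixed amount regardless of claim size.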
About 40 FEMA employees, assisted by about 170 contractor employees, are responsible for managing the NFIP. Management responsibilities include establishing and updating NFIP regulations, administering the National Flood Insurance Fund, analyzing data to actuarially determine flood insurance rates and premiums, and providing training to insurance agents and adjusters. In addition, FEMA and its program contractor are responsible for monitoring and overseeing the quality of the performance of the write-your-own companies to ensure that the NFIP is administered properly (i.e., appropriate claims settlements are made and program objectives are achieved). Hurricane Katrina, followed closely by Hurricane Rita, had a far-reaching impact on the financial solvency of the NFIP. By all measures, the flood losses were unprecedented in the history of the NFIP. FEMA projects that when all claims are settled, claims from NFIP policyholders who suffered flood damage from Hurricanes Katrina and Rita will total more than $20 billion. In contrast, the NFIP reports that from its inception in 1968 until August 2005, it paid a cumulative total of about $14.6 billion in claims. In the two largest single flood events prior to Hurricane Katrina, the NFIP reports that it processed a little more than 30,000 claims after a Louisiana flood in 1995 and Tropical Storm Allison in 2001. Figure 2 illustrates the magnitude of the flood losses in 2005 compared to losses over the history of the NFIP. Not only were the total cost and number of Hurricane Katrina and Rita claims far greater than in prior flood events, the amount paid per loss was also greater. As shown in figure 3, the average amounts paid per claim for Hurricanes Katrina and Rita flood damages—about $94,800 and $46,000, respectively—were much larger than average claims amounts reported as paid in the 3 prior years. 
Average paid losses for Hurricane Katrina were about three times the average paid losses reported by the NFIP for damage from flood events in 2004, including Hurricanes Charley, Ivan, Frances, and Jeanne in Florida and other East Coast and Gulf Coast states. As a result of the number and amount of claims for damages from the 2005 hurricane season and particularly Hurricane Katrina, losses to be paid far exceeded the NFIP’s existing borrowing authority with the U.S. Treasury. The borrowing authority was subsequently increased from $1.5 billion before Hurricane Katrina to $18.5 billion in November 2005, and then to $20.8 billion in March 2006 to pay claims and expenses from Hurricane Katrina and other 2005 hurricanes. As of September 30, 2006, FEMA’s debt to the Treasury was $16.9 billion. As we reported in January 2006, it is unlikely that FEMA will be able to repay a debt of this size and pay future claims in a program that generated premium income of about $2 billion in fiscal year 2005. To the extent possible, the NFIP is designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than with tax dollars. However, by design, the program is not actuarially sound because federal law authorized subsidized insurance rates to be made available for policies covering some properties to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build reserves to meet the long-term future expected flood losses. In November 2006, legislation was pending in both houses of Congress to reform the NFIP. A Senate provision would forgive the NFIP debt, and bills in both houses had provisions to improve the financial solvency of the program and reduce the extent of the federal government’s exposure for losses in catastrophic loss years.
For example, proposed legislation in both the Senate and the House of Representatives contained provisions that would allow premium increases of up to 15 percent annually on NFIP policies, up from the current cap of 10 percent on premium increases. Additionally, legislation in both houses of Congress would phase out subsidized rates for some properties built before flood insurance rate maps were put into effect in their communities, including nonresidential properties and those that are not primary residences. However, none of the proposals, if enacted, would make changes to the NFIP that would result in collecting enough premium income to cover losses for any future flood events of the magnitude of Hurricane Katrina. Until the 2004 hurricane season, FEMA had been generally successful in keeping the NFIP on sound financial footing, exercising its borrowing authority three times in the last decade when losses exceeded available fund balances. In each instance, FEMA repaid the funds with interest. According to FEMA officials, as of August 31, 2005, FEMA had outstanding borrowing of $225 million with cash on hand totaling $289 million. FEMA had substantially repaid the borrowing it had undertaken to pay losses incurred for the 2004 hurricane season, which, until Hurricane Katrina struck, had been the worst hurricane season on record for the NFIP. FEMA’s current debt with the Treasury is almost entirely for payment of claims from Hurricanes Katrina and Rita and other flood events that occurred in 2005. As shown in figure 4, the majority of NFIP claims for flood damage from Hurricane Katrina were in Louisiana, and a large portion of the Louisiana Hurricane Katrina claims were in New Orleans. As of May 2006, the NFIP had paid about 162,000 claims for losses from flood damage from Hurricane Katrina in Alabama, Florida, Louisiana, and Mississippi. About 135,000 of these losses (about 83 percent) were in Louisiana.
As of July 2006, about 83,500 Louisiana claims were made for property damage in the New Orleans area, where flood waters breached levees and floodwalls. Almost 9,000 additional NFIP claims, over 7,000 of them from Louisiana, were paid as a result of losses from Hurricane Rita. Tables 1 and 2 provide a state-by-state breakdown of the number of paid losses, the number of losses paid at policy limits, and the average payment amounts per loss for Hurricanes Katrina and Rita, through May 2006. The majority of Hurricane Katrina and Rita paid losses were for flood damage to residences. About 96 percent of Hurricane Katrina paid losses and about 94 percent of Hurricane Rita paid losses were for residential properties, including condominiums, while 4 percent and 6 percent of the paid losses, respectively, were for nonresidential properties, including businesses and public buildings (e.g., schools and churches). As shown in figures 5 and 6, the majority of paid losses for noncondominium residential properties were for principal residences. About 16 percent of paid claims for residences damaged by Hurricane Katrina were for nonprincipal residences, which include secondary homes. About 18 percent of paid losses for residences damaged by Hurricane Rita were for nonprincipal residences. See appendix II for detailed information on principal and nonprincipal residential paid losses by state. Most of the paid losses were for properties located within the special flood hazard areas, where homeowners with mortgages from federally regulated lenders are required to purchase flood insurance on their dwellings for at least the amount of their outstanding mortgage. As shown in figure 7, about 78 percent of the paid losses for Hurricane Katrina through May 2006 were in special flood hazard areas subject to flooding or flooding and wave action, where purchase of flood insurance is mandatory on properties with mortgages from federally regulated lenders.
However, claims were also paid on 36,325 losses (about 22 percent) on properties outside of the special flood hazard areas, where purchase of flood insurance is optional. As shown in figure 8, of 8,851 paid losses for Hurricane Rita through May 2006, 6,746 (about 76 percent) were in special flood hazard areas. While homeowners who live in specially designated flood hazard areas are required to purchase NFIP insurance on their dwellings at least for the amount of any federally regulated mortgage, the purchase of coverage for the home’s contents, including furniture and personal property, is optional and may be purchased separately. NFIP policyholders who live in, for example, rental units, cooperatives, or condominium buildings may elect to purchase NFIP policies for contents coverage only. Figures 9 and 10 show that most paid Hurricane Katrina and Hurricane Rita residential losses were for both dwellings and contents. (The figure legends also report paid losses for buildings only, 3,620, and for buildings and contents, 4,264.) See appendix III for detailed information on residential paid losses for dwellings and contents by state. The magnitude and severity of the damages from Hurricane Katrina, closely followed by Hurricane Rita, presented FEMA and its private sector NFIP partners with the challenge of accurately processing a record number of flood claims in a timely manner under adverse conditions while addressing other needs of NFIP claimants and communities. One account described the conditions on the ground: “A month after Hurricane Katrina, our adjusters couldn’t get to flooded properties because roadways were blocked by debris and houses were contaminated by flood waters. In many cases, adjusters could not even identify the houses they were trying to inspect because street signs were washed away and houses were piled on top of one another as a result of the storm surge. Adjusters went to some addresses only to find nothing left standing but the foundation.
Making contact with claimants was in some cases impossible because they were scattered across the country and relocating frequently from one temporary address to another. In many cases, the documentation we normally use to adjust claims no longer existed. Claimants’ files at local insurance agencies, mortgage records, and other documents were gone in the flood.” According to a representative of FEMA’s program contractor on-site in Hammond, Louisiana, about 8,000 adjusters were working on claims from Hurricanes Katrina and Rita at the high point, from October through December 2005. An owner of a firm that specializes in insurance claims adjustments for catastrophes described the problems he faced in getting adjusters to the affected areas. The majority of adjusters who worked under contract for this firm were staying in Mobile, Alabama, a 2½- to 3-hour drive from the New Orleans area. Highways were jammed, and lodging and fuel were in short supply. The business owner said that he bought more than 30 houses in the Mobile area, several tanker trucks of oil, and a gas station to meet adjusters’ housing and transportation needs. Figure 11 shows photographs of flooded neighborhoods that illustrate some of the challenges faced by flood adjusters in getting to and identifying the heavily damaged houses they were assigned to inspect. Despite the large volume of claims and adverse conditions for settling them, the NFIP was successful in closing 92 percent of NFIP claims for Hurricane Katrina and 86 percent for Hurricane Rita by March 2006, about 7½ months after the storms struck. By May 2006, about 9 months after the storms, FEMA reported that over 95 percent of the Gulf Coast claims were closed. These time frames for closing claims are comparable to time frames for closing claims in other, smaller flood events. 
For example, in Florida, where the largest number of claims for flood damage was filed in the 2004 hurricane season, the NFIP closed about 88 percent of the 33,888 claims from Hurricanes Charley, Ivan, Frances, and Jeanne within 7 months and about 92 percent within 9 months. Concerns from claimants about actions taken to settle their claims were relatively few in relation to the large number of claims filed. For example, as of April 2006, 13 appeals had been filed by claimants related to settlements of their claims for Hurricane Katrina damage, and no appeals had been filed on claims for damage from Hurricane Rita. In February 2006, FEMA’s program contractor had received about 500 requests for special assist reinspections. These requests occur when claimants and insurance companies do not agree on aspects of the claims adjustment and ask for assistance in reaching a resolution. FEMA was not able to provide comparison data from prior years or updated information on the number of appeals filed after April 2006 and the number of special assist reinspections for Hurricanes Katrina and Rita after February 2006. To assist NFIP policyholders despite many obstacles, FEMA approved expedited claims processing methods that were unique to Hurricanes Katrina and Rita. In some circumstances, claims could be adjusted without site visits by certified flood claims adjusters. For flooding from Lake Pontchartrain in New Orleans caused by the failure of the levees, FEMA allowed the use of aerial and satellite photography and flood depth data to identify structures that had been severely affected. If data on the depth and duration of the water in the building showed that it was likely that covered damage exceeded policy limits, the claim could be settled without a site visit by a claims adjuster.
Similarly, for some other losses in Louisiana, Alabama, and Mississippi, FEMA authorized claims adjustments without site visits where structures were washed off of their foundations by flood waters and square foot measurements of the dwellings were known. While FEMA authorized the use of these approaches, the write-your-own companies made the decision on whether they wished to use expedited processes to adjust claims. In addition, FEMA authorized the use of a square foot measurement methodology for homes that had been flooded off of their slabs, pilings, or posts. In those instances, damages could be calculated by a certified flood adjuster based on measurements of room dimensions and classification of building materials as high, medium, or low level, rather than a room-by-room, item-by-item calculation of loss amounts. FEMA authorized payments to its private insurance company partners of $750 per expedited claim adjustment—a lower fee than would have been paid for a more time-consuming room-by-room, line-item-by-line-item visual assessment of flood damage. According to the FEMA director of NFIP claims, about 17,200 claims for damage, mostly from Hurricane Katrina (about 11 percent of all Hurricane Katrina claims), were adjusted using expedited procedures. Although a relatively small number of claims were adjusted using expedited processes, officials of FEMA, its program contractor, representatives of two of the five private insurance companies we interviewed, and a flood claims adjusting service official said that having the option to do some expedited adjustments enabled the NFIP to keep up with demands for adjuster services and close the claims as quickly as it did.
Representatives of the three insurance companies we visited that did not use expedited processes to a significant extent cited several reasons: concerns over the accuracy of flood depth data, delays in the availability of flood depth data, and, because their companies did not write the homeowners’ policies on the dwellings in question, a lack of information (e.g., square foot measurements of the home) needed to process claims without site inspections. According to the FEMA director of NFIP claims, two large write-your-own insurance companies developed models that were approved by FEMA for use in making square foot estimates of damage for some claims from Hurricanes Katrina and Rita instead of sending certified flood adjusters to the sites to assess and document damage room by room and item by item. According to the FEMA official, the square foot models paid claims based on the square footage of the property and a classification of the building materials as low, middle, or high level. For example, claims paid on a flooded high-level kitchen would be more than payments for a middle-level kitchen of the same square footage. If one or two high-end items were in a middle-level home (e.g., a custom front door or exotic hardwood floors), an adjustment to the middle-level rate would be made for those specific items. According to the official, the NFIP had experimented briefly with a much less sophisticated approach to square foot estimating about 10 years earlier but since that time had used only the traditional approach of sending a certified flood adjuster to the site to assess damage and estimate losses, with required reports, work sheets, and photographs to document damage room by room and line item by line item. The director of NFIP claims said that FEMA did not track the number of estimates done using the square foot method.
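To make the mechanics concrete, the following is a minimal, hypothetical sketch of how a square foot model of this kind might price a claim from room dimensions and a low/middle/high materials classification. The per-square-foot rates, room data, and policy limit are all invented for illustration; the companies’ FEMA-approved models were more sophisticated, and their actual rates are not described in this report.

```python
# Hypothetical square-foot claims estimate: payment is driven by room
# dimensions and a low/middle/high classification of building materials,
# rather than a line-item inventory of damage. All rates are invented.

RATES_PER_SQFT = {"low": 45.0, "middle": 70.0, "high": 110.0}  # $/sq ft, hypothetical

def square_foot_estimate(rooms, policy_limit):
    """Estimate a claim from (length_ft, width_ft, material_level) tuples,
    capped at the policy limit as an NFIP payment would be."""
    total = 0.0
    for length, width, level in rooms:
        total += length * width * RATES_PER_SQFT[level]
    return min(total, policy_limit)

# A mostly middle-level home with one high-end room (e.g., a custom kitchen)
rooms = [(20, 15, "middle"), (12, 10, "middle"), (14, 12, "high")]
print(square_foot_estimate(rooms, policy_limit=250_000))  # → 47880.0
```

The cap in the last step reflects the report’s observation that expedited methods were used where damage was likely to exceed policy limits; when the computed total exceeds the limit, the limit is what gets paid.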
He said that FEMA plans to examine the accuracy of the models carefully and consider using them for other catastrophic flooding events in the future. Because usage of the square foot method by the two companies with approved models was not carefully tracked during Hurricanes Katrina and Rita, FEMA paid the same fee for square foot adjustments as it did for regular line-item-by-line-item adjustments that took longer to perform and required more extensive documentation. However, the director of NFIP claims said that if the square foot methodology is approved for future use, the fee schedule paid for these adjustments would probably be lower than the current schedule for regular claims adjustments, with a resulting savings for the NFIP. In addition to approving expedited and square foot claims adjusting methods, FEMA took several other actions to expedite claims adjustments and meet the needs of claimants in the aftermath of Hurricanes Katrina and Rita. These were actions that, according to officials, FEMA had also used to a more limited extent in prior large flood events. Specifically, FEMA waived the requirement that property owners furnish proof of loss statements that list their losses for all Hurricane Katrina and Rita claims, allowed telephone adjustments for some claims below $25,000, established special toll-free telephone lines to assist policyholders who had questions about filing claims, liberalized adjuster training requirements to deploy more adjusters to flood-damaged areas, and authorized insurance companies and independent flood adjusting firms to use adjusters who did not meet FEMA’s minimum flood certification requirements provided that they worked under the direction of seasoned adjusters until the company certified that they were trained. As part of its floodplain management strategy, FEMA policies encourage the elevation or removal of damaged properties from the floodplain. 
In addition to paying claims for flood damage, NFIP policies pay up to $30,000 to owners of substantially damaged or repetitive loss properties for the cost of taking mitigation actions such as elevation, floodproofing, relocation or demolition, in order to comply with state or local floodplain management laws or ordinances. The payments are made under the increased cost of compliance (ICC) coverage of the standard flood insurance policy. As a first step to making claims for this coverage, adjusters are required to file preliminary damage assessment forms with FEMA for properties that may be substantially damaged. Figure 12 shows renovations in process on a New Orleans house that is being elevated to mitigate against future flood damage using ICC coverage to pay some of the costs. As of April 26, 2006, adjusters had completed almost 50,000 preliminary damage assessment forms for properties flooded by Hurricane Katrina and a little more than 1,000 forms for properties flooded by Hurricane Rita. Over 40,000 of the forms for damage in the two storms were for properties located in Louisiana. Through May 2006, FEMA had made ICC payments of about $7 million on Hurricane Katrina and Rita claims. Anticipating a large number of ICC claims as a result of the 2005 hurricane season, FEMA increased the time frame for property owners to complete the mitigation actions from 2 years to 4 years after a state or community issued a substantial damage declaration. In an upcoming revision to the standard flood insurance policy, FEMA plans to make permanent the increase in time for property owners to complete work and receive ICC payments. In addition to approving new methods for expedited processing of some NFIP claims after Hurricane Katrina, FEMA also took new steps to guide communities’ rebuilding efforts. For the first time, FEMA issued advisory guidance on coastal flood elevations that communities can use in the reconstruction process until more detailed data become available. 
According to FEMA officials, this guidance—called advisory base flood elevations—was necessary because a risk assessment showed that base flood elevations in effect for coastal Louisiana and Mississippi did not reflect the true risk to the areas from flooding. According to a FEMA official, FEMA expects to have updated rate maps for coastal areas by early 2007 so that communities can begin the process of considering whether to adopt them. Accurate flood maps that identify the areas at high risk of flooding are the foundation of the NFIP, and the flood maps for some areas of the Gulf Coast affected by Hurricanes Katrina and Rita were out of date. The maps identify base flood elevation levels—the height at which there is a 1 percent chance of a flood occurring in a given year, also known as the 100-year flood. FEMA uses the 100-year flood as the standard for setting premium rates and requirements for the NFIP. Prior to Hurricanes Katrina and Rita, FEMA was conducting a coastal study of hurricane storm flooding as a part of its map modernization program. According to a FEMA official, the agency was about to issue several new preliminary flood insurance rate maps in the Gulf Coast region when the storms hit. However, the storm surges from Hurricanes Katrina and Rita far exceeded the base flood elevations in many areas of the Gulf Coast, raising questions about the validity of the base flood elevations and current flood insurance rate maps. In response, FEMA conducted risk assessments using the most current and accurate flood risk data available. The analyses incorporated storm data from the past 35 years, including data from Hurricanes Katrina and Rita, tide (water level) gauge data, and other engineering studies. The analyses showed that base flood elevations on the flood insurance rate maps in effect for coastal Louisiana and Mississippi did not reflect the true risk from flooding because the elevations were between 1 and 9 feet too low.
Also, the analyses showed that higher storm surges and larger waves can be expected to spread farther inland than previously estimated because of land subsidence and the loss of the protective coastal barrier over the past 10 to 20 years. On completion of the risk analyses, FEMA issued advisory base flood elevation maps for 15 parishes in Louisiana and 3 counties in Mississippi that took into account the more accurate and up-to-date flood hazard data. (See app. III for a list of the communities for which the advisories were issued and the status of the communities’ consideration of their adoption.) FEMA cannot require communities participating in the NFIP to use the advisory base flood elevations. According to FEMA, it issued the advisories to parishes and counties, and individual communities within those jurisdictions can decide whether, and to what extent, they will adopt the guidance. For example, the City of Gulfport, Mississippi, adopted the advisories in September 2006 to protect citizens from future floods but delayed the official effective date of the new elevations until November 1, 2006, to give residents wishing to rebuild under the less stringent elevation requirements in effect prior to the adoption of the advisories adequate time to secure building permits. The New Orleans city council approved FEMA’s new advisories but made exceptions for properties in the French Quarter, other national historic structures in the city, and properties listed with the Historic Districts Landmarks Commission. Lafourche Parish, Louisiana, rejected the advisory because the parish council considered some advisory map data to be wrong, determined that adopting the advisory would have a highly negative economic impact on homeowners, and noted that the advisory information was intended to be only advisory and preliminary. However, FEMA has provided incentives for individual homeowners and communities to rebuild using the advisory standards.
For example, FEMA requires that rebuilding projects it funds, through public assistance or mitigation grants, be built to advisory standards. Similarly, FEMA grants for repairing and rebuilding public infrastructure such as schools, libraries, and police stations will not be available to communities unless they rebuild to advisory base flood elevations. NFIP policyholders who live in communities with floodplain management standards that exceed the minimum standard are eligible for discounts on their premiums. ICC payments to NFIP claimants who take steps to reduce their risk from future flood damage will help cover the elevation of homes to the advisory base flood elevation if that standard is adopted by the community. FEMA has also warned communities that continued use of flood data on current flood insurance rate maps could result in residential and commercial buildings that will be vulnerable to flood damage because they will not be built high enough or have the structural integrity to resist flood forces that may be encountered in future large events. According to a FEMA official, the agency expects to have updated, preliminary flood insurance rate maps for the coastal parishes and counties in Louisiana and Mississippi by early 2007. However, the maps will become effective only after a formal appeals process and community adoption, a process that normally takes a minimum of 2 years to complete. Once the new flood insurance rate maps are adopted, they will supersede all advisory base flood elevations issued by FEMA. As in previous flood events, FEMA’s primary method of monitoring and overseeing claims adjustments and addressing concerns from claimants was its quality reinspection program. As of August 2006, FEMA’s program contractor had conducted quality assurance reinspections of 4,316 Hurricane Katrina and Rita claims. In addition, FEMA formed a special task force to reinspect an additional 1,696 claims that were adjusted using expedited processes.
Because FEMA did not reinspect a random sample of all claims closed, as we recommended in October 2005, the results of the reinspections cannot be projected to a population larger than the 4,316 claims reinspected. As a result, FEMA is unable to determine the overall accuracy of the claims closed. FEMA’s Deputy Director of the Mitigation Division said that FEMA agrees with our recommendation and plans to do quality reinspections in future flood events based on a random sample of the population of all claims. Neither FEMA nor its program contractor analyzed the overall results of the 4,316 quality reinspections for Hurricanes Katrina and Rita to identify the total number of payment errors and the magnitude of those errors. FEMA did not have a requirement that the overall results of the reinspections for flood events be analyzed. In our review of a statistically valid sample of 740 of the 4,316 reinspection reports, claims payment errors were identified in about 14 percent of the reinspections of Hurricane Katrina claims adjusted using regular processes, in about 1 percent of the reinspections of Hurricane Katrina and Rita claims adjusted using expedited methods, and in about 2 percent of the reinspections of Hurricane Rita claims adjusted using regular processes. Because, in the past, FEMA has had neither an appropriate sampling methodology nor a requirement to analyze the overall results of claims adjustments after every flood event, we do not know how the error rates we identified compare with adjusting errors in reinspection reports for other, smaller flood events. To determine whether claims were correctly adjusted by the large cadre of adjusters deployed after Hurricanes Katrina and Rita, FEMA’s program contractor conducted quality assurance reinspections of 4,316 Hurricane Katrina and Rita claims from January to September 2006.
The number of reinspections done was slightly smaller than the goal established by FEMA for the percentage of reinspections to be completed. However, FEMA officials told us in a briefing at the conclusion of our audit work that 5,198 reinspections had been completed. FEMA’s director of NFIP claims said that the program contractor was to reinspect about 3 percent of all claims, about the same percentage of reinspections done after other flood events. In addition, the contractor was to review at least 10 percent of the expedited claims done by each insurance company that decided to use expedited processing procedures for some claims. Reinspection reports completed as of September 2006 represented about 2.5 percent of all Hurricane Katrina and Rita claims that were closed by May 2006. Reinspection reports were completed for just over 10 percent of the 17,200 claims closed using expedited processes. The quality assurance reinspections are a standard oversight procedure after all flood events and are generally done by general adjusters who, in addition, are responsible for estimating damage from flood events, coordinating claims adjustment activities at disaster locations, and conducting adjuster training. When we did audit work for our October 2005 report, nine general adjusters were employed by FEMA’s program contractor. Four general adjusters were on board after Hurricanes Katrina and Rita, according to the general adjuster in charge. According to FEMA, one reason for the loss of general adjusters was that several left to work as independent adjusters or for adjusting firms to earn higher pay adjusting claims for Hurricanes Katrina and Rita. To supplement the general adjuster workforce, FEMA’s program contractor hired 22 temporary employees. 
In addition to overseeing the regular quality reinspection program of 4,316 reinspections of Hurricane Katrina and Rita claims adjusted using regular processes and expedited methods, FEMA formed a special task force of 15 adjusters and supervisors to review and reinspect additional claims closed using expedited methods. FEMA officials said that they took this action because the expedited methods had not been used to adjust claims in prior flood events, so they wanted to have additional information on the accuracy of payments made. FEMA did not adopt our October 2005 recommendation that it select the claims to be reinspected in its quality reinspection program using a random sample of the population of all claims. Instead, according to the general adjuster in charge of Hurricanes Katrina and Rita, selection of claims to reinspect was based upon judgmental criteria including, among other items, the size and location of the loss and the complexity of the claim. The general adjusters used their judgment to select what they thought were the more challenging claims adjustments for reinspection, under the premise that if difficult adjustments are done accurately, more routine adjustments should be handled properly as well. The process the general adjuster described is a nonprobability sampling process rather than random sampling. In nonprobability sampling, staff select a sample based on their knowledge of the population’s characteristics. The major limitation of this type of sampling is that the results cannot be generalized to a larger population, because there is no way to establish, by defensible evidence, how representative the sample is. A nonprobability sample is therefore not appropriate to use to generalize about the population from which the sample is taken. After discussion, FEMA agreed with our recommendation that it implement an approach for random sampling.
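The contrast between judgmental and probability sampling can be illustrated with a minimal sketch of drawing a simple random sample from a claims population. The claim identifiers, population size, and fixed seed below are invented for illustration; an actual sampling design would also need to address strata (e.g., regular versus expedited claims, storm, and state) and sample-size targets.

```python
import random

# Minimal sketch of a simple random (probability) sample of claims for
# reinspection. Every claim has an equal chance of selection, which is
# what allows reinspection findings to be projected to the population.
# The claim IDs, population size, and seed are illustrative only.

def draw_reinspection_sample(claim_ids, sample_size, seed=2005):
    """Draw a simple random sample, without replacement, from claim_ids."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible and auditable
    return rng.sample(claim_ids, sample_size)

claims = [f"claim-{i:06d}" for i in range(170_000)]  # hypothetical claim IDs
sample = draw_reinspection_sample(claims, sample_size=4_316)
print(len(sample))  # 4,316 distinct claims, each drawn with equal probability
```

Under judgmental selection, by contrast, the inclusion probabilities are unknown and unequal, which is precisely why results from such a sample cannot be defensibly projected to the full population.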
The Deputy Director of FEMA’s Mitigation Division said that FEMA plans to do quality reinspections in future flood events based on a random sample of the universe of all claims. The official advised that FEMA was not able to implement the October 2005 recommendation in the aftermath of Hurricanes Katrina and Rita because other priorities to meet the needs of claimants and communities took precedence. Because judgmental criteria were used in selecting the reinspections to be done, the results of FEMA’s NFIP quality reinspection program for Hurricanes Katrina and Rita cannot be projected to a universe larger than the claims adjustments sampled. As a result, FEMA is unable to determine the overall accuracy of claims settled for these flood events—an action that is necessary to meet our internal control standard that FEMA have reasonable assurance that program objectives are being achieved and its operations are effective and efficient. Of FEMA’s 4,316 claims reinspections, 2,565 (about 59 percent) were for claims adjustments done using regular processes that included on-site visits by a certified flood adjuster to assess damages, while 1,751 (about 41 percent) were reinspections of claims adjusted using the expedited methods that FEMA authorized to settle some claims at policy limits without site visits by flood adjusters. FEMA’s program contractor did not analyze the overall results of its quality reinspection program for Hurricanes Katrina and Rita, another action that is necessary to meet our internal control standard that FEMA have reasonable assurance that program objectives are being achieved and its operations are effective and efficient. FEMA’s director of NFIP claims said that FEMA does not generally require the program contractor to prepare and analyze reports of the overall results of quality reinspections after flood events.
According to officials of FEMA and its program contractor, in addition to preparing written reports of each quality assurance reinspection, general adjusters discuss the results of the reinspections they perform with the insurance company officials that process the claims. If a general adjuster determines that an expense was allowed that should not have been covered, the company is to reimburse the NFIP. If a general adjuster finds that the private sector adjuster missed a covered expense in the original adjustment, the general adjuster will take steps to provide additional payment to the policyholder. According to officials of FEMA and its program contractor, quality assurance reinspections are forwarded from general adjusters to the program contractor, where results of reinspections are to be aggregated in a reinspection database and the resolution of overpayments and underpayments is tracked. According to the FEMA director of NFIP claims, a special task force of adjusters and supervisors reinspected 1,696 expedited claims from Hurricane Katrina in addition to the reinspections conducted in the quality reinspection program and found a total of 81 erroneous payments (about 5 percent). FEMA will take action to recover overpayments of claims where it is appropriate to do so. The official also stated that a report on the results of the task force review was being prepared, but it was not completed during the course of our review. We did not analyze data from the special task force as part of our review of a sample of quality reinspection reports. Because the NFIP’s quality reinspection program does not rely on a statistically valid sampling methodology, we, like FEMA, are unable to project the results of our reviews of 740 reinspection reports to the population of all claims closed. However, because our sample is a probability sample of all 4,316 reinspection reports, we are able to project our estimates to this population of claims reinspections.
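As a rough illustration of what projecting from a probability sample involves, the following sketch computes a point estimate and an approximate 95 percent confidence interval for an error rate, using a normal approximation with a finite-population correction. The counts mirror figures discussed in this report (44 payment errors among 320 sampled reinspection reports, drawn from the population of 4,316 reinspections), but the calculation is a deliberately simplified, hypothetical one; an actual estimate would account for the stratified design of the sample.

```python
import math

# Sketch of a point estimate and approximate 95% confidence interval for
# an error rate estimated from a probability sample of a finite
# population of reinspection reports. Simplified: a real analysis would
# reflect the sample's strata (storm and adjustment method).

def error_rate_ci(errors, sample_n, population_n, z=1.96):
    p = errors / sample_n
    # Finite-population correction: the sample is a sizable fraction of
    # the fixed population of reinspection reports, which tightens the CI.
    fpc = (population_n - sample_n) / (population_n - 1)
    se = math.sqrt(p * (1 - p) / sample_n * fpc)
    return p, (p - z * se, p + z * se)

p, (lo, hi) = error_rate_ci(errors=44, sample_n=320, population_n=4_316)
print(f"point estimate {p:.1%}, 95% CI roughly [{lo:.1%}, {hi:.1%}]")
```

No comparable projection is possible from FEMA's judgmentally selected reinspections, because the inclusion probabilities needed for the standard-error calculation are unknown.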
Our review of 320 of the quality reinspection reports done for Hurricane Katrina regular process claims found that reinspectors identified problems in 119 instances (about 37 percent). In most instances where quality reinspections identified problems with the original claims adjustments, reinspectors determined that the claims payment amounts were correct but that files did not meet NFIP standards (e.g., they did not include all supporting documentation). However, 44 of the 320 quality reinspection reports we reviewed for Hurricane Katrina claims adjustments that used regular processes (about 14 percent) identified claims overpayments or underpayments. Payment errors identified in our review included 8 underpayments that ranged from $543 to more than $131,000 and 36 overpayments that ranged from $86 to $65,000. For the expedited process reinspection reports, we identified problems in about 12 percent (39 of 320) of the reports we reviewed. However, reinspectors identified erroneous overpayments in only 4 of these instances (about 1 percent). These payment errors were all overpayments that ranged from $40,000 to $80,000. On the basis of our review of 100 Hurricane Rita reinspections, we estimate that about 2 percent of the reinspections identified erroneous payments. These payment errors were between $9,000 and $10,000. Because, in the past, FEMA has had neither an appropriate sampling methodology nor a requirement for an overall analysis of claims adjustments done after every flood event, we do not know how the error rates we identified compare with adjusting errors identified in reinspections of claims from other, smaller flood events. See appendix IV for the complete results of our review of 740 quality reinspection reports for claims adjustments after Hurricanes Katrina and Rita. Since we last reported in October 2005, FEMA has moved forward on implementation of the Flood Insurance Reform Act of 2004. However, there is still progress to be made.
Among other things, the act mandated FEMA to (1) develop supplemental materials for explaining NFIP coverage and the claims process to policyholders when they purchase and renew policies; (2) establish, by regulation, an appeals process for claimants; and (3) establish minimum training and education requirements for flood insurance agents in cooperation with the insurance industry, state insurance regulators, and other interested parties and publish the requirements in the Federal Register. The statutory deadline for the three mandates was December 30, 2004. The act also authorized FEMA to create a pilot program to provide financial assistance to states and communities to carry out activities including elevating and demolishing structures that have suffered severe and repeated damage from flooding. The act authorized the use of funds from the National Flood Insurance Fund for the pilot program for fiscal years 2005 through 2009. FEMA has fully implemented the first two requirements to establish notifications on coverage to policyholders and an appeals process for claimants. With regard to the training and education requirements, FEMA published training and education requirements in the Federal Register, stating that it intended to implement the standards through existing state licensing schemes for insurance agents. Though FEMA has taken a number of actions to improve the training and education of agents that sell NFIP policies, only 15 states had implemented mandatory training and education requirements as of October 2006, and, as we reported in October 2005, FEMA has not established how or when states are to begin imposing education and training requirements. Finally, FEMA has not created a pilot program to mitigate damage to severe repetitive loss properties. For purposes of explaining coverage and the claims process to policyholders, the Flood Insurance Reform Act of 2004 required FEMA to develop three types of informational materials.
The required materials are (1) supplemental forms explaining in simple terms the exact coverage being purchased; (2) an acknowledgment form that the policyholder received the standard flood insurance policy and any supplemental explanatory forms, as well as an opportunity to purchase coverage for personal property; and (3) a flood insurance claims handbook describing the process for filing and appealing claims. FEMA officials said that acknowledgment forms and new insurance program forms to explain coverage to policyholders when they purchase and renew their insurance were final as of September 2005. FEMA posted a flood insurance claims handbook, dated July 2005, on its Web site in September 2005. The handbook contains information on anticipating, filing, and appealing a claim. The Director of the FEMA Mitigation Division, which oversees the NFIP, said that FEMA distributed the NFIP Summary of Coverage and Flood Insurance Claims Handbook to help policyholders affected by Hurricane Katrina through the claims process. The materials were available in disaster recovery and flood response offices and were distributed in town meetings. In addition, according to a representative of FEMA’s program contractor on-site in Hammond, Louisiana, some flood adjusters provided copies of the documents to claimants to help to explain the processes for filing claims and resolve any disagreements about the claims settlement. An appeals process that FEMA officials described as informal was in place for claimants after Hurricane Katrina and was described in the Flood Insurance Claims Handbook that FEMA posted on its Web site in September 2005. As we have stated in this report, 13 appeals were filed by claimants related to settlements of their NFIP claims as a result of Hurricane Katrina damage, and no appeals were filed for damage resulting from Hurricane Rita, as of April 2006. 
To establish a formal appeals process, FEMA published an interim rule in the Federal Register that became effective in June 2006. Comments made in the Additional Views section of the Senate report on the Flood Insurance Reform and Modernization Act of 2006, a bill pending in Congress as of November 2006, outlined concerns that the rule was not specific on the structure of the appeals process. After a public comment period, a final rule was published on October 13, 2006. The final rule included more specific elements on the structure of the appeals process than were contained in the interim rule. For example, the final rule stated that FEMA will provide policyholders with an acknowledgment of receipt of an appeal, which will also provide the policyholder with a point of contact within FEMA to get information on the status of the appeal, and that FEMA is subject to a 90-day deadline to resolve appeals and issue a written appeal decision to the policyholder and insurer. The final regulation also provided examples of the types of documentation that policyholders should include in their appeals. With respect to the requirement that FEMA establish minimum education and training requirements for agents who sell NFIP policies, the Flood Insurance Reform Act of 2004 requires FEMA, in cooperation with the insurance industry, state insurance regulators, and other interested parties, to establish minimum training and education requirements for all insurance agents who sell flood insurance policies and to publish the requirements in the Federal Register. On September 1, 2005, FEMA published a Federal Register notice in response to this requirement. In the notice, FEMA stated that rather than establish separate and perhaps duplicative requirements from those that may already be in place in the states, it had chosen to work with the states to implement NFIP requirements through already established state licensing schemes for insurance agents.
To that end, FEMA provided suggested language for state legislation to require a prelicensing demonstration of knowledge of flood insurance and a onetime, 3-hour continuing education course requirement for existing licensees. FEMA further provided a course outline for flood insurance agents, which consisted of eight sections: NFIP Overview; Flood Maps and Zone Determinations; Policies and Products Available; General Coverage Rules; Building Ratings; Claims Handling Process; Requirements of the Flood Insurance Reform Act of 2004; and Agent Resources. FEMA also offered incentives to agents who completed NFIP training to encourage adoption of the minimum standards. For fiscal years 2006 and 2007, FEMA adopted performance measures for meeting “the objective of the mandate that agents selling flood insurance are trained and provide good information to consumers.” The performance measures center on FEMA activities to encourage agent training activities, but do not establish milestones for states to implement the minimum training requirements. Specifically, the performance measures are to increase by 7 percent over the previous year the number of insurance agents who complete the NFIP Bureau’s flood insurance training, either live or online; submit a new online training module to states for continuing education credit approval, with approval by 40 states by fiscal year 2008; encourage write-your-own companies to do their part to ensure their agents are sufficiently trained; and foster state adoption of mandatory agent training requirements through continued communication with departments of insurance, offering technical assistance, and so forth.
In working toward the final performance measure, FEMA held meetings and conferences with state legislators and insurance regulators, as well as insurance company officials, and worked with the National Association of Insurance Commissioners to develop a model bulletin that state insurance commissioners may issue to implement the minimum training requirements. As of October 2006, only 15 states had established minimum training and education requirements for insurance agents that sell NFIP policies. Two states had issued advisory notices, and 1 state had established standards for a continuing education course in flood insurance but had not made the course mandatory. As we reported in October 2005, FEMA has not developed milestones for state adoption of minimum training and education requirements. See appendix V for a listing of the state actions taken. As of October 2006, FEMA had not implemented the pilot program authorized by the act to help reduce the inventory of NFIP properties that have sustained repeated severe flood losses. As noted in the report of the Senate Committee on Banking, Housing, and Urban Affairs accompanying the legislation, an important purpose of the act is to address the problem of severe repetitive loss properties, which are properties that have been flooded numerous times and are thus a financial drain on the NFIP. The act authorizes financial assistance to states and communities that decide to participate in the pilot program to carry out mitigation activities that reduce flood damages to severe repetitive loss properties. The act authorizes the transfer of up to $40 million per fiscal year for fiscal years 2005 through 2009 from the NFIP Fund for the pilot program, and funds for the program were appropriated in fiscal year 2006. States and communities may use funds under this program for the mitigation of severe repetitive loss properties. 
Mitigation actions may include the purchase, relocation, demolition, elevation, or flood-proofing of structures, as well as minor physical localized flood control projects. Funds may also be used by states and communities to purchase severe repetitive loss properties. FEMA officials noted that they had made progress in developing the program guidance and implementing regulations for the pilot program and plan to combine the fiscal years 2006 and 2007 appropriations and begin funding projects under the pilot program in fiscal year 2007. By the measures of number of claims filed, amount of claims paid, losses per claim, and debt incurred, Hurricane Katrina was an unprecedented event for the NFIP that created challenges to process a record number of claims and address needs of claimants and communities that experienced grave losses. FEMA approved new methods of adjusting some Hurricane Katrina and Rita claims, issued advisory opinions to aid in rebuilding after these flood events, and took other actions to address the needs of NFIP claimants and communities. However, the importance of FEMA taking additional actions to enhance the value of its monitoring and oversight processes is also illustrated in the aftermath of Hurricanes Katrina and Rita. Not only did these flood events involve billions more dollars and hundreds of thousands more claims for the NFIP than any previous flood event since the program’s inception, but they also involved new claims-processing methods that, if proven to result in accurate claims adjustments, could lower NFIP payments for claims adjustments as compared to fees paid for the more time-consuming room-by-room, line-item-by-line-item visual assessments of flood damage that the NFIP had exclusively relied upon for all prior flood events.
FEMA’s current use of quality assurance reinspections to discuss individual results and specific adjustment errors with insurance company officials and seek reimbursements for overpayments is too limited to meet our internal control standard that it have reasonable assurance that program objectives are being achieved and its operations are effective and efficient. For future flood events, when FEMA conducts its quality assurance reinspection program for claims adjustments using the statistically valid sampling methodology we previously recommended, the agency will be well positioned to broaden the scope of its analyses to determine the overall results of claims adjustments done for each future flood event, including the number and type of claims adjustment errors that occurred. FEMA made progress in implementing provisions of the Flood Insurance Reform Act of 2004. However, our recommendation that FEMA establish milestones for meeting provisions of the act remains open. In October 2005, we recommended that FEMA develop a documented plan with milestones for ensuring that agents that sell NFIP policies meet minimum training and education requirements. FEMA has taken a number of actions, including outreach to the states, to encourage the implementation of minimum training standards. However, given the somewhat slow progress among states to adopt mandatory training requirements, we continue to think that FEMA should elaborate on the state implementation performance measure by developing a documented plan with milestones for state adoption of minimum training and education requirements and our recommendation related to the minimum training and education requirements remains open. 
To strengthen and improve FEMA’s monitoring and oversight of the NFIP, including ensuring that claims payments are accurately determined, we are recommending that for future flood events when FEMA implements our prior recommendation to do quality assurance reinspections of a statistically valid sample of claims adjustments, the Secretary of the Department of Homeland Security also direct the Under Secretary of Homeland Security, FEMA, to take the following action: Analyze the overall results of claims adjustments done for each future flood event to determine the number and type of claims adjustment errors made and to help determine whether new, cost-efficient methods for adjusting claims that were introduced after Hurricane Katrina are feasible to use after other flood events. On December 8, 2006, DHS provided written comments on a draft of this report. DHS agreed with our recommendation to improve its quality reinspection program and stated that it was revising its guidance accordingly and would use the recommended sampling and reporting procedures in future flood events. DHS reiterated a comment made in FEMA’s review of our October 2005 report that we did not review all of the controls and processes that FEMA has in place to provide oversight for the NFIP. Most of the additional oversight and management processes and controls that FEMA has in place are for financial management—an area not included in the scope of our work. Our work focused on program implementation and oversight in the aftermath of Hurricanes Katrina and Rita. During our review, FEMA managers described the quality assurance claims reinspection program as the primary method for overseeing the accuracy of claims adjustments for these flood events. As we have noted in this report, we have work under way to examine the cost of operating the NFIP, including fees paid for the services of private insurance companies and claims adjusters.
For that report, to be issued in 2007, we plan to examine the NFIP’s financial management and controls. DHS also provided information on how it determines the number of claims to be reinspected in the NFIP’s quality reinspection program and additional information on its implementation of the requirement of the Flood Insurance Reform Act of 2004 to establish minimum training and education requirements for all insurance agents who sell flood insurance policies and to publish the requirements in the Federal Register. We are sending copies of this report to the Secretary of the Department of Homeland Security, the Director of the Federal Emergency Management Agency and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact William Jenkins at (202) 512-8757 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To describe the impact of Hurricanes Katrina and Rita on the National Flood Insurance Program (NFIP) and the extent of the losses paid by location and property type, we reviewed congressional actions to increase the NFIP borrowing authority, and we interviewed the Director, Deputy Director, and other officials of the Federal Emergency Management Agency’s (FEMA) Mitigation Division on the actions they took to estimate the amount of funds they needed to borrow from the U.S. Treasury to cover claims from Hurricanes Katrina and Rita and other 2006 flood events. We compared claims payments for losses from Hurricanes Katrina and Rita to payments for losses from past flood events. We also analyzed statistical data from the NFIP data system on claims payments for Hurricanes Katrina and Rita. 
We analyzed the data on losses paid by state, for principal and nonprincipal residential properties, within and outside of special flood hazard areas, and by type of coverage (i.e., building, contents, or both building and contents). We updated our reliability assessment of the statistical database reported in October 2005 by interviewing database managers to discuss any system changes that would have an impact on data reliability and by replicating statistical analyses by the NFIP to determine their accuracy. We determined that the database was sufficiently reliable for our reporting purposes. We did our analyses and reliability testing of FEMA statistical data that were current through May 31, 2006, when FEMA reported that over 95 percent of Hurricane Katrina and Rita claims were closed. To describe the challenges FEMA and its private sector partners faced and the results of their efforts to process flood claims resulting from Hurricanes Katrina and Rita and address the needs of NFIP claimants and communities, we interviewed headquarters and field officials of FEMA and its program contractor. We also conducted semistructured interviews with judgmentally selected insurance industry officials involved in the recovery effort and visited areas impacted by Hurricane Katrina in New Orleans, Louisiana, and Bayou La Batre, Alabama. Interviewees included the owner of a firm that specializes in insurance claims adjustments for catastrophes, representatives of the three insurance companies that closed the largest number of Hurricane Katrina and Rita NFIP claims, and a representative of an insurance company that was not a major NFIP insurer for the Gulf Coast claimants but did process some claims. Their views are not representative of the universe of all insurance industry officials involved in the flood recovery effort.
We also analyzed statistical data on the number of appeals filed by claimants and requests made for reinspections by FEMA’s program contractor to assist claimants and insurance companies in reaching resolutions on disputes. We reviewed documentation and talked with officials about new, expedited methods of claims processing FEMA approved. We examined preliminary data on claims that may be filed for coverage under the standard flood insurance policy for up to $30,000 for some property owners to take actions to reduce their risk of future flood damage. Finally, we examined documentation and interviewed FEMA officials on the status of efforts to provide guidance to communities and property owners to assist in recovery and rebuilding efforts and reviewed documentation on the status of communities’ actions to adopt FEMA’s advisory base flood elevation standards. To assess FEMA’s role in monitoring and overseeing the NFIP and the results of that oversight, we interviewed officials of FEMA and its program contractor who were involved in the quality assurance reinspections of claim adjustments done for Hurricanes Katrina and Rita and documented the number of reinspections performed and the methodology used to select claims for reinspection. We reviewed documentation of FEMA’s procedures for monitoring and overseeing claims adjustments. We observed a disaster analyst for FEMA’s program contractor performing several quality assurance reinspections in Bayou La Batre. We followed up on the status of our prior recommendation for improvements in the quality assurance reinspection program and discussed actions taken or planned to implement it. We selected a statistically valid sample of 740 reinspection reports done for Hurricanes Katrina and Rita to review to determine, among other things, errors that were identified in the claims adjustments. 
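The sample selection described above can be sketched as a stratified random draw. The sample allocations (320 Katrina regular-process, 320 Katrina expedited, and 100 Rita reports) match this review, but the report identifiers and stratum population sizes below are hypothetical:

```python
import random

def stratified_sample(strata, seed=0):
    """Draw a simple random sample of the given size from each stratum.
    strata maps a stratum name to (list_of_report_ids, sample_size)."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {name: rng.sample(ids, k) for name, (ids, k) in strata.items()}

# Hypothetical report IDs; stratum sizes sum to the 4,316 total reinspections.
strata = {
    "katrina_regular":   ([f"KR-{i:05d}" for i in range(2000)], 320),
    "katrina_expedited": ([f"KE-{i:05d}" for i in range(1500)], 320),
    "rita":              ([f"RT-{i:05d}" for i in range(816)], 100),
}

sample = stratified_sample(strata)
print(sum(len(ids) for ids in sample.values()))  # 740 reports selected
```

Sampling within strata in this way is what allows separate error-rate estimates for regular-process, expedited, and Rita reinspections.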
Using a data collection instrument, we reviewed the results of these randomly selected reinspection reports of Hurricane Katrina and Rita claims to determine whether reinspectors identified errors, including overpayments, underpayments, or adjustments that did not meet NFIP standards (i.e., did not contain appropriate documentation). Table 3 shows the number of quality assurance reinspection reports we examined of claims adjustments done using regular processes, including site visits by flood adjusters, and using the expedited methods FEMA approved for some Hurricane Katrina and Rita claims. To assess the status of FEMA’s efforts to implement provisions of the Flood Insurance Reform Act of 2004 after Hurricanes Katrina and Rita, we interviewed officials and examined documentation of the actions FEMA took. We also analyzed FEMA’s actions to determine whether they met the legal requirements of the act. We conducted our work in accordance with generally accepted government auditing standards from December 2005 through November 2006. At the time of our review, 11 of the 15 Louisiana parishes where FEMA issued advisory flood elevation guidance had adopted FEMA’s advisories. Two parishes, St. John the Baptist and Lafourche, had decided not to adopt the advisories; and two others, Plaquemines and St. Bernard, were considering them. The Lafourche Parish council rejected the advisory because it considered some advisory map data to be wrong and determined that adopting the advisory would have a high negative economic impact on homeowners. The council also noted that the advisory information was intended to be only advisory and preliminary. Fourteen cities within the three Mississippi counties where FEMA issued advisory flood elevation guidance had taken some new action to guide rebuilding efforts.

Department of Insurance required agents authorized to write homeowners or personal lines of insurance to complete a 2-hour continuing education course on flood insurance and the NFIP.
Commissioner of Insurance required agents who sell flood insurance to comply with the minimum training and education requirements and demonstrate that compliance upon request of the Commissioner.
Commissioner of Insurance required agents who sell flood insurance to complete a onetime, 3-hour course related to the NFIP, beginning with license renewals on January 1, 2007.
Office of Insurance issued an advisory opinion stating the requirement that agents selling NFIP policies complete a onetime, 3-hour course related to the NFIP.
Legislature required a onetime, 3-hour course on flood insurance to be completed by agents authorized to write property and casualty lines of insurance for initial licensure and/or license renewal.
Superintendent of Insurance directed licensed insurance agents who sell NFIP policies to comply with the minimum training and education requirements and demonstrate that compliance upon request of the bureau.
Department of Insurance required property casualty insurance producers who sell flood insurance to complete at least two of their required continuing education credits in flood insurance by September 30, 2006, regardless of when their licenses renew, and each renewal period thereafter.
Commissioner of Insurance required agents licensed after April 4, 1983, who sell flood insurance to complete 3 hours of continuing education on flood insurance by December 31, 2006.
Department of Insurance required agents who sell flood insurance to complete at least 3 hours of NFIP-related training by December 31, 2009.
Department of Insurance required agents who sell flood insurance to complete a onetime, 3-hour course on flood insurance beginning with license renewals on January 1, 2007.
Commissioner of Insurance directed licensed insurance agents who sell NFIP policies to complete a onetime, 3-hour course on flood insurance.
Commissioner of Insurance sent letters to insurance agents who met their 2005 continuing education requirements that encouraged them to take a continuing education course on flood insurance.
Insurance Department issued a notice advising insurance companies and agents of the training and education requirements and encouraging agents to attend NFIP flood insurance program workshops.
Department of Business Regulation directed licensed insurance agents who sell NFIP policies to comply with the minimum training and education requirements and demonstrate that compliance upon request of the department.
Director of the Division of Insurance directed licensed insurance agents who sell NFIP policies to comply with the minimum training and education requirements and demonstrate that compliance upon request of the division.
Department of Insurance adopted new sections of its Insurance Code establishing standards for a department-certified continuing education course on the NFIP and flood insurance.
Commissioner of Insurance directed licensed insurance agents who sell NFIP policies to comply with the minimum training and education requirements and demonstrate that compliance upon request of the department.
Commissioner of Insurance directed agents who sell flood insurance policies to complete a onetime, 3-hour course on flood insurance.

Christopher Keisling, Assistant Director; Richard Ascarate, John Bagnulo, Amy Bernstein, Christine Davis, Dewi Djunaidy, Wilfred Holloway, Tracey King, Deborah Knorr, Jan Montgomery, Mark Ramage, and Jesus Ramoz made significant contributions to this report.
|
In August and September 2005, Hurricanes Katrina and Rita caused unprecedented destruction to property along the Gulf Coast, resulting in billions of dollars of damage claims to the National Flood Insurance Program (NFIP). This report, which we initiated under the authority of the Comptroller General, examines (1) the impact of Hurricanes Katrina and Rita on the NFIP and paid losses by location and property type; (2) the challenges the Federal Emergency Management Agency (FEMA) and others faced in addressing the needs of NFIP claimants and communities; (3) FEMA's methods of monitoring and overseeing claims adjustments; and (4) FEMA's efforts to meet the requirements of the Flood Insurance Reform Act of 2004 to establish policyholder coverage notifications, an appeals process for claimants, and education and training requirements for agents. To conduct these assessments, GAO interviewed FEMA and insurance officials, analyzed claims data, and examined a sample of reports done on the accuracy of claims adjustments. NFIP paid an unprecedented dollar amount for a record number of claims from Hurricanes Katrina and Rita. Congress increased NFIP's borrowing authority with the U.S. Treasury from a pre-Katrina level of $1.5 billion to about $20.8 billion in March 2006, but FEMA will probably not be able to repay this debt on annual premium revenues of about $2 billion. As of May 2006, NFIP had paid approximately 162,000 flood damage claims from Hurricane Katrina and another 9,000 claims from Hurricane Rita. Most paid claims were for primary residences where flood insurance was generally required. FEMA and its private sector partners faced several challenges in processing a record number of flood claims from Hurricanes Katrina and Rita, among them were (1) reaching insured properties in a timely way because of blocked roadways and flood water contamination and (2) identifying badly damaged homes to be inspected in locations where street signs had washed away. 
Despite these and other obstacles, FEMA reported that over 95 percent of Gulf Coast claims had been closed by May 2006, a time frame comparable to those for closing claims in other, smaller recent floods. To help keep pace with the volume of claims filed, FEMA approved expedited methods for claims processing that were unique to Hurricanes Katrina and Rita. To provide oversight of the claims adjustment process, FEMA's program contractor did quality assurance reinspections of Hurricane Katrina and Rita claims adjustments. FEMA did not adopt our October 2005 recommendation that it select the claims to be reinspected from a random sample of the universe of all closed claims; thus, the results of the reinspections cannot be projected to a universe larger than the 4,316 claims adjustments that were reinspected. FEMA agrees with our prior recommendation and plans to do quality reinspections in future flood events based on a random sample of all claims. FEMA did not analyze the overall results of the quality reinspections for Hurricanes Katrina and Rita. FEMA has made progress but has not fully implemented the NFIP program changes mandated by the Flood Insurance Reform Act. For example, 15 states had adopted minimum education and training requirements for insurance agents who sell NFIP policies, as of October 2006.
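The repayment concern summarized above can be illustrated with back-of-the-envelope arithmetic (a deliberately rough sketch: it ignores interest and the fact that premiums must also fund operations and future claims):

```python
borrowed = 20.8e9        # borrowing authority with the U.S. Treasury, March 2006
annual_premiums = 2.0e9  # approximate annual NFIP premium revenue

# Even devoting every premium dollar to the debt would take about a decade.
years = borrowed / annual_premiums
print(f"{years:.1f} years of total premium revenue")
```

This is why the report concludes FEMA will probably not be able to repay the borrowing from premium revenue.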
|
Both DOE and DOD have established offices; designated staff; and promulgated policies, manuals, and guides to provide a framework for the OUO and FOUO programs. However, based on our assessment of the policies governing both DOE’s and DOD’s programs, the policies intended to assure that unclassified but sensitive information is appropriately identified and marked lack sufficient clarity in important areas, which could allow for inconsistencies and errors. DOE policy clearly identifies the office responsible for the OUO program and establishes a mechanism to mark the FOIA exemption used as the basis for the OUO designation on a document. However, our analysis of DOD’s FOUO policies shows that it is unclear which DOD office is responsible for the FOUO program, and whether personnel designating a document as FOUO should note the FOIA exemption used as the basis for the designation on the document. Also, both DOE’s and DOD’s policies are unclear regarding at what point a document should be marked as OUO or FOUO, and what would be an inappropriate use of the OUO or FOUO designation. In our view, this lack of clarity exists in both DOE and DOD because the agencies have put greater emphasis on managing classified information, which is more sensitive than OUO or FOUO information. DOE’s OUO program was created in 2003, and DOD’s FOUO program has been in existence since 1968. Both programs use the exemptions in FOIA for designating information in a document as OUO or FOUO. Table 1 outlines these exemptions. The Federal Managers’ Financial Integrity Act of 1982 states that agencies must establish internal administrative controls in accordance with the standards prescribed by the Comptroller General. The Comptroller General published such standards in Standards for Internal Control in the Federal Government, which sets out management control standards for all aspects of an agency’s operation.
These standards are intended to provide reasonable assurance of meeting agency objectives, and should be recognized as an integral part of each system that management uses to regulate and guide its operations. One of the standards of internal control—internal control activities—states that appropriate policies, procedures, techniques, and mechanisms should exist with respect to each of the agency’s activities and are an integral part of an agency’s planning, implementing, and reviewing. DOE’s Office of Security issued an order, a manual, and a guide in April 2003 to detail the requirements and responsibilities for DOE’s OUO program and to provide instructions for identifying, marking, and protecting OUO information. According to DOE officials, the agency issued the order, manual, and guide to provide guidance on how and when to identify information as OUO and eliminate various additional markings, such as Patent Caution or Business Sensitive, for which there was no law, regulation, or DOE directive to inform staff how such documents should be protected. The overall goal of the order was to establish a policy consistent with criteria established in FOIA. DOE’s order established the OUO program and laid out, in general terms, how sensitive information should be identified and marked, and who is responsible for doing so. The guide and the manual supplement the order. The guide provides more detailed information on the eight applicable FOIA exemptions to help staff decide whether exemption(s) may apply, which exemption(s) may apply, or both. The manual provides specific instructions for managing OUO information, such as mandatory procedures and processes for properly identifying and marking this information. 
For example, the employee marking a document is required to place on the front page of the document an OUO stamp that has a space for the employee to identify which FOIA exemption is believed to apply; the employee’s name and organization; the date; and, if applicable, any guidance the employee may have used in making this determination. According to one senior DOE official, requiring the employee to cite a reason why a document is designated as OUO is one of the purposes of the stamp, and one means by which DOE’s Office of Classification encourages practices consistent with the order, guide, and manual throughout DOE. Figure 1 shows the DOE OUO stamp. The current DOD regulations are unclear regarding which DOD office controls the FOUO program. Although responsibility for the FOUO program was shifted from the Director for Administration and Management to the Office of the Assistant Secretary of Defense, Command, Control, Communications, and Intelligence (now the Under Secretary of Defense, Intelligence) in October 1998, this shift is not reflected in current regulations. Guidance for DOD’s FOUO program continues to be included in regulations issued by both offices. As a result, there is currently a lack of clarity regarding which DOD office has primary responsibility for the FOUO program. According to a DOD official, this lack of clarity causes personnel who have FOUO questions to contact the wrong office. The direction provided in Standards for Internal Control in the Federal Government states that an agency’s organizational structure should clearly define key areas of authority and responsibility. A DOD official said that the department began coordinating a revised Information Security regulation covering the FOUO program at the end of January 2006. The new regulation will reflect the change in responsibilities and place greater emphasis on the management of the FOUO program. 
DOD currently has two regulations, issued by each of the offices described above, containing similar guidance that addresses how unclassified but sensitive information should be identified, marked, handled, and stored. Once information in a document has been identified as FOUO, it is to be marked For Official Use Only. However, unlike DOE, DOD has no departmentwide requirement to indicate which FOIA exemption may apply to the information, except when it has been determined to be releasable to a federal governmental entity outside of DOD. We found, however, that one of the Army’s subordinate commands does train its personnel to put an exemption on any documents that are marked as FOUO, but does not have this step as a requirement in any policy. In our view, if DOD were to require employees to take the extra step of marking the exemption that may be the reason for the FOUO designation at the time of document creation, it would help assure that the employee marking the document has at least considered the exemptions and made a thoughtful determination that the information fits within the framework of the FOUO designation. Including the FOIA exemption on the document at the time it is marked would also facilitate better agency oversight of the FOUO program, since it would provide any reviewer or inspector with an indication of the basis for the marking. Both DOE’s and DOD’s policies are also unclear about when to affix the OUO or FOUO designation to a document. If a document contains information that is OUO or FOUO but is not marked at creation, there is a risk that the document will be mishandled. DOE policy is vague about the appropriate time to apply a marking. DOE officials in the Office of Classification stated that their policy does not provide specific guidance about when to mark a document because such decisions are highly situational. 
Instead, according to these officials, the DOE policy relies on the “good judgment” of DOE personnel in deciding the appropriate time to mark a document. Similarly, DOD’s current Information Security regulation addressing the FOUO program does not identify when a document should be marked. In contrast, DOD’s September 1998 FOIA regulation, in a chapter on FOUO, states that “the marking of records at the time of their creation provides notice of FOUO content and facilitates review when a record is requested under the FOIA.” In our view, a policy can provide flexibility to address highly situational circumstances and also provide specific guidance and examples of how to properly exercise this flexibility. In addition, we found both DOE’s and DOD’s OUO and FOUO programs lack clear language identifying examples of inappropriate use of OUO or FOUO markings. According to Standards for Internal Control in the Federal Government, agencies should have sufficient internal controls in place to mitigate risk and assure that employees are aware of what behavior is acceptable and what is unacceptable. Without explicit language identifying inappropriate use of OUO or FOUO markings, DOE and DOD cannot be confident that their personnel will not use these markings to conceal mismanagement, inefficiencies, or administrative errors or to prevent embarrassment to themselves or their agency. Standards for Internal Control in the Federal Government discusses the need for both training and continuous program monitoring as necessary components of a good internal control program. However, while both DOE and DOD offer training to staff on managing OUO and FOUO information, neither agency requires any training of its employees before they are allowed to identify and mark information as OUO or FOUO, although some staff will eventually take OUO or FOUO training as part of other mandatory training. 
In addition, neither agency has implemented an oversight program to determine the extent to which employees are complying with established policies and procedures. DOE and DOD officials told us that limited resources, and in the case of DOE, the newness of the program, have contributed to the lack of training requirements and oversight. While many DOE units offer training on DOE’s OUO policy, DOE does not have a departmentwide policy that requires OUO training before an employee is allowed to designate a document as OUO. As a result, some DOE employees may be marking documents to restrict them from dissemination to the public, or to persons who do not need the information to perform their jobs, without being fully informed as to when it is appropriate to do so. At DOE, the level of training that employees receive is not systematic and varies considerably by unit, with some units requiring OUO training at some point as a component of other periodic employee training, and others having no requirements at all. For example, most of DOE’s approximately 10,000 contractor employees at the Sandia National Laboratories in Albuquerque, New Mexico, are required to complete OUO training as part of their annual security refresher training. In contrast, according to the senior classification official at Oak Ridge, very few staff received OUO training at DOE’s Oak Ridge Office in Oak Ridge, Tennessee, although staff were sent general information about the OUO program when it was launched in 2003 and again in 2005. Instead, this official provides OUO guidance and other reference and training materials to senior managers with the expectation that they will inform their staff on the proper use of OUO. DOD similarly has no departmentwide training requirements before staff are authorized to identify, mark, and protect information as FOUO. The department relies on the individual services and components within DOD to determine the extent of training employees receive. 
When training is provided, it is usually included as part of a unit’s overall security training, which is required for many but not all employees. There is no requirement to track which employees received FOUO training, nor is there a requirement for periodic refresher training. Some DOD components, however, do provide FOUO training for employees as part of their security awareness training. Neither DOE nor DOD knows the level of compliance with OUO and FOUO program policies and procedures because neither agency conducts any oversight to determine whether the OUO and FOUO programs are being managed well. According to a senior manager in DOE’s Office of Classification, the agency does not review OUO documents to assess whether they are properly identified and marked. This condition appears to contradict the DOE policy requiring the agency’s senior officials to assure that the OUO programs, policies, and procedures are effectively implemented. Similarly, DOD does not routinely review FOUO information to assure that it is properly managed. Without oversight, neither DOE nor DOD can assure that staff are complying with agency policies. We are aware of at least one recent case in which DOE’s OUO policies were not followed. In 2005, there were several stories in the news about revised estimates of the cost and length of the cleanup of high-level radioactive waste at DOE’s Hanford Site in southeastern Washington. This information was controversial because there is a history of delays and cost overruns associated with this multibillion dollar project, and DOE was restricting a key document containing recently revised cost and time estimates from being released to the public. This document, which was produced by the U.S. Army Corps of Engineers for DOE, was marked Business Sensitive by DOE. However, according to a senior official in the DOE Office of Classification, Business Sensitive is not a recognized marking in DOE. 
Therefore, there is no DOE policy or guidance on how to handle or protect documents marked with this designation. This official said that if information in this document needed to be restricted from release to the public, then the document should have been stamped OUO and the appropriate FOIA exemption should have been marked on the document. The lack of clear policies, effective training, and oversight in DOE’s and DOD’s OUO and FOUO programs could result in both over- and underprotection of unclassified yet sensitive government documents, which may need to be withheld from the public, or from persons who do not need the information to perform their jobs, to prevent potential harm to governmental, commercial, or private interests. Having clear policies and procedures in place, as discussed in Standards for Internal Control in the Federal Government, can mitigate the risk that programs could be mismanaged and can help DOE and DOD management assure that OUO or FOUO information is appropriately marked and handled. DOE and DOD have no systematic procedures in place to assure that staff are adequately trained before designating documents OUO or FOUO, nor do they have any means of knowing the extent to which established policies and procedures for making these designations are being complied with. These issues are important because they affect DOE’s and DOD’s ability to assure that the OUO and FOUO programs are identifying, marking, and safeguarding documents that truly need to be protected in order to prevent potential damage to governmental, commercial, or private interests. 
To assure that the guidance governing the FOUO program reflects the necessary internal controls for good program management, we recommend that the Secretary of Defense take the following two actions: revise the regulations that currently provide guidance on the FOUO program to conform to the 1998 policy memo designating which office has responsibility for the FOUO program and revise any regulation governing the FOUO program to require that personnel designating a document as FOUO also mark the document with the FOIA exemption used to determine the information should be restricted. We also recommend that the Secretaries of Energy and Defense take the following two actions to clarify all guidance regarding the OUO and FOUO designations: identify at what point the document should be marked as OUO or FOUO and define what would be an inappropriate use of the designations OUO or FOUO. To assure that OUO and FOUO designations are correctly and consistently applied, we recommend that the Secretaries of Energy and Defense take the following two actions: assure that all employees authorized to make OUO and FOUO designations receive an appropriate level of training before they can mark documents and develop a system to conduct periodic oversight of OUO and FOUO designations to assure that information is being properly marked and handled. In commenting on a draft of this report, both DOE and DOD agreed with the findings of the report and with most of the report’s recommendations. DOE agreed with our recommendations to clarify its guidance to identify at what point a document should be marked OUO and define what would be an inappropriate use of OUO. They also agreed with our recommendation that all employees authorized to make OUO designations receive training before they can mark documents. 
DOD concurred with our recommendations to revise the regulations designating which office has responsibility for the FOUO program, to clarify guidance regarding at what point to mark a document as FOUO and to define inappropriate usage of the FOUO designation, and to assure that all employees authorized to make FOUO designations receive appropriate training. Both DOE and DOD partially concurred with our recommendation to develop a system to conduct periodic oversight of OUO or FOUO designations. They agreed with developing a system for periodic oversight of OUO or FOUO designations, but disagreed with the recommendation in our draft report to conduct periodic reviews of OUO or FOUO information to determine if the information continues to require that designation. DOE stated that much of the information designated as OUO is permanent by nature—such as information related to privacy and proprietary interests—and a systematic review would “primarily serve to correct a small error rate that would be better addressed by additional training and oversight.” In its comments, DOD stated that such a review would not be an efficient use of limited resources because “all DOD information, whether marked as FOUO or not, is specifically reviewed for release when disclosure to the public is desired by the Department or requested by others. Any erroneous or improper designation as FOUO is identified and corrected in this review process and the information released as appropriate. Thus, information is not withheld from the public based solely on the initial markings applied by the originator.” Based on DOE’s and DOD’s comments, we believe the agencies have agreed to address the principal concern that led to our original recommendation. 
We therefore have modified the report and our recommendation to focus on the need for periodic oversight of the OUO and FOUO programs by deleting the portion of the recommendation calling for a periodic review of the information to determine if it continues to require an OUO or FOUO designation. DOD did not concur with our recommendation to require that personnel designating a document as FOUO also mark the document with the applicable FOIA exemption(s). DOD stated that “if the individual erroneously applies an incorrect/inappropriate FOIA exemption to a document, then it is possible that other documents that are derivatively created from this document would also carry the incorrect FOIA exemption or that the incorrect designation could cause problems if a denial is litigated. Additionally, when the document is reviewed for release to the public, the annotated FOIA exemption may cause the reviewer to believe that the document is automatically exempt from release and not perform a proper review.” However, we believe that the practice of citing the applicable FOIA exemption(s) will not only increase the likelihood that the information is appropriately marked as FOUO, but will also foster consistent application of the marking throughout DOD. Using a stamp similar to the one employed by DOE (see fig. 1), which clearly states that the marked information may be exempt from public release under a specific FOIA exemption, should facilitate the practice. Furthermore, as DOD stated above, “all DOD information, whether marked as FOUO or not, is specifically reviewed for release when disclosure to the public is desired by the Department or requested by others. Any erroneous or improper designation as FOUO is identified and corrected in this review process and the information released as appropriate. 
Thus, information is not withheld from the public based solely on the initial markings applied by the originator.” Therefore, if DOD, under the FOIA process, properly reviews all documents before they are released and corrects any erroneous or improper designation, then prior markings should not affect the decision to release a document, particularly if such markings are identified as provisional. Accordingly, we continue to believe our recommendation has merit. Comments from DOE’s Director, Office of Security and Safety Performance Assurance and DOD’s Deputy Under Secretary of Defense (Counterintelligence and Security) are reprinted in appendix I and appendix II, respectively. DOE and DOD also provided technical comments, which we included in the report as appropriate. As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from its date. We will then send copies of this report to the Secretary of Energy; the Secretary of Defense; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact either of us. Davi M. D’Agostino can be reached at (202) 512-5431 or [email protected], and Gene Aloise can be reached at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
In addition to the contacts named above, Ann Borseth and Ned Woodward, Assistant Directors; Nancy Crothers; Doreen Feldman; Mattias Fenton; Adam Hatton; David Keefer; William Lanouette; Gregory Marchand; David Mayfield; James Reid; Marc Schwartz; Kevin Tarmann; Cheryl Weissman; and Jena Whitley made key contributions to this report.
In the interest of national security and personal privacy and for other reasons, federal agencies place dissemination restrictions on information that is unclassified yet still sensitive. The Department of Energy (DOE) and the Department of Defense (DOD) have both issued policy guidance on how and when to protect sensitive information. DOE marks documents with this information as Official Use Only (OUO) while DOD uses the designation For Official Use Only (FOUO). GAO was asked to (1) identify and assess the policies, procedures, and criteria DOE and DOD employ to manage OUO and FOUO information and (2) determine the extent to which DOE's and DOD's training and oversight programs assure that information is identified, marked, and protected according to established criteria. Both DOE and DOD base their programs on the premise that information designated as OUO or FOUO must (1) have the potential to cause foreseeable harm to governmental, commercial, or private interests if disseminated to the public or persons who do not need the information to perform their jobs and (2) fall under at least one of eight Freedom of Information Act (FOIA) exemptions. According to GAO's Standards for Internal Control in the Federal Government, policies, procedures, techniques, and mechanisms should be in place to manage agency activities. However, while DOE and DOD have policies in place, our analysis of these policies showed a lack of clarity in key areas that could allow for inconsistencies and errors. For example, it is unclear which DOD office is responsible for the FOUO program, and whether personnel designating a document as FOUO should note the FOIA exemption used as the basis for the designation on the document. Also, both DOE's and DOD's policies are unclear regarding at what point a document should be marked as OUO or FOUO and what would be an inappropriate use of the OUO or FOUO designation. 
For example, OUO or FOUO designations should not be used to cover up agency mismanagement. In our view, this lack of clarity exists in both DOE and DOD because the agencies have put greater emphasis on managing classified information, which is more sensitive than OUO or FOUO information. While both DOE and DOD offer training on their OUO and FOUO policies, neither DOE nor DOD has an agencywide requirement that employees be trained before they designate documents as OUO or FOUO. Moreover, neither agency conducts oversight to assure that information is appropriately identified and marked as OUO or FOUO. According to Standards for Internal Control in the Federal Government, training and oversight are important elements in creating a good internal control program. DOE and DOD officials told us that limited resources, and in the case of DOE, the newness of the program, have contributed to the lack of training requirements and oversight. Nonetheless, the lack of training requirements and oversight of the OUO and FOUO programs leaves DOE and DOD officials unable to assure that OUO and FOUO documents are marked and handled in a manner consistent with agency policies and may result in inconsistencies and errors in the application of the programs.
The FECA program covers over 2.7 million civilian federal and postal employees in more than 70 agencies, providing wage-loss compensation and payments for medical treatment to employees injured while performing their federal duties. FECA claims are initially received at the employing agency, then forwarded to Labor’s OWCP where eligibility and payment decisions are made. Every year, employing agencies reimburse OWCP for the amounts paid to their employees in FECA compensation during the previous year. Certain government corporations and USPS also make payments to Labor for program administrative fees. Figure 1 displays the standard process for FECA claims reviews and payments by OWCP. OWCP is the central point where FECA claims are processed and eligibility and benefit decisions are made. Claims examiners at OWCP’s 12 FECA district offices determine applicants’ eligibility for FECA benefits and process claims for wage-loss payments. FECA laws and regulations specify complex criteria for computing compensation payments. Using information provided by the employing agency and the claimant on a claims form, OWCP calculates compensation based on a number of factors, including the claimant’s rate of pay, the claimant’s marital status, and whether or not the claimant has dependents. In addition, claimants cannot receive FECA benefits at the same time they receive certain other federal disability or retirement benefits, or must have benefits reduced to eliminate duplicate payments. For example, Social Security Administration (SSA) disability benefits are reduced if an individual is also receiving FECA payments. According to OWCP officials, initial claims received from employing agencies are reviewed to assess the existence of key elements. 
The elements include evidence that the claim was filed within FECA’s statutory time requirements; that the person was, at the time of injury, disease, or death, an employee of the United States; that the employee was injured while on duty; and that the condition resulted from the work-related injury. If the key elements are in place, OWCP will approve a claim and begin processing reimbursements for medical costs. After initial claim approval, additional reviews are done while a claim remains active if the claim exceeds certain dollar thresholds. Once a claim is approved, payments are sent directly to the claimant or provider. An employee can continue to receive compensation for as long as medical evidence shows that the employee is totally or partially disabled and that the disability is related to the accepted injury or condition. OWCP considers claimants who are not expected to return to work within 3 months to be on its periodic rolls for payment purposes. OWCP officials review medical evidence annually for claimants on total disability receiving long-term compensation who are on the program’s periodic rolls, and every 3 years for claimants on the periodic rolls who have been determined to not have any wage-earning capacity. Claimants are also required to submit an annual form (CA-1032) stating whether their income or dependent status has changed. The form must be signed to acknowledge evidence of benefit eligibility and to acknowledge that criminal prosecution may result if a deliberate falsehood is provided. If questions arise about medical evidence submitted by the claimant, OWCP can request that a second medical examination be performed by a physician of its choosing. We have identified several promising practices that employing agencies and Labor have implemented that may help to reduce fraudulent FECA claims. We are planning to look further into these practices as part of our ongoing work. 
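The wage-loss computation described earlier turns on the claimant's rate of pay, dependent status, and offsets against overlapping federal benefits. As a rough illustration only, the sketch below uses the commonly cited FECA compensation rates (two-thirds of pay, or three-fourths with dependents) and a simple dollar-for-dollar offset; the actual statute and regulations add pay caps, cost-of-living adjustments, and partial-disability rules that are omitted here, and the function name is our own.

```python
def compute_weekly_compensation(weekly_pay, has_dependents, offset_benefits=0.0):
    """Illustrative FECA-style wage-loss computation (simplified sketch).

    The 2/3 and 3/4 rates reflect the commonly cited FECA schedule;
    statutory caps, cost-of-living adjustments, and partial-disability
    rules are deliberately omitted.
    """
    rate = 0.75 if has_dependents else 2.0 / 3.0
    gross = weekly_pay * rate
    # Concurrent federal benefits (e.g., certain SSA disability payments)
    # are offset so the claimant is not paid twice for the same wage loss.
    return max(gross - offset_benefits, 0.0)

# A claimant earning $1,200/week with dependents, no offsets:
print(compute_weekly_compensation(1200, True))        # 900.0
# The same claimant with $200/week in overlapping federal benefits:
print(compute_weekly_compensation(1200, True, 200))   # 700.0
```

The offset step mirrors the report's point that claimants cannot draw FECA benefits and certain other federal disability or retirement benefits at full value simultaneously.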
Three employing agencies informed us that they employed dedicated, full-time FECA program staff, including injury compensation specialists, which, according to officials, helps staff gain program knowledge and expertise. It also allows program staff to specialize in FECA claims and reviews without having to perform additional duties. Agencies with full-time staff may be able to dedicate resources to training them in fraud prevention, which is a positive practice noted in GAO’s fraud-prevention framework. GAO’s Standards for Internal Control in the Federal Government also specifically mentions that appropriate, competent personnel are a key element of an effective control environment. Officials from one employing agency with this structure stated that having dedicated and experienced FECA staff allows them to conduct more aggressive monitoring of long-term workers’ compensation cases. Labor officials agreed that agencies that can devote dedicated full-time resources are better positioned to manage the program. Examples include the following: FECA staff in one Navy region reported having an average of 15 years of program experience, which they said helps them to identify specific indicators of potential fraud. According to the Air Force, it has specific teams that specialize in reviewing FECA claims at different phases of the claims process. USPS officials also stated they assign staff full time to manage FECA cases. In addition, in 2008, we recommended that the Secretary of Labor direct OWCP to take steps to focus attention on the recovery of FECA overpayments, such as determining whether having fiscal staff dedicated to recovering overpayments would increase its recovery. Labor responded that it carefully evaluated having fiscal staff dedicated to recovering overpayments. 
However, given the integral involvement of claims examiners in overpayment processing, the unavailability of fiscal staff to undertake this specialized activity, and expected continued budget constraints, Labor believes that keeping this function with claims examiners is the most cost-effective debt-collection strategy. Officials at five employing agencies and Labor have instituted periodic reviews of active FECA claims, which may improve overall program controls. Specifically, several agencies reported that annual reviews of FECA case files were used to help increase program officials’ awareness of potential fraudulent activities. These controls fall within the detection and monitoring component of GAO’s fraud-prevention framework and could help to validate claimants’ stated medical conditions, income information, and dependent information. GAO’s Standards for Internal Control in the Federal Government also states that monitoring activities, such as comparisons of different data sets to one another, can help to encourage continued compliance with applicable laws and regulations. Agency officials stated that these types of reviews assist with identifying claimants who are not eligible to continue to receive FECA benefits. According to agency staff: Labor requires long-term claimants to submit updated claim documentation about wages earned and dependent status for annual reviews. While much of the information provided on the CA-1032 is self-reported, the requirement for annual submissions can help identify necessary changes to benefits. In addition, Labor officials stated they also perform regular medical-claim reviews depending on the status of a case. Staff at one Navy regional office send annual questionnaires to claimants to determine if information, including income and dependent status, is consistent with annual documentation submitted to Labor. A DHS component agency sends periodic letters to claimants asking about their current status. 
If DHS determines that action should be taken, DHS then sends a letter to Labor requesting the claim be closed. Under DOD policies, Air Force, Army, and Navy staff are required to conduct an annual review of selected long-term claim files and medical documentation to determine whether claimants are receiving compensation benefits they are entitled to and identify claimants who are fit to return to work. The Air Force has developed quarterly working groups to review all paid compensation benefits. USPS performs periodic reviews of claimant data. USPS IG officials identified a claimant who fraudulently claimed $190,000 in mileage reimbursements for travel to therapy almost every day for 5 years, including weekends and holidays. Officials from employing agencies and Labor stated that their program staff conducted data analysis, such as comparisons of mileage claims to medical bills, to verify information submitted by claimants. Agencies also reported using available data sources to verify whether claimants should continue to receive FECA benefits. Similar to the periodic reviews previously discussed, these controls fall within the monitoring component of GAO’s fraud-prevention framework and could help to validate claimants’ self-reported income and medical-condition information. Data sources reviewed ranged from federal agency data to other publicly available information. Agencies also conduct reviews of claimant physician and prescription drug payments to identify fraud. Specifically, according to agency officials: Labor gives each employing agency access to its Agency Query System (AQS), which allows agencies to electronically review information on FECA claims, including current claims status, wage- compensation payment details, and medical-reimbursement details. 
Labor officials also stated they provide at least quarterly, and for some employing agencies weekly, extracts from their data system that give employing agencies information on wage-compensation payments, medical-billing payments, and case-management data. The Navy reviews pharmacy bills, medical-diagnosis codes, and mileage-reimbursement details from the AQS system on a case-by-case basis to determine whether physician claims are related to the injury sustained by the claimant and to identify whether mileage for physician visits was reimbursed on days when the claimant did not visit a physician. Navy officials use publicly available state government information to identify claimants who owned and received income from their own businesses. For example, one public records search found that a FECA claimant was an active owner of a gentleman’s club while he was fraudulently receiving FECA wage-loss benefits. Officials from employing agencies and Labor stated that they reviewed SSA’s Death Master File periodically to identify benefits erroneously disbursed to deceased individuals’ survivors. Specifically, Labor said it conducts monthly data matches with SSA’s Death Master File records and plans to revise the forms used in survivors’ claims to gather Social Security numbers for survivors and beneficiaries, enabling Labor to match all FECA payees with SSA death records. VA has developed a process that allows the agency to track prescription drug usage claims and identify anomalies. Four employing agencies reported that investigating potential fraud cases with dedicated investigative resources helped to increase program controls. The Navy FECA component has assigned responsibilities to staff who investigate and help prosecute fraudulent FECA claims, while the Air Force has designated staff who refer allegations to its Office of Special Investigations. 
USPS program officials reported that they refer potential fraud cases internally to USPS IG officials for investigation and prosecution. The investigation and effective prosecution of claimants fraudulently receiving benefits is a key element in GAO’s fraud-prevention framework. While these activities are often the most costly and least effective means of reducing fraud in a program, the deterrent value of prosecuting those who commit fraud sends the message that fraudulent claims will not be tolerated. Examples of the effective integration of investigative resources provided by these employing agencies include the following: The Air Force discussed its plan to hire staff in early fiscal year 2012 to conduct background investigations and surveillance of claimants to determine whether they are entitled to receive FECA benefits. The USPS IG reported that since October 2008 it has identified and facilitated the termination of benefits for 476 claimants who were committing workers’ compensation fraud, and has recovered over $83 million in medical and disability judgments. Navy officials stated that their internal investigators’ work in one region led to 10 convictions from 2007 to 2011 and an $8.6 million cost avoidance to the agency. One individual received monthly workers’ compensation payments after falsely denying that he had outside employment and outside income while claiming total disability that prevented him from working. Interviews with former employers uncovered that this claimant had been employed and paid over $100,000 per year while he was receiving benefits. This individual was sentenced to 18 months in prison, 3 years of supervised probation, and $302,380 in restitution for making a false statement to obtain FECA benefits. Another individual collected FECA benefits made out to his father for 4 years after his father was deceased. This individual was sentenced to 5 years of probation and full restitution in the amount of $53,410.
DHS officials within the Transportation Security Administration stated they have successfully used an internal affairs unit consisting of seven staff members to examine and respond to fraud, waste, and abuse cases and make referrals to investigators. The investigators then conduct video surveillance and examine data to find potential fraud. Recent Labor IG testimony cited numerous Labor IG investigations conducted over the years focusing on FECA claimants who work while continuing to receive benefits, and on medical or other service providers who bill the program for services not rendered. Our preliminary observations also identified potential vulnerabilities in the FECA program fraud-prevention controls that could increase the risk of claimants receiving benefits to which they are not entitled. Again, we plan to examine these potential vulnerabilities as part of our ongoing work. We found that management of the FECA program could be affected by limited access to necessary data. Specifically, agency officials stated that the program lacked proper coordination among federal agencies and that there was limited or no access to data sources that could help reduce duplicate payments. For example, Labor does not have authority to compare private or public wage data with FECA wage-loss compensation information to identify potential fraud. This prevents agencies from verifying key eligibility criteria submitted by claimants, such as income. GAO’s fraud-prevention framework emphasizes effective monitoring of continued compliance with program guidelines, and outlines how validating information with external data can assist with this process.
Specific potential vulnerabilities identified in the area included the following: Program officials at Labor and the employing agencies do not have access to payroll information included in the National Directory of New Hires (NDNH) and federal employee payroll data, which could help reduce duplicate payments by identifying unreported income. In a previous report, we recommended that Labor develop a proposal seeking legislative authority to enter into a data-matching agreement with the Department of Health and Human Services (HHS) to identify FECA claimants who have earnings reported in the NDNH. However, Labor officials stated that they investigated using NDNH and communicated with HHS, but determined that this would not be an effective solution due to cost issues, limited participation by employers in the NDNH, and the likelihood that illegitimate earnings would not be listed. As an alternative, Labor recently provided testimony proposing legislative reforms to FECA that would enhance its ability to assist FECA beneficiaries. As part of this reform, OWCP sought authority to match Social Security wage data with FECA files. OWCP currently is required to ask each individual recipient to sign a voluntary release to obtain such wage information. According to Labor, direct authority would allow automated screening to assess whether claimants are receiving salary, pay, or remuneration prohibited by the statute or receiving an inappropriately high level of benefits. It would be important to assess whether access to Social Security wage data is an effective alternative to access to NDNH data, and we plan to assess this as part of our ongoing work. Navy and Air Force officials cited difficulty coordinating with VA to determine whether individuals are receiving disability benefits for the same conditions related to FECA claims. 
This information is key for employing agencies to assess whether claimants received duplicate benefits for the same injuries under both VA disability benefits and FECA benefits. VA commented that privacy concerns related to providing beneficiary data to external agencies have affected coordination. An employing agency official stated that Labor does not provide them with remote access to the claimant’s annual certification form CA-1032, which would be useful for their periodic review efforts. However, Labor does allow employing agency officials to view the CA-1032 forms if the officials come to a Labor district office. The CA-1032 form contains information on a claimant’s income and dependent status, which is useful when employing agencies review claims files for continued eligibility. We raise this issue because, as stated above, the Navy utilizes information submitted to Labor as part of its periodic review efforts. A 2010 SSA IG audit found individuals receiving duplicate SSA and FECA benefits. According to the SSA IG, development of a computer-matching agreement with Labor and its FECA payments database would allow SSA to reduce the number of duplicate SSA payments by verifying the accuracy of payment eligibility. According to the SSA IG report, the agreement has not been finalized with Labor due to changes in personnel at SSA. Our preliminary observations identified program processes that relied heavily on data self-reported by claimants that are not always verified by agency officials. Not verifying information concerning wages earned and dependent status reported by claimants creates potential vulnerabilities within the program. For example, individuals who are working can self-certify that they have no other income and continue to remain on the program while their statements go unverified. Prior reports by us and Labor’s IG have shown that relying on claimant-reported data could lead to overpayments.
For example: A 2008 GAO report found that Labor relied on unverified, self-reported information from claimants that was not always timely or correct. Specifically, the annual CA-1032 forms submitted to Labor to determine whether a beneficiary is entitled to continue receiving benefits rely on statements made by the claimant that are not verified. A 2007 Labor IG report also found that an OWCP district office did not consistently ensure that claimants returned their annual form CA-1032 or adjust benefits when the information reported by claimants indicated a change in their eligibility. Labor agreed with the findings of this report. During fiscal year 2004, claimants and beneficiaries continued to receive compensation payments even though they had not provided required timely evidence of continuing eligibility. In one case, the claimant’s augmented payment rate was not reduced even though the claimant reported that his spouse was no longer a dependent. According to Labor officials, a new case-management system was deployed after the Labor IG audit field work was conducted, which addresses some of the issues raised in the Labor IG report. Our preliminary observations found that FECA program regulations allow claimants to select their own physician and require examination by a physician employed or selected by the government only when a second opinion is deemed necessary by the government. We found this could result in essential processes within the FECA program operating without reviews by physicians selected by the government. This potential vulnerability affects key control processes outlined in GAO’s fraud-prevention framework in two areas: first, the lack of such reviews when assessing the validity of initial claims and, second, their absence when monitoring the duration of the injury.
However, the addition of a government physician into the process does not necessarily mitigate all risks, and costs associated with additional medical reviews would need to be considered. For example, there may be difficulties in successfully obtaining information from physicians representing the government’s interest. Specifically, a prior GAO report found challenges in obtaining sound or thorough evidence from physicians approved by Labor in Black Lung Benefits Program claims for miners. Our report also noted that physicians stated that guidance provided by Labor for effectively and completely documenting their medical opinions was not clear, which resulted in challenges in providing useful information to Labor concerning Black Lung claims. Details of this potential vulnerability include the following: Labor, not the claimant’s employing agency, determines whether a second opinion is necessary. Employing-agency officials, including officials from DHS and USPS, stated that there have been instances where Labor failed to respond to their requests to have a second-opinion examination performed, even though the costs would be borne by their agencies. We did not verify these claims. Labor officials stated that its claims examiners are trained to review files and make the appropriate case-management decision on the need for a second opinion. In addition, they stated that resources associated with second opinions include significant time and effort for a claims examiner to review a file, document the need for a second opinion, and determine the specific issues to be reviewed by the physician. Finally, Labor officials noted that numerous requests by employing agencies for second opinions can put a strain on the limited number of physician staff it uses for these examinations. Officials at multiple employing agencies covered in our work to date stated that they faced difficulties successfully investigating and prosecuting fraud.
GAO’s fraud-prevention framework states that targeted investigations and prosecutions, though costly and resource intensive, can help deter future fraud and ultimately save money. We plan to follow up with agency IG and United States Attorney officials to gain their perspective on FECA fraud cases as part of our ongoing work. Details offered by employing-agency program officials included the following: Officials at DOD stated that their investigative units do not normally invest resources in FECA fraud cases because national defense, antiterrorism, and violent-crime cases are higher priorities. USPS officials also stated that, in their experience, limited resources at United States Attorneys’ offices mean that those attorneys will often not prosecute cases with an alleged fraud of less than $100,000. According to these officials, many of their strong allegations of fraud and abuse fall below this amount when estimating the cost of fraud that has already occurred. In addition to the challenges noted above related to fraud investigations, in 2008, we recommended that OWCP take steps to focus attention on recovering FECA overpayments. Specifically, we recommended considering reducing the dollar threshold for waiving overpayments as OWCP’s overpayment processing data system develops additional capabilities. Labor declined to consider reducing the dollar threshold while its processing data system was still developing additional capabilities to recover overpayments. We plan to follow up on these promising practices and potential weaknesses as part of our ongoing review of FECA fraud-prevention controls. We will also determine whether duplication of benefits and other problems within the FECA program may have contributed to specific cases of fraud and abuse or other program vulnerabilities and develop illustrative case studies as appropriate. To complete this work, we have attempted to obtain access to NDNH data.
However, HHS has denied access to the NDNH database because it asserts that we do not have authority to obtain NDNH data, despite the fact that we have a broad right of access to all federal agency records. HHS’ denial of access has slowed the progress of this engagement reviewing federal beneficiary fraud and abuse and has limited our ability to assess the potential vulnerability of the FECA program to fraud and abuse at a national level. Although we have been able to obtain some of the data from a number of states, we have not received complete data from all states contacted. Legislation that is currently pending in the House and Senate (H.R. 2146, S. 237) would refute HHS’ erroneous interpretation of our statutory access rights and would ensure that we have access to the NDNH and can complete our congressionally requested work in a timely manner. In addition to our fraud-prevention work in the FECA program, we are conducting two other program-related engagements. Those engagements focus largely on issues related to retirement-age FECA beneficiaries. The results of that work will also be reported separately. On November 9, 2011, we issued a statement for the record to the Senate Committee on Homeland Security and Governmental Affairs detailing our preliminary observations on FECA fraud-prevention controls. At that time, we discussed our key findings with Labor and officials at the six employing agencies. Labor and the employing agencies generally agreed with the preliminary findings and provided technical comments, which were incorporated into the statement. Those findings and associated technical comments are included in this report. We are sending copies of this report to the Secretaries of Labor, Defense, Homeland Security, and Veterans Affairs; the Postmaster General; and interested congressional committees. In addition, this report is also available at no charge on the GAO website at http://www.gao.gov.
If you have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
|
According to the Department of Labor (Labor), in fiscal year 2010 about 251,000 federal and postal employees and their survivors received wage-loss compensation, medical and vocational rehabilitation services, and death benefits through the Federal Employees Compensation Act (FECA) program. Administered by Labor, the FECA program provides benefits to federal employees who sustained injuries or illnesses while performing their federal duties. Employees must submit claims to their employing agency, which are then reviewed by Labor. For those claims that are approved, employing agencies reimburse Labor for payments made to their employees, while Labor bears most of the program’s administrative costs. Wage-loss benefits for eligible workers with total disabilities, including those who are at, or older than, retirement age, are generally 66.67 percent of the worker’s salary (with no spouse or dependent) or 75 percent for a worker with a spouse or dependent. FECA wage-loss compensation benefits are tax free and not subject to time or age limits. Labor’s Office of Workers’ Compensation Programs (OWCP) estimated that future actuarial liabilities for governmentwide FECA compensation payments to those receiving benefits as of fiscal year 2011 would total nearly $30 billion (this amount does not include any costs for workers added to the FECA rolls in future years). In 2010, the United States Postal Service (USPS) Inspector General (IG) reported that USPS alone had more than $12 billion of the $30 billion in estimated actuarial FECA liabilities. In April 2011, the USPS IG testified that USPS had removed 476 claimants from the program based on disability fraud since October 2008 and recovered more than $83 million in judgments.
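The wage-loss rates described above reduce to simple arithmetic: 66.67 percent of salary with no spouse or dependent, or 75 percent (the augmented rate) with one. The sketch below is purely illustrative; the function name and example salary are hypothetical and not drawn from Labor's systems.

```python
# Hypothetical helper illustrating the FECA wage-loss rates described
# above. Names and example figures are illustrative only.

def feca_wage_loss_benefit(annual_salary, has_spouse_or_dependent):
    """Annual wage-loss compensation for a totally disabled worker:
    75% of salary with a spouse or dependent, 66.67% without."""
    rate = 0.75 if has_spouse_or_dependent else 0.6667
    return round(annual_salary * rate, 2)

# A totally disabled worker earning $60,000 per year:
print(feca_wage_loss_benefit(60_000, True))   # 45000.0 with a dependent
print(feca_wage_loss_benefit(60_000, False))  # 40002.0 without
```

Because these benefits are tax free and carry no time or age limit, even a single long-term claim can accumulate a substantial share of the actuarial liability the report describes.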
Given the significant projected outlays of the governmentwide FECA program and prior USPS IG findings of fraud, Congress asked us to provide preliminary observations on our ongoing work examining FECA fraud-prevention controls and discuss related prior work conducted by us and other federal agencies. Our work to this point has identified several promising practices that could help to reduce the risk of fraud within the FECA program. The promising practices link back to fraud-prevention concepts contained in GAO’s Fraud Prevention Framework and Standards for Internal Control in the Federal Government, and include agencies’ use of full-time staff dedicated to the FECA program, periodic reviews of claimants’ continued eligibility, data analysis for potential fraud indicators, and effective use of investigative resources. These promising practices have already resulted in successful investigations and prosecutions of FECA-related fraud at some agencies, and could help to further enhance the program’s fraud-prevention controls. However, our preliminary work has also identified several potential vulnerabilities in the program’s design and controls that could increase the risk for fraud. Specifically, we found that limited access to necessary data is potentially reducing agencies’ ability to effectively monitor claims and wage-loss information. In addition, agencies’ reliance on self-reported data related to wages and dependent status, the lack of a physician selected by the government throughout the process, and difficulties associated with successful investigations and prosecutions all potentially reduce the program’s ability to prevent and detect fraudulent activity. Labor and employing agencies generally agreed with the preliminary findings presented in this report and provided technical comments, which were incorporated into this report.
We plan to follow up on the promising practices and potential vulnerabilities as part of our ongoing work, although our progress has been slowed by difficulties in accessing certain databases.
|
A successful census is critical because, as required by the Constitution, census data are used to reapportion seats in the House of Representatives. In addition, every year, the government awards around $180 billion in federal funds to localities on the basis of census numbers, and states use census data, among other purposes, to redraw the boundaries of congressional districts. Businesses and private citizens also depend on census data for such purposes as marketing and planning. Census Day is April 1, 2000, with peak efforts to follow up on nonresponding households scheduled to run from April 27 to July 7, 2000. Population counts to be used to reapportion seats in the House of Representatives are to be delivered to the President by January 1, 2001. In 1998, the Bureau conducted a dress rehearsal for the 2000 Census during which it tested most of the procedures and operations planned for the decennial census under as near census-like conditions as possible. The dress rehearsal was the Bureau’s last opportunity for an operational test of its overall design of the 2000 Census and to demonstrate to Congress and other key stakeholders the feasibility of its plans. (Dress Rehearsal Census Day was April 18, 1998). The dress rehearsal sites included Sacramento, CA; 11 county governments and the City of Columbia, SC; and Menominee County, WI, including the Menominee American Indian Reservation. To review the Bureau’s efforts to increase public participation in the census and to collect timely and accurate field data from nonrespondents, we examined documents that described the Bureau’s budget, plans, procedures, progress, and evaluations relating to these operations. Further, we examined current laws, regulations, and legislation pertaining to staffing the Bureau’s field operations. 
We also interviewed Bureau officials at headquarters and, where applicable, regional and local census officials responsible for planning and implementing the 1998 dress rehearsal and the 2000 Census. To obtain a local perspective on the Bureau’s outreach and promotion and field follow-up efforts during the dress rehearsal, we made several site visits to the dress rehearsal jurisdictions and interviewed local officials who were responsible for organizing and implementing community outreach and promotion efforts. We also inspected each site for the scope and prominence of promotional material and activities, and observed nonresponse follow-up operations. To determine the dollar effect of a 1 percentage point decrease in the mail response rate, we reduced the assumed mail response rates used in the Bureau’s cost model supporting its fiscal year 2000 amended budget request. We did our audit work at the Bureau’s Census 2000 dress rehearsal sites; Regional Census Offices in Charlotte, NC, and Seattle, WA; Bureau headquarters in Suitland, MD; as well as in Washington, D.C., between April 1998 and October 1999, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce. On December 3, 1999, the Secretary forwarded the Bureau’s written comments on the draft (see app. I), which we address at the end of this report.

Achieving the Bureau’s Mail Response Rate Objective Will Be Difficult

Outreach and Promotion Program May Have Only a Modest Impact on the Mail Response Rate

Public participation is critical to a successful census because it helps improve the accuracy and completeness of census information, while reducing the Bureau’s costly and time-consuming nonresponse follow-up workload.
The mail response rate to the census questionnaire is the most commonly used indicator of the level of public participation. Unfortunately, as shown in figure 1, the mail response rate has declined with each decennial census since the Bureau first initiated a national mailout/mailback approach in 1970. According to the Bureau, this declining trend is due, in part, to various demographic, attitudinal, and other factors, such as concerns over privacy and mistrust of government. These and related issues led us to note, in our 1992 summary assessment of the 1990 Census, the formidable challenges the Bureau faces in increasing public cooperation. We identified several opportunities for improvements in this regard. One of these opportunities was to simplify the census questionnaire—which the Bureau has successfully done. However, even with a simplified questionnaire and other changes in census design, we noted the Bureau needed to prepare for a lower mail response rate in 2000. For the 2000 Census, the Bureau is expecting a 61-percent mail response rate, which is 4 percentage points lower than what it achieved in 1990. However, the dress rehearsal results suggest that even this goal may be optimistic. First, while the Bureau generally achieved its mail response rate goals during the dress rehearsal, it did so only by mailing out a second, “replacement” questionnaire, which is an approach the Bureau has since dropped for 2000. Second, the Bureau’s outreach and promotion program does not appear to have bridged the gap that typically exists between raising awareness of the census on the one hand and motivating people to respond on the other. The significant difficulty in both raising public awareness and motivating people to mail back their questionnaires was demonstrated during the 1990 Census, when Bureau research showed that although 93 percent of the public reported being aware of the census, the mail response rate was just 65 percent.
The dress rehearsal results raise concerns as to whether the Bureau can achieve its 61-percent mail response rate goal in 2000. As shown in table 1, the Bureau generally met its dress rehearsal mail response rate objectives, exceeding its goal by three percentage points in Sacramento and falling slightly short in South Carolina and Menominee. However, a key ingredient of these response rates was the Bureau’s use of a second, “replacement” questionnaire that was sent to all housing units located in mailout/mailback areas in South Carolina and Sacramento. The Bureau has since rejected this procedure because it concluded that the number of duplicate responses suggested that the second mailing confused the public. Bureau officials told us that a similar situation in 2000 could cause overwhelming processing problems. Recognizing the potential impact of not using a second mailing, the Bureau reduced its initial goal of a 67-percent mail response rate by 6 percentage points to 61 percent. However, the results of a subsequent Bureau study suggest that the second mailing during the dress rehearsal had an even greater impact on the mail response rate, and, as a result, the Bureau’s current 61-percent response rate objective could be optimistic. Although the impact of the second questionnaire is difficult to measure precisely, as shown in table 2, the Bureau estimates that the second questionnaire added between 8.2 and 15.8 percentage points to the South Carolina response rate, and between 7.5 and 14.4 percentage points to the Sacramento response rate. Thus, without the second mailing, it is likely that the Bureau would have fallen far short of its response rate goals—by at least 8.2 percentage points in South Carolina and 4.5 percentage points in Sacramento. Dress rehearsal mail response rates are not necessarily predictive of decennial response rates, which are higher because of the greater public and media attention that the actual census receives. 
Nevertheless, the dress rehearsal mail response rates provide a useful indication of what might occur during the actual census and, for 2000, raise concerns that the Bureau is at risk of an even lower response rate than it had estimated. To help combat the downward trend in response rates, the Bureau has instituted both a national and locally based outreach and promotion program. Two key components of the Bureau’s outreach and promotion program include a paid advertising campaign and partnerships with local governments. In October 1997, the Bureau hired a consortium of private-sector advertising agencies, led by Young & Rubicam, to develop an extensive paid advertising program for the 2000 Census. Marketing the census represents a particular challenge in that advertisers typically target their best prospects and specific segments of the population. In contrast, census advertising is aimed at the most resistant “customers” and every U.S. household. The Bureau estimates it will spend about $167 million on the paid advertising campaign in fiscal years 1998 through 2000, of which $102.8 million (62 percent) has been allocated in fiscal years 1999 and 2000 for media (television, radio, print, and other types of advertising). A substantial portion of the advertising is to be directed at minority groups. Through the end of fiscal year 1999, for example, of the $16.4 million allocated for media purchases, about $7.3 million (45 percent) was to be used to target specific race and ethnic groups (see fig. 2). The Bureau has not yet purchased advertising for 2000, although a similar spending pattern is likely. According to Bureau officials, the paid advertising campaign is intended to motivate people to return their census forms by using a variety of media to stress the message that participating in the census benefits one’s community. We observed this during our visits to the dress rehearsal sites where we often saw billboards containing such taglines as “This is Your Future. 
Don’t Leave It Blank,” “The Future Takes Just a Few Minutes to Complete,” and “Pave a Road With These Tools” (see fig. 3). The census was also publicized through broadcast and print media and promotional items, such as cups and T-shirts. Nevertheless, the effectiveness of the paid advertising campaign appears to have been limited during the dress rehearsal. An independent research firm, which the Bureau hired to evaluate the effectiveness of the advertising campaign, reported that the campaign generally had no more than a “modest” impact on the public’s attitudes and knowledge of the census. Although the Bureau had expected a 30 percentage point increase in awareness at the South Carolina and Sacramento dress rehearsal sites, the evaluation results indicated that there already was a high level of census awareness among all demographic groups before the start of the advertising campaign. In a telephone survey of residents that was conducted before the advertising campaign, 86 percent of those responding in Sacramento and 93 percent of South Carolina respondents said that they had heard of the census. (The Bureau believes that events occurring before the start of the advertising campaign, such as news coverage about the census approach, among other factors, may have contributed to the high level of awareness observed before the start of the advertising campaign). Following the advertising campaign, awareness levels increased by 8 percentage points in Sacramento and 5 percentage points in South Carolina (minority groups, the less educated, and the less affluent experienced a greater increase in awareness). Significantly, much like the 1990 Census, the public’s high level of awareness was not matched by similarly high mail response rates. As previously discussed, at the South Carolina and Sacramento dress rehearsal sites, the mail response rates were about 55 percent. 
Following the dress rehearsal, the Bureau expanded and enhanced the paid advertising campaign, in part by adding messages that are to run prior to, and following, Census Day. The campaign, which is to run from November 1999 through late May 2000, is divided into three phases: educational, motivational, and nonresponse follow-up. The phases will be similar in that they will all contain messages about census benefits and confidentiality. In addition, the motivational phase, timed to coincide with the census questionnaire mailings, is to let people know to expect the census form and to mail it back. The nonresponse follow-up phase, which is to occur when the Bureau is going door-to-door collecting data from nonrespondents, is to encourage people to cooperate with census enumerators. Still, the impact that this additional advertising might have on people’s willingness to respond to the census is difficult to gauge. According to the Bureau, there did not appear to be a direct relationship between advertising exposure during the dress rehearsal and the likelihood of returning a census form. However, the Bureau suspects that the campaign had an “indirect effect” on public response to the census in that the campaign may have made people expect the census form in the mail, which, in turn, increased the likelihood that they would return it. Moreover, as noted earlier, even though the advertising campaign for 2000 has been greatly enhanced since the dress rehearsal, high levels of awareness do not guarantee high mail response rates. In addition to the paid advertising campaign, the Bureau is seeking to form partnerships with local governments, community groups, businesses, and nongovernmental organizations to promote the census on a grassroots basis. The Bureau has allocated $108 million for its partnership initiatives in fiscal years 1999 and 2000.
A key element of the Bureau’s local partnership effort will be Complete Count Committees, which are to consist of local government, religious, media, education, and other community leaders. The committees are to promote the census by sponsoring promotional events, placing articles in local newspapers, and holding press conferences that convey the importance of the census, among other activities. For 2000, as a matter of long-standing policy, the Bureau is not directly funding local outreach and promotion activities. Instead, for fiscal years 1999 and 2000, the Bureau is to distribute about $1.2 million to each of the Bureau’s 12 Regional Census Centers for in-kind services, such as printing handouts. The Bureau also plans to assign employees, known as partnership specialists, to work with local groups to help them initiate and sustain grassroots marketing activities, such as the Complete Count Committees. The Complete Count Committee program stems from the Bureau’s recognition that the paid advertising campaign alone will not get the message across to everybody—particularly the hard-to-count—that participating in the census is important. The Bureau hopes that local people who are trusted by members of the community can more effectively market the census to those who are difficult to convince through traditional advertising media. Thus, while the Bureau plans to partner with a number of religious, service, community, and other organizations—often to increase census participation among certain groups or areas—the Bureau believes that Complete Count Committees are the key to making each and every community aware of the census and persuading everyone to respond. However, during the dress rehearsal, we found that the effectiveness of the Complete Count Committee program was undermined by an apparent mismatch between the Bureau’s expectations of the committees and what the committees could realistically accomplish with their limited resources. 
While the Bureau expected local governments to plan and execute an outreach and promotion program largely on their own with minimal direct support from the Bureau, we found that many local governments lacked the money, people, and/or expertise to launch an adequate marketing effort during the dress rehearsal. If such expectations remain misaligned for 2000, these disappointing results could continue. Regarding money, officials representing 9 of the 14 local governments participating in the dress rehearsal told us that they were unable or unwilling to fund promotional activities. For example, while the Sacramento committee initially developed a list of several dozen promotional activities involving local media and other organizations, a committee representative told us that many activities were dropped because of a lack of money. Although the Bureau encouraged committees to turn to local businesses for support, the committees (1) were generally too small to organize an effective outreach effort or (2) viewed such an effort as a federal function. As one South Carolina committee representative said, “Fundraising for the federal government doesn’t go over well…. That’s what taxes are for.” The Bureau may also have overly optimistic expectations of the level of staff and expertise available at the local level to plan and implement outreach and promotion activities. This was evident during the dress rehearsal, where some local governments had difficulty getting staff to volunteer to help plan and organize promotion activities. At the South Carolina and Menominee sites, for example, some local officials expressed frustration, and others resentment, over what they perceived as the burden of promoting the census and the time it was taking from their other responsibilities. In addition, local governments may lack the know-how to launch an effective marketing effort. 
During the dress rehearsal, for example, the Bureau’s South Carolina partnership specialist said that the Bureau assumed that the South Carolina counties had the experience and knowledge to market the census. However, she noted that, in hindsight, the opposite was often the case in those counties. In addition, while the Bureau’s partnership specialists are to provide needed expertise and assistance to local governments and other groups, the dress rehearsal suggested that these specialists may be spread too thin to offer meaningful support in 2000. In our past work, we reported that some South Carolina committees never formed, while others became inactive, partly because the Bureau’s two partnership specialists were responsible for assisting 11 county governments and the City of Columbia—a geographic area covering more than 6,700 square miles. In 2000, the partnership specialists will likely have a far greater workload. The Bureau plans to fill 542 partnership specialist positions to assist local governments. According to the Bureau, as of the end of July 1999, about 6,800 local governments had formed Complete Count Committees, (including 50 of the 51 largest cities). The Bureau expects that as many as 8,000 committees will ultimately be formed. Thus, on average, each partnership specialist could be responsible for assisting between 13 and 15 local governments. By comparison, the problems we observed at the South Carolina dress rehearsal site occurred when each partnership specialist was responsible for assisting an average of six local entities. Further, as local governments have been forming Complete Count Committees for the 2000 Census, early indicators suggest that the potential impact of this program may not be fully realized. For example, the effectiveness of the Complete Count Committee program will be partly determined by the number of governments that decide to participate. 
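The specialist workload cited above follows directly from the report's figures; a minimal sketch of the arithmetic (the averages below are roughly consistent with the 13-to-15 range cited in the text once rounded):

```python
# Partnership specialist workload for 2000, using the report's figures.
specialists = 542            # partnership specialist positions the Bureau plans to fill
committees_formed = 6_800    # Complete Count Committees as of the end of July 1999
committees_expected = 8_000  # committees the Bureau expects will ultimately form

low = committees_formed / specialists
high = committees_expected / specialists
print(f"{low:.1f} to {high:.1f} local governments per specialist")
# → 12.5 to 14.8 local governments per specialist
```

For comparison, the problems observed in South Carolina arose at an average of only six local entities per specialist, so even the low end of this range more than doubles that load.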
In the spring of 1998, the Bureau formally invited all 39,000 local and tribal governments in the United States to establish such committees. Although the Bureau did not expect that all 39,000 local governments would do so, the 6,800 committees formed so far represent about 40.4 million people—or only 16 percent of the U.S. population. Moreover, those local governments that do not form Complete Count Committees could add to the partnership specialists’ workload, because the partnership specialists will need to develop some other method of publicizing the census in those locations.
Operational Challenges Could Undermine Nonresponse Follow-up Efforts
The Bureau May Be Challenged to Meet Field Staffing Goals
Post-Census Day Coverage Improvement Initiatives Offer Little Hope of Reducing the Undercount
The Bureau implements a nationwide field follow-up operation in an attempt to count those individuals who did not mail back their census questionnaires. Specific activities include (1) nonresponse follow-up, during which temporary Bureau employees, known as enumerators, visit and collect census information from each nonresponding housing unit and (2) additional coverage improvement initiatives, which are aimed at collecting data from people missed during the initial enumeration and nonresponse follow-up. According to the Bureau, even if it achieves its anticipated 61-percent mail response rate, enumerators will need to follow up with 46 million nonresponding housing units. However, completing this workload in the 10-week time period the Bureau has allotted for nonresponse follow-up, without compromising data quality, could prove extremely difficult. During the 1990 Census, for example, field follow-up operations proved to be error-prone and costly, in part because a higher than expected nonresponse follow-up workload required the Bureau to hire more enumerators than originally anticipated. 
However, some local census offices could not meet the demand for additional enumerators, which delayed the completion of nonresponse follow-up. As the time spent on data collection dragged on, the rate of errors appeared to increase because people moved or could not recall who had been residing at their home on Census Day. Furthermore, to complete nonresponse follow-up, enumerators collected data from secondhand sources, such as neighbors and mail carriers—referred to as “proxy” data. However, the Bureau—on the basis of its work evaluating past census operations—has found that proxy data are not as reliable as data obtained directly from household residents. In addition, field follow-up operations are expensive. The Bureau estimates that, in 2000, the cost to enumerate a household that mails back the census questionnaire will be about $3. For those households that do not return a questionnaire—requiring enumerators to obtain the information—costs could be as high as $35 per questionnaire. The combined challenges that affected the success of the Bureau’s 1990 nonresponse follow-up operations—completing nonresponse follow-up on time, maintaining data quality, and recruiting a sufficient number of enumerators—may pose similar, if not greater, challenges for the Bureau in 2000. For the 2000 Census, the Bureau has based its $1.5 billion nonresponse follow-up budget on the assumption that it will achieve a 61-percent mail response rate, which corresponds to a follow-up workload of about 46 million of the 119 million housing units estimated to comprise the nation. However, given the Bureau’s experiences during past censuses, it will be challenged to complete the nonresponse follow-up workload on time and minimize the collection of proxy data. 
For example, during the 1990 Census, because of unanticipated workload, staffing, and scheduling problems, it took the Bureau 14 weeks to complete nonresponse follow-up on 34 million housing units—8 more weeks than the 6-week period that the Bureau initially estimated for that operation. For 2000, the Bureau has scheduled 10 weeks to follow up on an expected 46 million housing units. Under this timetable, the Bureau has 4 weeks less time to follow up on 12 million more households, when compared to 1990 (see fig. 4). Thus, to follow up on 46 million households within the 10-week time frame, the Bureau will need to complete over 657,000 cases each day for the entire 10-week period. In addition, the Bureau’s quality assurance procedures, which call for enumerators to revisit certain households to identify and correct enumeration errors, will add more than 17,000 households to the Bureau’s average daily workload. Maintaining this pace could prove difficult for a variety of factors that range from the availability of a productive, temporary workforce, to local weather conditions. According to senior Bureau officials, a mail response rate as little as 2 or 3 percentage points less than the Bureau’s 61-percent goal could cause serious problems. For example, according to Bureau officials, the Bureau has a limited number of needed materials for nonresponse follow-up. Furthermore, while the amount added to total field data collection costs as a result of any increased workload will ultimately depend on where this workload is located and how the Bureau manages its resources in completing this workload, additional costs could, nonetheless, be substantial. Each percentage point drop in the mail response rate would increase the nonresponse follow-up workload by about 1.2 million households. In 1995, the Bureau estimated that a 1 percentage point increase in workload could add approximately $25 million to the cost of the census. 
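The daily caseload figure above can be reproduced from the report's numbers; a minimal sketch, assuming follow-up proceeds every day of the 10-week period:

```python
# Average daily nonresponse follow-up caseload implied by the report's figures.
workload = 46_000_000   # nonresponding housing units expected at a 61-percent response rate
days = 10 * 7           # 10-week schedule, assuming work occurs all 7 days of each week
qa_daily = 17_000       # additional quality assurance revisits per day (from the report)

base_daily = workload / days
print(f"{base_daily:,.0f} cases per day, {base_daily + qa_daily:,.0f} including QA revisits")
# → 657,143 cases per day, 674,143 including QA revisits
```

Each percentage-point drop in the mail response rate adds about 1.2 million households to this total, which is why the Bureau considers even a 2- or 3-point shortfall serious.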
However, on the basis of our analysis of fiscal year 2000 Bureau budget estimates, we project that a 1 percentage point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the $1.5 billion budgeted for nonresponse follow-up. This $34 million in direct costs excludes, for example, indirect costs for headquarters and field support personnel, quality control operations, rent, and data processing, which may or may not be incurred. The Bureau’s ability to absorb these additional costs in its fiscal year 2000 budget will be a function of the actual outcome of other assumptions, such as enumerator productivity, and the Bureau’s ability to manage other uncertainties. Of course, a higher than expected mail response rate is possible and could result in significant savings, which the Bureau said it would use to augment its coverage improvement programs for hard-to-count populations. Completing the nonresponse follow-up workload in a timely manner will be critical to the Bureau’s collection of quality field data in 2000. According to Bureau officials, the Bureau does not plan to extend the nonresponse follow-up schedule as it did during the 1990 Census. They noted that the Bureau must meet the 10-week nonresponse follow-up schedule to have time to complete other census operations, including the coverage evaluations that will be used to estimate census under- and over-counts, and the processing and preparation of census data for publication. However, during the 1990 Census, enumerators often began collecting proxy data before reaching the 95-percent caseload completion level within each predefined housing district that Bureau procedures required. Indeed, about 36 percent of the Bureau’s district offices used proxy data when the caseload completion level was 90 percent or less. Just 16 percent of the district offices began collecting proxy data when the caseload was 95 percent or more complete. 
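The sensitivity figures can be checked the same way; note that the per-household direct cost at the end is our derivation from the report's totals, not a figure the report itself states:

```python
# Sensitivity of nonresponse follow-up workload and cost to the mail response rate.
housing_units = 119_000_000           # estimated housing units nationwide
per_point = housing_units // 100      # extra households per 1-point drop in response
cost_per_point = 34_000_000           # GAO direct-cost estimate per point (salary, benefits, travel)

implied_per_household = cost_per_point / per_point  # derived, not stated in the report
print(f"{per_point:,} households and about ${implied_per_household:.0f} "
      "in direct costs per household, per percentage point")
# → 1,190,000 households and about $29 in direct costs per household, per percentage point
```

The implied $29 in direct costs per nonresponding household sits plausibly below the $35 total per-questionnaire cost the Bureau estimates for follow-up, since the $34 million figure excludes indirect costs.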
During the dress rehearsal, while nonresponse follow-up operations were completed on schedule in both Menominee and Sacramento, and 6 days ahead of schedule in South Carolina, the Bureau found that obtaining interviews with household members proved to be more difficult than it had anticipated. As a result, the Bureau relied more heavily on proxy data than it had planned. Although the Bureau hoped to limit the portion of the nonresponse follow-up universe that was proxy data to less than 6 percent, the Bureau did not achieve this objective at any of the three dress rehearsal sites. In Sacramento, 20.1 percent of the occupied nonresponse follow-up universe was proxy data; in South Carolina, the proportion was 16.4 percent; and in Menominee, it was 11.5 percent. Because of the comparatively high use of proxy data, a Bureau evaluation of the dress rehearsal nonresponse follow-up operation noted that data quality, especially for the long-form questionnaire—which was somewhat more likely than the short form to be enumerated via proxy—was a concern. According to the evaluation, the data obtained from the long-form questionnaire are “especially suspect when obtained from a non-household member.” A number of questions also surround the Bureau’s ability to staff its nonresponse follow-up operations, which also has implications for timely and accurate field data collection. For example, while the Bureau was generally successful in staffing its dress rehearsal and initial census operations and kept turnover rates to manageable levels, the larger number of people the Bureau needs to hire in 2000, combined with a tight labor market and other factors, could pose problems. The Bureau plans to fill about 860,000 positions for peak field operations, including 539,000 for nonresponse follow-up. 
To do this—on the basis of the anticipated workload and the fact that the vast majority of people offered a position may not accept a census job or may resign before work assignments are completed—the Bureau estimates it will need to recruit nearly 3.5 million applicants (a number roughly equivalent to the population of South Carolina). However, achieving this staffing goal will not be easy because the labor market has become increasingly tight. According to the Bureau, it took this factor into consideration in setting an assumed enumerator productivity rate of 1.03 households per hour, which is based on conservative senior management judgments. This assumed productivity rate represents a 20-percent reduction from an original assumption of 1.28 households per hour. Higher than expected productivity rates could reduce the Bureau’s staffing needs. Nevertheless, as we have reported in the past, staffing the census could still be difficult because census jobs tend to be temporary and do not offer benefits, such as health or life insurance, sick or annual leave, retirement plans, and childcare, and thus, may not be as attractive to applicants as other employment opportunities. While the Bureau’s recruiting initiatives appeared to be effective during the dress rehearsal and early operations for the 2000 Census, the Bureau still encountered pockets of problems in specific geographic locations. For example, when the Bureau was conducting field operations to build the address list for the 2000 Census in the resort areas of Michigan’s Upper Peninsula and Vail, CO, the Bureau was competing for workers during the seasonal vacation period. To help attract workers, the Bureau quickly responded by raising hourly wages. According to a Bureau official, the Bureau anticipates similar pockets of recruiting problems to occur during nonresponse follow-up operations in 2000. 
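The productivity and recruiting figures above are internally consistent; a quick check (the applicants-per-position ratio is our derivation from the report's totals):

```python
# Check the productivity reduction and recruiting ratio cited in the text.
original_rate = 1.28   # households per hour, original Bureau assumption
revised_rate = 1.03    # households per hour, revised assumption for 2000
reduction = (original_rate - revised_rate) / original_rate

applicants = 3_500_000   # applicants the Bureau estimates it must recruit
positions = 860_000      # peak field positions to fill
print(f"{reduction:.0%} productivity reduction; "
      f"{applicants / positions:.1f} applicants recruited per position")
# → 20% productivity reduction; 4.1 applicants recruited per position
```

The roughly 4-to-1 recruiting ratio reflects the Bureau's expectation that most people offered a position will decline a census job or resign before assignments are completed.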
Thus, it will be important for the Bureau to monitor the progress of nonresponse follow-up and respond quickly so that it can attract needed staff. To expand the applicant pool, the Bureau plans, among other things, to (1) focus its recruiting efforts on employed individuals seeking additional jobs, retirees, and homemakers, among others; (2) develop partnerships with state, local, and tribal governments, community groups, and other organizations to assist in recruiting efforts; (3) expand its employment advertising; and (4) use a geographic pay scale to set wages at 65 to 75 percent of local prevailing wages (from about $8.25 to $21.50 per hour) to help make census jobs more competitive. The Bureau has also worked with other federal agencies to waive regulations and policies that restricted or financially discouraged certain groups of people from seeking census employment. As shown in table 3, the Department of Commerce has authorized the Bureau to bypass Commerce’s policy preference against hiring noncitizens. Also, the Department of Housing and Urban Development (HUD) waived regulations that could have reduced the benefits for heads of households receiving housing assistance because the regulations would have required census income to be included in the calculations used to determine program eligibility. To make census employment more attractive to former and current federal employees, the Office of Personnel Management (OPM) used its authority to allow federal and military retirees, as well as current federal workers, to work on the census without reducing their benefits or income. (The requirement that military retirees are to receive reduced annuities upon federal reemployment was repealed by P.L. 106-65, effective Oct. 1, 1999.) Regarding current federal employees, over 80 federal agencies employing over 2.4 million workers have authorized that their employees may hold second appointments with the Bureau. 
GAO, seeking to do its part to help ensure a successful census, is participating in this initiative. Although the precise impact these actions might have on census employment cannot be determined, the agencies’ actions could expand the potential census applicant pool by millions of people. During the 1990 Census, exemptions were made for recipients of public housing assistance and federal civilian and military retirees, and both actions helped expand the census applicant pool. According to the Bureau, about 20,000 federal and military retirees worked on the 1990 Census, which was 3.6 percent of the more than 550,000 people hired overall. Similarly, when the public housing assistance exemption was used in 1990, the Bureau found that it helped generate applicants in difficult-to-recruit areas, such as high crime, inner city areas, and Indian reservations. Congress is considering three pieces of legislation designed to improve the recruitment of temporary census workers. H.R. 683, S. 752, and S. 1588 are similar in that each would (1) exempt census income from calculations used to determine eligibility for, or the amounts payable under, any federal, state, or local program financed in whole or in part with federal funds and (2) provide a blanket exemption from income/annuity offset provisions for federal civilian annuitants and military retirees. As previously noted, this annuity offset requirement was repealed for military retirees by Public Law 106-65. The legislation, if enacted, would remove financial disincentives that could discourage a wide range of people from seeking census employment. They include recipients of Social Security, veterans healthcare, food stamp, Medicaid, and Temporary Assistance for Needy Families benefits, as well as federal and military retirees. The broadest of the measures—S. 
1588—also includes financial incentives for volunteers who help with the census, namely: reimbursements for expenses, such as gasoline and food, and a program of undergraduate or graduate debt relief. If enacted, the legislation could make census employment more attractive to millions of people. However, the bills contain restrictions that limit their applicability. In addition, other statutory provisions exist that prohibit or create financial disincentives for certain groups of people who might be interested in census employment. Regarding the restrictions contained in the measures currently before Congress, the exemption in all three bills would not apply to federally funded program beneficiaries who were appointed to temporary census positions before January 1, 2000. This could discourage some people who had worked on early census-taking operations, such as address list development activities, from seeking further census employment. At the same time, the census applicant pool is not as large as it could be because provisions contained in current laws continue to prohibit or potentially discourage large groups of people from considering census jobs. Although our review was not exhaustive, and we did not comprehensively weigh the pros and cons of each option, we identified three large sources of potential applicants who might be interested in census employment were it not for these provisions. Active duty military personnel. The Census Act allows uniformed personnel to take census jobs to enumerate members of the uniformed services. However, active duty military personnel are generally not permitted to accept outside federal employment in the absence of specific statutory authority to do so. Thus, additional statutory authority would be needed to authorize military personnel to work on the census. Doing so could increase the potential census applicant pool by over 1 million individuals. Recipients of federal government voluntary separation incentive payments. 
Since the early 1990s, as part of an effort to restructure the federal government, the Department of Defense and, later, civilian federal agencies, have had the authority to offer voluntary separation incentive payments (also known as buyouts) of as much as $25,000 to eligible employees who left federal service. For nondefense agencies, Congress has authorized both governmentwide buyouts and over 15 agency-specific buyout programs. According to information provided by OPM, most of these buyout programs contain provisions that generally require buyout recipients to repay their buyout if they accept a federal job within 5 years of their separation date. According to Bureau officials, some buyout recipients have decided against census jobs because of the repayment requirement. Approximately 59,000 buyout recipients could potentially still be covered by these provisions during peak census field operations. Noncitizens from certain countries. Most federal agencies have historically been prohibited by statute from using their appropriated funds to employ noncitizens of the United States, with certain exceptions. A statutory exemption from this appropriations restriction (currently contained in section 605 of Public Law 106-58) exists that allows agencies to use appropriated funds to employ noncitizens in limited circumstances, such as for the temporary employment of translators or in the field service as a result of emergencies. According to Bureau officials, the Bureau has used this exemption to hire temporary workers in the past and is exploring its further use for the 2000 Census. Nevertheless, a broad statutory exemption from this appropriations restriction would make it easier for the Bureau to hire noncitizens from currently nonexempt countries, such as India, Pakistan, and Brazil. 
Many of these individuals could better enumerate members of their own community because some hard-to-enumerate foreign-born residents may feel more comfortable providing information to persons with whom they share a common cultural heritage. Given the Bureau’s past history of staffing problems, the magnitude of the Bureau’s staffing challenge for 2000, and the importance of an adequate workforce to the collection of timely and accurate census data, it will be important for the Bureau to have as large an applicant pool as possible from which to hire census workers. The Bureau’s post-census coverage improvement procedures planned for 2000, while designed to improve the census count, are similar to 1990 methods that had limited success. Bureau officials believe that these procedures represent the best the Bureau can reasonably do to enhance the accuracy of the census. However, these officials also said that they doubt that either the overall accuracy levels or differential undercount rates will show much improvement over 1990 levels because societal factors that led to a high undercount in 1990 are even more prevalent today. Congress directed the Bureau to begin preparing for a traditional census in November 1997. However, the Bureau, awaiting the Supreme Court's decision on the legality of sampling, did not complete plans for a traditional census until January 1999, when the Supreme Court ruled that the Census Act prohibited the use of statistical sampling for purposes of determining the population count used to apportion the House of Representatives. As a result, according to Bureau officials, there was no time to conduct additional research to estimate the effectiveness of new coverage improvement procedures and, therefore, too much risk to justify implementing them. Thus, for 2000, the Bureau will primarily use post-census coverage improvement procedures used in 1990, which added just 3.68 million persons (1.5 percent) to the population count. 
Nevertheless, coverage improvement programs in the 1990 Census were costly and yielded data of uneven quality. For example, the Bureau’s 1990 Recanvass program—where enumerators did a second, post-Census Day canvass of addresses in selected neighborhoods to look for missed housing units—added 139,000 housing units to the census, which was 0.1 percent of the total, according to Bureau data. However, while the $14.7 million program added 178,000 people to the decennial count, the Bureau later estimated that nearly 22 percent were added in error. Overall, Bureau officials have acknowledged that post-census coverage improvement programs are expensive and do not always produce expected or hoped for results. Thus, it is unlikely that the Bureau’s post-census coverage improvement programs—programs that had limited success in 1990—will address the overall and differential undercount in 2000. With less than 4 months until Census Day, the Bureau faces some significant risks that, taken together, continue to jeopardize the success of the 2000 Census. Securing an adequate level of public participation is a great challenge with implications for the size of the nonresponse follow-up workload. Having to complete an even greater nonresponse follow-up workload than anticipated, or difficulty in filling the number of enumerator positions that the Bureau estimates it will need for this operation, would have implications for scheduling as well as data quality. Because of these combined risks, the 2000 Census may be less accurate than 1990. Given the operational uncertainties surrounding public participation in the census and the Bureau’s field follow-up operations, it will be important for the Bureau to have contingency plans in place to mitigate the impact of a lower-than-expected response rate. 
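The Recanvass totals imply some unit figures worth noting; a sketch (the per-person cost is our derivation, and the 22-percent error share is the report's "nearly 22 percent" rounded):

```python
# Derived unit figures for the 1990 Recanvass program; the report states only totals.
program_cost = 14_700_000   # dollars spent on the Recanvass program
people_added = 178_000      # people added to the decennial count
error_share = 0.22          # share later estimated to be added in error ("nearly 22 percent")

cost_per_person = program_cost / people_added
erroneous = people_added * error_share
print(f"about ${cost_per_person:.0f} per person added, "
      f"roughly {erroneous:,.0f} of them added in error")
# → about $83 per person added, roughly 39,160 of them added in error
```

Figures like these are what lead Bureau officials to acknowledge that post-census coverage improvement programs are expensive relative to the counts they add.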
Because of the little time remaining and the need for senior Bureau managers to devote the bulk of their attention to effective execution of the census plans already in place, such contingency plans will be most useful if they focus on the critical challenges and trade-offs that the Bureau will face—such as the need to balance schedule pressures with the need to protect data quality—if its response rate goals are not met. In addition, even though the Bureau has already taken steps to expand the census applicant pool, additional statutory measures could be needed, given the Bureau’s past history of staffing problems and the magnitude of the Bureau’s staffing challenge for 2000. To help expand the census applicant pool, Congress may wish to consider legislative actions to modify legal provisions that potentially discourage or prohibit specific groups of people from seeking census employment. Options could include: expediting its consideration of H.R. 683, S. 752, and S. 1588, which, among other things, would remove financial disincentives that could discourage recipients of Social Security, veterans healthcare, food stamp, Medicaid, and Temporary Assistance for Needy Families benefits, as well as federal and military retirees, from seeking census employment; allowing active duty military personnel to hold temporary census jobs; exempting former federal employees who received voluntary separation incentives (buyouts) from requirements to repay their buyout amount if they work on the census; and providing a statutory exemption from the appropriations restriction currently contained in section 605 of Public Law 106-58, for purposes of temporary census employment. Although we recognize that each of these options entails policy, budgetary, and implementation considerations that would need to be addressed by Congress, they represent an initial list of options that Congress could consider to help reduce the Bureau’s staffing burden. 
To help ensure an accurate and cost-effective census, we recommend that the Director, Bureau of the Census, develop a contingency plan of actions the Bureau can take to address the operational challenges that would result from a questionnaire mail response rate that is lower than anticipated. At a minimum, the Bureau’s plan should address the budgetary, scheduling, staffing, and other logistical implications of collecting data from a larger number of nonresponding households. The contingency plan should also include options and procedures to balance the pressure to meet census schedules with the need to limit the use of proxy data. The Bureau should share its plan with Congress and others to demonstrate its preparedness for collecting accurate census data in the event of lower-than-expected levels of public cooperation with the census. The Secretary of Commerce forwarded written comments from the Bureau of the Census on a draft of this report. Overall, the Bureau commented that the draft report conflicted with the intent of our earlier reports, which, according to the Bureau, concluded that there is little time to make final census design changes and to implement them, as Census Day approaches. The Bureau noted that early in the decade, it recognized the challenges to conducting a complete and thorough nonresponse follow-up operation, and it planned to address these challenges using statistical sampling to adjust census population counts. However, as we noted in our draft, in January 1999, the Supreme Court ruled that the Census Act prohibited the use of statistical sampling for purposes of determining the population count used to apportion the House of Representatives. Our draft also noted that according to the Bureau, there was insufficient time to develop, test, and implement new coverage improvement programs. 
The Bureau also said that our contention that it has not sufficiently planned for potential shortcomings in nonresponse follow-up or outreach and promotion operations appears to contradict our September 1999 report on the Bureau’s fiscal year 2000 amended budget request. According to the Bureau, in that report, we characterized the Bureau’s expectations of enumerator productivity during nonresponse follow-up, as well as the effectiveness of the advertising campaign, as being “generally conservative.” We revised the draft to include the language from our September report, which noted the Bureau’s assumed enumerator productivity rate is 1.03 households per hour. This new assumed productivity rate represents a 20-percent reduction from an original assumption of 1.28 households per hour and was primarily based on senior management judgments—which the Bureau acknowledged are very conservative—about factors such as the uncertainty of hiring a sufficient number of quality temporary workers in a tight labor market. We also have added language to this report noting that higher-than-expected productivity rates could reduce staffing needs. Our September report did not state that the Bureau’s expectations of the impact of its outreach and promotion campaign are conservative. Rather, the September report states that the Bureau had no data available to support how much, if any, the Bureau’s plans to increase the amount of census advertising would increase the response rate. Overall, the Bureau acknowledges that completing nonresponse follow-up on time, hiring and training needed staff, and implementing a successful outreach and promotion campaign will be a challenge—overriding themes of our report. The Bureau further commented that there is inconsistent analysis supporting the conclusions in our draft. According to the Bureau, in some instances, the dress rehearsal is used to support certain conclusions (e.g., the mail response rate in 2000 will be difficult to achieve).
In other cases, the Bureau notes that conclusions are drawn that directly contradict the dress rehearsal findings. We disagree with the Bureau’s reading of the draft report. We were very careful in drawing the lessons from the dress rehearsal and applying them to the 2000 Census. For example, in discussing the challenges to motivating the public to respond to the census, we also noted that an augmented advertising campaign is planned for the 2000 Census and that mail response rates for the actual census tend to be higher than response rates obtained for a dress rehearsal. Regarding staffing, we noted that overall, the Bureau met its dress rehearsal staffing goals, but that the Bureau encountered pockets of problems in areas with especially difficult labor markets. The draft noted that due to the number of staff the Bureau will need to hire in 2000 and the historically tight labor market, the Bureau faces a substantial challenge—a view consistent with the Bureau’s. The Bureau’s final comment concerned our recommendation calling on the Bureau to develop a contingency plan of actions it can take if it receives a lower-than-expected mail response rate to the census questionnaire. The Bureau noted that its paramount objective in the months remaining before Census Day is to implement the procedures and operations that have already been planned. The Bureau commented that the only serious contingency would be to request a supplemental appropriation. We agree that the Bureau needs to concentrate on successfully implementing the procedures and operations already planned for the 2000 Census and, as our draft noted, a lower-than-expected mail response has major cost implications. Nevertheless, prudent management and past history suggest that developing a reasonable contingency plan is an appropriate course of action. Additional funding will not by itself make up for a lower-than-expected mail response rate. 
It is questionable whether the additional enumerators that the Bureau will need to complete the resulting increase in the nonresponse follow-up workload will be available given the staffing challenges described in our draft. By considering scheduling, data quality, staffing, and other logistical implications of a lower-than-expected mail response now—while time is still available—the Bureau could be better prepared to maintain the accuracy of census data. We are sending copies of this report to the Honorable William M. Daley, Secretary of Commerce, and the Honorable Kenneth Prewitt, Director of the Bureau of the Census. Copies will be made available to others on request. This report was prepared under the direction of J. Christopher Mihm, Associate Director, Federal Management and Workforce Issues. Please contact Mr. Mihm on (202) 512-8676 if you have any questions. Key contributors to this report are included in appendix II. In addition to those named above, Victoria Miller O’Dea, Victoria E. Miller, Lynn M. Wasielewski, Anne K. Rhodes-Kline, James M. Rebbe, Scott McNulty and Cindy S. Brown Barnes made key contributions to this report.
Pursuant to a congressional request, GAO provided information on the Year 2000 census, focusing on: (1) the need to boost the declining level of public participation in the census; and (2) the Census Bureau's need to collect timely and accurate data from nonrespondents. GAO noted: (1) with less than 4 months remaining until Census Day, significant operational uncertainties continue to surround the Bureau's efforts to increase participation in the census and to collect timely and accurate field data from nonrespondents; (2) key to a successful census is the level of public participation, as measured by the questionnaire mail response rate; (3) however, the response rate has been declining since 1970, in part because of various demographic and attitudinal factors, such as more complex housing arrangements and public mistrust of government; (4) based on the 1998 dress rehearsal for the 2000 Census, the Bureau estimates a 61-percent mail response rate in 2000; (5) however, this goal may be optimistic because: (a) a key ingredient of the dress rehearsal mail response rate - a second "replacement" questionnaire - will not be used in 2000 because the Bureau is concerned that the questionnaire could confuse recipients, which could lead to duplicate responses, and (b) while the Bureau has instituted an extensive outreach and promotion effort to help it achieve its desired response rate, dress rehearsal results suggest the Bureau still has not resolved the long-standing challenge of motivating public participation in the census; (6) the Bureau's ability to complete its field operations on time without compromising data quality is another significant risk to a successful census; (7) past experience has shown that following up on nonresponding households is one of the most error-prone and costly of all census-taking activities, requiring the Bureau to fill about 860,000 positions and recruit up to 3.5 million people; (8) even if the Bureau achieves its 61-percent mail response 
rate objective, it will have a nonresponse follow-up workload of 46 million housing units; (9) to complete this workload in the 10-week time frame that the Bureau has allocated, it will need to close an average of 657,000 cases every day; (10) however, a lower-than-expected mail response rate, difficulties in recruiting a sufficient number of workers in a tight labor market, and a variety of other factors, could undermine the Bureau's efforts and result in higher costs and less accurate data; and (11) while the Bureau has established post-census coverage improvement procedures to improve the accuracy of the 2000 Census data, these procedures are similar to 1990 methods that had limited success.
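The follow-up workload arithmetic in points (8) and (9) can be sanity-checked with a short script. This is an illustrative sketch: the 46-million-unit workload, the 61-percent response goal, and the 10-week window are the report’s figures, while the implied national housing-unit total and the per-point sensitivity are derived here and are not numbers the report states.

```python
# Hedged sanity check of the nonresponse follow-up arithmetic summarized above.
# The 46-million-unit workload, 61-percent mail response goal, and 10-week
# follow-up window come from the report; the implied national housing-unit
# total is derived here for illustration and is NOT a figure the report states.

WORKLOAD_AT_GOAL = 46_000_000   # nonresponse follow-up housing units at a 61% response rate
RESPONSE_GOAL = 0.61            # Bureau's estimated mail response rate
FOLLOW_UP_DAYS = 10 * 7         # 10-week field window, in days

# Average daily case closures needed to finish on schedule
# (the report cites an average of about 657,000 cases per day).
cases_per_day = WORKLOAD_AT_GOAL / FOLLOW_UP_DAYS

# Housing-unit universe implied by the two figures above (an inference).
implied_units = WORKLOAD_AT_GOAL / (1 - RESPONSE_GOAL)

def follow_up_workload(response_rate: float, total_units: float = implied_units) -> float:
    """Nonresponse workload if the mail response rate differs from the goal."""
    return total_units * (1 - response_rate)

print(f"cases per day at the 61% goal: {cases_per_day:,.0f}")
print(f"implied housing units (derived): {implied_units:,.0f}")
# Each percentage point of response below the goal adds roughly this many households:
print(f"added workload per point of response: "
      f"{follow_up_workload(0.60) - follow_up_workload(0.61):,.0f}")
```

Under these derived assumptions, each percentage point of mail response below the 61-percent goal would add roughly 1.2 million households to the follow-up workload, or about 17,000 additional cases per day over the 10-week window, which is why a lower-than-expected response rate carries the cost and accuracy risks the report describes.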
The United States provides military equipment and training to partner countries through a variety of security cooperation and assistance programs authorized under Title 22 and Title 10 of the U.S. Code as well as various public laws. When foreign partners choose to use the FMS program, they pay the U.S. Government to administer the acquisition of materiel and services on their behalf. The United States also provides grants to some foreign partners through the Foreign Military Financing (FMF) program to fund the partner’s purchase of materiel and services through the process used for FMS. DOD administers a number of security cooperation programs that focus on building partner capacity with appropriated funds. The Afghanistan Security Forces Fund and the authority to build the capacity of foreign security forces are examples of such security cooperation programs. The security assistance services provided through these programs use the same workforce to manage and acquire military equipment and services as the FMS program and are referred to as pseudo-FMS. Both FMS and pseudo-FMS program administrative costs are funded through FMS case surcharges that are administered through the FMS Trust Fund. Figure 1 shows an F-15 Eagle fighter, which is an example of an item that has been procured under FMS. DSCA administers all FMS and pseudo-FMS cases and works with various implementing agencies to execute them. DSCA’s workforce establishes security assistance procedures and systems and provides training, oversight, and guidance. DSCA’s workforce also implements a small number of cases. The workforces of the implementing agencies and their components are responsible for preparing, processing, and executing security assistance agreements. This includes working with foreign partners to determine requirements and managing cases. 
Fourteen agencies and DOD components act as implementing agencies, including the three military services—the Army, the Air Force, and the Navy—which manage the vast majority of FMS and pseudo-FMS cases. Each service has a designated component that leads and coordinates the development and implementation of FMS and pseudo-FMS cases: the Deputy Assistant Secretary of the Army for Defense Exports and Cooperation, the Deputy Under Secretary of the Air Force for International Affairs, and the Navy International Programs Office. While the many steps of the process used for FMS and pseudo-FMS cases can be grouped in different ways, they fall into five general phases: assistance request, agreement development, acquisition, delivery, and case closure. FMS and pseudo-FMS transactions follow the same five-phase process, but the roles, responsibilities, and actors involved can differ. For example, in the assistance request phase for FMS cases, the partner country identifies its needs and drafts a letter of request. For a pseudo-FMS case, the U.S. combatant commands and in-country security cooperation organizations identify needs and draft the request. In the agreement development phase, for an FMS case, the implementing agency, with input from the partner country, develops an assistance agreement called a Letter of Offer and Acceptance (LOA). For a pseudo-FMS case, the implementing agency prepares the assistance agreement. See figure 2 for the differences in each phase under the two programs. DSCA uses workload data obtained from two DOD systems to determine each service’s future year funding for FMS and pseudo-FMS program administrative costs. In fiscal year 2016, the three services began reporting to DSCA on seven quantifiable workload measures that together capture the workload required to implement FMS and pseudo-FMS cases.
According to DSCA officials, these seven workload measures are used in a workload model to generate a funding target for each military department, which DSCA allocates. Based on the funding allocation, the military departments then determine the FMS administrative surcharge workforce levels for each component that processes FMS cases. According to DSCA officials, before developing these seven measures, workload was self-reported to DSCA by the military departments based on an estimation of their respective FMS sales and other factors, such as undelivered value. The implementing agencies use the Defense Security Assistance Management System (DSAMS) to write their FMS and pseudo-FMS cases. However, each military department also uses several systems, specific to each military department, to manage their execution of these programs. Since 2009, we have reported that DSCA has been working to replace these various military department-specific legacy data systems with the Security Cooperation Enterprise Solution (SCES). With SCES, DSCA aims to improve its communications with the services and to increase the efficiency of security cooperation programs. According to DSCA, once deployed, SCES will serve as the primary requisition system for DSCA and the three services. DSCA has established three measures of performance relating to FMS case duration but has not met its goals for two of these metrics and does not collect information on the third. The first metric tracks the time taken from the receipt of a partner country’s request to the transmission of a completed LOA to the partner country for approval. DOD’s timeliness has improved, but it is not meeting this metric’s goal of 85 percent of LOAs sent to partner countries within established time frames. DOD tracks performance and establishes goals based in part on the complexity of the cases. 
Specifically, simple cases that involve routine or repeat purchases of the same item, such as spare parts, training, and technical support, have an anticipated offer date (AOD) goal of 45 days. Standard cases that include purchases by experienced users of FMS, such as a purchase of a Blackhawk helicopter with all associated equipment and services, have an AOD goal of 100 days. Complex cases involve factors that are expected to substantially impact the time taken to complete the LOA or involve significant modifications and, therefore, have an AOD goal of 150 days. For example, the sale of new F-35A Joint Strike Fighter Conventional Take Off and Landing aircraft, which includes spares, support equipment, technical orders, contractor services, program management, software support, and training, would be categorized as complex. Table 1 shows the percentage of FMS cases meeting the timeliness goal for each type of LOA. In addition, appendix II provides information on how FMS time frames compare with pseudo-FMS time frames. The second metric is the time taken for the review of FMS cases as they are processed through DSCA headquarters. DSCA established this performance measure in late 2013, with a goal of 1 day. However, we found that, based on data provided by DSCA, it is not meeting this 1-day headquarters review goal, and its performance with respect to this goal has declined over time. The DSCA-provided data show that in fiscal year 2016, the average review time was approximately 1.97 days, up from 1.47 days in 2014. DSCA officials said that they would revisit the goal if the average approval time were to exceed 2 days. Table 2 shows the average time DSCA has taken to approve LOAs from fiscal year 2014 to 2016. DSCA officials cited various factors adversely affecting the timeliness of FMS cases, including shifting partner country requirements and delays due to policy or financial reasons. 
For example, they reported that sales have been put on hold as the United States tried to influence human rights policies in some countries. In addition, DSCA officials said that when pseudo-FMS cases, such as those to build partner capacity, are prioritized due to the possibility that the availability of appropriated funds may expire, traditional FMS cases are delayed as the workforce shifts priorities. DSCA could not identify the amount of time these factors can add to the FMS process. While DSCA hosts meetings with the military departments periodically to review data, the evidence DSCA provided to document the results of these meetings did not include an analysis to identify the underlying causes of the failure to meet goals. DSCA officials said that, collectively, the information systems of the implementing agencies and DSCA could potentially be used to determine the amount of time that each factor costs in the process but that doing so would be time-consuming and difficult. However, because they have not conducted this analysis, DSCA officials could not substantiate the relative importance of the various factors affecting the timeliness of the FMS process. Federal internal control standards state that management should establish activities to monitor performance measures and indicators. These may include comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. These standards also require that management conduct reviews at the functional or activity level to compare actual performance to planned or expected results throughout the organization and analyze significant differences. 
Without such an analysis, DSCA may be unable to accurately identify the underlying root causes of its weak timeliness performance on FMS cases, implement effective corrective measures, or analyze and apply lessons learned across the security cooperation enterprise. The third metric is the time taken for DOD to deliver the first item or service on an FMS case to the recipient country. DSCA established this metric in August 2013, noting that the quick delivery of the initial equipment and services is most important to U.S. partners. DSCA established a goal that 50 percent of all LOAs for a given purchaser country deliver the first article, service, or training within 180 days. This goal recognized that a number of complex systems cannot be produced within 180 days, but the majority of cases do not involve complex acquisitions. DSCA officials stated that they have not collected data on this delivery measure, although some of it resides within the individual departments; thus, DOD’s performance on this metric cannot be assessed. Federal standards for internal controls state that management should use quality information to achieve the entity’s objective, including obtaining relevant data. Until DSCA collects information on this metric, it will not be able to determine whether it is achieving its goal. During fiscal year 2009 through 2016, the military departments’ FMS workload and workforce generally increased; over the same time period, DSCA, which does not have workload measures for its FMS workforce as a whole, experienced a decrease in its workforce. Although DSCA is required to develop a strategic workforce plan, and has plans to develop one, it has not yet done so. Fiscal year 2009 through 2016 data show that, for each military department, the overall workload to process FMS and pseudo-FMS cases generally increased, as did their overall workforces.
DSCA and Army officials stated that, while no one measure can provide a full picture of the military departments’ workloads, implementing agencies use case line data to measure workload trends because those data capture the amount of work needed to procure each item in an FMS or pseudo-FMS case. However, DSCA officials also noted that because the military departments do not present case line data uniformly, the data cannot be compared to one another. For example, while the Army’s logistics system requires each item to be requisitioned separately, and typically identifies each component in a weapons system as a separate case line, the other military departments do not. According to DSCA, workload trends can be affected by several factors, including partner country budgets, exchange rates, prices of staple goods, import and export restrictions, and regional instability. As a result, we used case line data to show workload trends for each of the services separately, along with their individual workforce trends in full-time equivalents (see tables 3, 4, and 5). The upward trend in case line data presented in each of these tables is generally consistent with other workload measures, which are shown in appendix III. The data indicate that, from fiscal year 2009 through 2016, the Army’s workload increased by about 47 percent, and its actual workforce increased by about 42 percent (see table 3). The data also show that, during the same time period, the Air Force’s workload increased by about 48 percent, while its actual workforce increased by about 45 percent (see table 4). Finally, the data show that the Navy’s workload during fiscal year 2009 through 2016 increased by about 38 percent, while its actual workforce increased by about 45 percent (see table 5). In contrast to the upward trend in the workforces of the three military departments, DSCA maintained a relatively stable workforce from 2009 through 2014 but then experienced a decrease in the workforce since 2014. 
DSCA officials attributed this decrease to the transfer of some responsibilities and staff to another DOD component. In percentage terms, from fiscal year 2009 through 2016, DSCA’s FMS authorized workforce dropped by 3 percent, while its actual FMS workforce decreased by 19 percent. See table 6 for DSCA’s authorized and actual workforce that directly processes FMS and pseudo-FMS. DSCA officials said that the drop in personnel has not adversely affected their capacity to process FMS and pseudo-FMS. However, they expressed concern that a continued drop—combined with continued increases in the workload— could adversely affect DSCA’s capacity to review, coordinate, and perform financial management for FMS and pseudo-FMS cases. DSCA has not developed a workforce plan, although it is required to do so. A DOD June 2016 requirement related to strategic human capital planning applies to DOD components, such as DSCA, and calls for the component heads to develop, manage, execute, and assess their component’s strategic workforce plans, including manpower allocations and resources. Under this requirement, components are expected to also establish a methodology to assess the current state of their respective workforces, identify skill and competency gaps and strengths, and forecast emerging and future workforce requirements to support DOD’s mission. In October 2014, DSCA released a 6-year strategic plan to improve how security cooperation programs are implemented, taking into account the complexity of DOD’s security cooperation workforce. This plan, called “Vision 2020,” was updated in October 2015 and October 2016. As part of the Vision 2020 update, DSCA announced that it would address issues involving DSCA’s headquarters workforce in a human capital strategic plan. DSCA reported in October 2016 that it planned to complete the new human capital strategic workforce plan by October 2017 and to publish it separately from the Vision 2020 strategic plan. 
However, according to DSCA officials, as of May 2017, DSCA had not yet begun developing its plan. DSCA officials stated that the plan would probably not be ready by the planned issue date. In commenting on a draft of this report, DSCA stated that it was working to obtain the contractor support needed to develop a human capital strategic plan. DSCA also stated that it planned to obtain the contractor support by the end of fiscal year 2017 and to complete the preparation of a human capital strategic plan within eight months of having obtained the contractor support. Moreover, DSCA stated that it was in the process of updating various human capital-related instructions and policies. In addition, as discussed previously, DSCA has not met its performance time frames for reviewing FMS and pseudo-FMS. While DSCA collects FMS workload data from the military departments, it does not have workload measures for its own FMS workforce as a whole. In commenting on a draft of this report, DSCA stated that it has developed workload models and measures to help identify specific needs and provided evidence for two such models, but it did not state that it has workload measures for its FMS workforce as a whole. DSCA officials also stated that the nature of their work, which, among other things, involves overseeing the processing of cases, preparing policy memos and congressional notifications, and managing the FMS Trust Fund, makes it difficult to collect and quantify appropriate workload measures for their workforce. However, without such key workload data, DSCA cannot determine the cause for the decrease in timeliness as measured by their metric and whether a declining FMS workforce is contributing to it. In May 2017, DSCA officials stated that they did not intend to include a workload measure in their forthcoming human capital strategic plan. 
According to Vision 2020, the human capital strategic plan, when complete, is intended to enable DSCA to align its human capital to support the goals of the agency’s strategic plan. Since 2001, we have developed a significant body of work related to strategic workforce planning. The work stresses the importance of strategic workforce planning that includes, among other things, aligning an organization’s human capital program with its current and emerging mission and programmatic goals. Workforce planning that is linked to an agency’s strategic goals is one of the tools agencies can use to systematically identify the workforce needed for the future and to develop strategies for shaping this workforce. One tool for ensuring that an agency’s workforce is aligned with its current and emerging mission and program goals is the use of appropriate workload measures, particularly quantifiable workload measures, since these enable the agency to analyze and assess workload trends with more precision. Because DSCA does not have workload measures for its FMS workforce as a whole, it cannot be certain if its forthcoming strategic workforce plan will be aligned with current and emerging FMS mission requirements. DOD has taken some steps to address long-standing concerns about the timeliness of FMS delivery. GAO, DOD’s Inspector General, and others have made numerous prior recommendations to improve the FMS process. DOD has taken steps to address three of the recommendations GAO made in 2012 but has yet to implement the fourth—the establishment of a performance measure to assess timeliness for the acquisition phase of the security assistance process. Similarly, DOD has implemented most of the recommendations made by the DOD Inspector General. Furthermore, DOD has taken steps to address recommendations made by its Security Cooperation Reform Task Force in 2011 and 2012, but further steps are needed. In a 2012 report, GAO made four recommendations to improve the FMS process. 
In 2012, we recommended that to improve the ability to measure the timeliness and efficiency of the security assistance process, the Secretary of Defense should establish performance measures to assess timeliness for the acquisition phase, the delivery phase, and the case closure phase of the security assistance process. To improve the ability of officials responsible for security cooperation to obtain information on the acquisition and delivery status of assistance agreements, we also recommended that the Secretary of Defense establish procedures to help ensure that DOD agencies are populating security assistance information systems with complete data. DOD has not established a performance measure to assess timeliness for the acquisition phase of the security assistance process. According to DSCA officials, they do not own the information systems or databases that have the data necessary to measure the timeliness of the acquisition phase. However, DOD has taken steps to respond to the other three recommendations. Specifically, DSCA updated the Security Assistance Management Manual (SAMM) in August 2013 to include a metric for the delivery phase of standard requests. However, as discussed earlier in this report, information provided by DSCA did not include timeliness data for the delivery phase of the process, and DSCA officials reported that they are not collecting data for this performance metric. In addition, DSCA reported that it developed a tool within the Security Cooperation Management Suite to capture case closure data from implementing agencies. As of September 2015, DSCA reported that the tool now included metrics and measurements for the “aging” of cases within the different milestones and allowed for analysis of closure times based on the type and relative complexity of categories or cases. Finally, in response to our fourth recommendation, DSCA reported taking a number of actions. 
In early May 2014, it finalized programming of the Enhanced Freight Tracking System (EFTS), which allows a daily upload of available data for FMS and pseudo-FMS materiel. DSCA reported that by mid-May 2014, EFTS had in-transit visibility of over 75 percent of shipments. In addition, DOD reported the establishment of an electronic link between two of its information systems, which should improve the ability to share contract information. According to DSCA officials, the Security Cooperation Enterprise Solution (SCES) was proposed as a way to solve the problems stemming from the older, unique information systems maintained by each of the military services, which contribute to DSCA’s inability to facilitate data collection and analysis across the military departments. In 2012, DSCA officials told us that they would begin piloting the new system in 2015 and that SCES would be fully implemented by 2020. According to DSCA officials, SCES deployment is behind schedule, and the timeline is being revised. The pilot SCES deployment began on June 6, 2016. DSCA is working to convert data currently in legacy systems and, once a sufficient number of cases have been successfully executed in the pilot phase, limited deployment will occur. As of May 2017, DSCA officials could not provide a date for when this will happen. Three reports by the DOD Inspector General, issued between 2009 and 2013, contained 10 recommendations related to FMS, 8 of which DOD implemented. Examples of actions DOD has taken include the following: A 2013 audit found that an FMS contract involved an unallowable markup and made five recommendations to improve contracting quality assurance procedures. For example, the Inspector General recommended that the Deputy Assistant Secretary for Contracting, Office of the Assistant Secretary of the Air Force for Acquisition, assess practices for negotiating contracts and establish quality assurance procedures for contracting officers.
In response, the Air Force established additional levels of oversight for contracting personnel and planned to create and implement additional training. A 2010 audit found that, although DSCA ensured that funds appropriated for assistance to Afghanistan and Iraq that it processed through the FMS network were used for their intended purpose and were properly reported, improvements were needed to ensure effective management of appropriated funds. Three recommendations were made. For example, the Inspector General recommended that the Director of DSCA perform a review of appropriated funds that have expired to return excess funds to the original fund holders. DSCA agreed to review excess appropriated funds that expired in previous fiscal years and to return unneeded funds. In addition, a 2009 audit evaluated the cash management of the FMS Trust Fund, determined whether internal control was adequate, and reviewed the management control program in place for the FMS Trust Fund. Two recommendations were made, including that DSCA discontinue transferring funds appropriated for the Afghanistan Security Forces Fund and Iraq Security Forces Fund to the Foreign Military Sales Trust Fund, and discontinue the use of administrative fee surcharges for certain transactions. However, DSCA did not concur with either recommendation and has not taken action to implement them. At the request of the Secretary of Defense, DOD created the Security Cooperation Reform Task Force (Task Force) in 2010 to study ways to improve security cooperation and security assistance programs, including FMS. The Task Force produced two reports. In its first report, the Task Force made more than 50 recommendations addressed to the Secretary of Defense for improving security cooperation processes. The second report provided information on the status of the recommendations. Our review of these two reports identified 17 recommendations that the Task Force addressed to DSCA. 
We found that DOD had implemented one recommendation and partially implemented 16 recommendations, as summarized in table 7. One of the Security Cooperation Reform Task Force recommendations was that DOD “maintain an inventory of high-demand and long lead-time items via the Special Defense Acquisition Fund (SDAF).” The SDAF is a revolving fund that allows DOD and State to purchase select types of defense equipment and services in anticipation of partner countries’ future FMS needs. The fund reduces the amount of time it takes the United States to provide some items and enhances U.S. readiness by reducing the need to divert assets to meet urgent partner needs. The best candidates for purchase through SDAF are items that take a long time to purchase, make, and deliver. According to documents provided by DSCA, the use of the SDAF over the last 5 years has facilitated the sale of about $584 million in procurements to purchase equipment for about 45 countries worldwide. For example, the DSCA response shows that the SDAF has been used to purchase a stock of night vision devices that typically have procurement lead-times of more than 18 months. DSCA reports that this has allowed the United States to transfer the devices more quickly to meet the urgent needs of partner countries, including Afghanistan and Iraq. Overall, SDAF has cut FMS procurement lead-times for key equipment by 6 months or more, according to the documents provided by DSCA. DSCA officials provided some evidence that actions had been taken to address the remaining 16 recommendations. We consider these recommendations as partially implemented because the evidence indicated that not all aspects of the recommendation were addressed. For some of these recommendations, DSCA officials stated that the recommended measures, or similar measures, would be undertaken as part of the reforms mandated by the fiscal year 2017 NDAA.
For example, one of the task force’s recommendations was for DSCA to establish and deploy Expeditionary Requirements Generation Teams (ERGT) to partner countries. The task force recommended the ERGTs to provide rapid support to partner countries and the U.S. country teams in developing high-quality, precise requirements for security cooperation cases. According to DSCA officials, the expeditionary teams were popular with partner country officials who were relatively inexperienced with the FMS process. DSCA officials said that the expeditionary teams have been used only three times because forming and deploying the teams turned out to be expensive and disruptive to the processing of other FMS cases. DSCA was unable to provide documentation for the number of times and for which countries ERGTs were used, what results were obtained from using ERGTs, or a formal determination of the effectiveness of using the ERGTs. Another recommendation called for DSCA to “update the Security Assistance Management Manual (SAMM) and amend the Defense Federal Acquisition Regulation Supplement (DFARS) to direct that implementing agencies—in specific instances when sensitive or classified materials are being transported—use a clear, comprehensive “pre-case transportation assessment” document for assessing transportation and distribution requirements following receipt of a Letter of Request and before issuance of an LOA.” According to DSCA officials, the change to the SAMM was in the final stages of being approved as of May 2017. The DSCA response to our request stated that, contrary to the recommendation, officials were not considering an amendment to DFARS for this purpose. Foreign Military Sales totaled about $300 billion between fiscal years 2009 and 2016. In that time, the FMS workforce and workloads of the three military departments have grown significantly, while the DSCA workforce has decreased.
Since 2009, DOD has implemented a number of reforms designed to improve its capacity to deliver FMS assistance in a timely manner. However, although performance for the program has improved, two of the performance measures set for the program are generally not being met. DSCA has not sufficiently analyzed the reasons for not meeting these goals. For the third metric established to monitor the timeliness of the delivery phase, DSCA is not collecting data and therefore does not know how it is performing against this goal. Without a comprehensive analysis of the entire FMS process facilitated by the collection of data, DOD is unable to identify the reasons it is not meeting its performance goals and to target efforts to address those reasons. Further, as part of its Vision 2020 strategy, DSCA reported that it would develop a strategic workforce plan by October 2017, but as of July 2017, DSCA had not yet begun developing the plan. Finally, DSCA lacks workload measures for its FMS workforce as a whole and, without such key data, cannot be certain that its workforce plan, when complete, will meet current and emerging program requirements. We are making the following four recommendations to DOD:

The Acting Director of DSCA should take steps to ensure the collection of data measuring the timeliness of the delivery of equipment and services to recipient countries. (Recommendation 1)

The Acting Director of DSCA should analyze data on all performance metrics to better identify deficiencies. (Recommendation 2)

The Acting Director of DSCA should develop a workforce plan. (Recommendation 3)

The Acting Director of DSCA should develop workload measures for its FMS workforce. (Recommendation 4)

We provided a draft of this report to DOD and State for their review and comment. DOD’s written comments are reproduced in appendix IV, and its technical comments were incorporated as appropriate. State did not provide comments.
In its comments, DOD partially concurred with our first and second recommendations, concurred with our third recommendation, and did not concur with our fourth recommendation. In partially concurring with our first recommendation, DOD stated that it intends to rescind or replace, in the near future, the metric established to measure the time DOD takes to deliver the first items to recipient countries. DOD commented that collecting data on when the first spare part or support equipment is delivered does not provide meaningful data. DOD also stated that DSCA will work with the implementing agencies to establish a metric that will be useful in tracking the delivery of defense articles and services to recipient countries. Based on these comments and following discussions with DSCA officials, we revised the recommendation to clarify the steps DSCA should take. In partially concurring with our second recommendation, DOD stated that it will continue to gather and analyze data on performance metrics for which it has established timelines and where the data are available in security assistance or cooperation data systems. While we agree that these actions are useful for DOD to oversee the execution of security assistance, we continue to believe that DOD needs to improve its analysis of performance data in order to identify the root causes of any delays and determine the steps needed to improve the timeliness of the process. In concurring with our third recommendation to develop a workforce plan for DSCA, DOD stated that DSCA is working with DOD’s Washington Headquarters Services on workforce planning. DOD also stated that DSCA is working to determine hard-to-fill positions and lay out a plan for filling such positions and identifying gaps caused by attrition. 
In addition, DOD stated that it planned to obtain contractor support by the end of 2017 in order to develop a human capital strategic plan and planned to publish the strategy within 8 months of obtaining contractor support. In disagreeing with our fourth recommendation, DOD noted that there are not enough measurable requirements within a headquarters activity to provide meaningful workload determinations, that the workload at headquarters is independent of FMS volume, and that the broad responsibilities across the agency have little relevance from one area to another. We have clarified the report to reflect that agencies can develop more than one workload measure and to more clearly refer to the development of appropriate workload measures for the agency’s FMS workforce. We also clarified our recommendation to reflect that agencies can have more than one workload measure. However, while we recognize that the work performed by some organizations may be challenging to measure, we continue to believe that a reliable measure of workload is integral to effective workforce planning. We are sending copies of this report to the Secretaries of Defense and State and appropriate congressional committees. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. House Report numbers 114-154 and 114-537 include provisions for GAO to assess the Foreign Military Sales (FMS) process.
This report assesses (1) the extent to which the Department of Defense (DOD) has met performance goals with respect to the timeliness of the FMS process, (2) DOD’s FMS workforce planning efforts and fiscal year 2009 through 2016 FMS workload and workforce trends, and (3) the actions DOD has taken to address recommendations made since 2009 to improve FMS. In addition, appendix II provides information about how the timeframes for processing FMS compare with the timeframes for processing certain security cooperation cases authorized under Title 10 of the U.S. Code and various public laws. To perform our assessment, we first identified the principal agencies and components that process FMS. Although 14 U.S. government agencies and DOD components process FMS cases, the Departments of the Army, Air Force, and Navy process 95 percent of all FMS cases. For this reason, our review focuses on the FMS cases processed by these military departments. Because the Defense Security Cooperation Agency (DSCA) also plays a key role in the FMS process, we also included DSCA in our assessment. We also collected data and met with officials of the Defense Logistics Agency and the Defense Contract Management Agency to better understand the role these agencies play in supporting the FMS process. In addition, we collected data from and met with officials from the Department of State, which is responsible for supervising and directing FMS. We reviewed established DSCA and military department performance goals and determined the extent to which those goals were being met, and, where applicable, the factors contributing to agencies not meeting those goals. 
We interviewed officials from State, DSCA, and the military departments and reviewed guidance in the Security Assistance Management Manual to identify existing performance goals across FMS case development and case execution and to identify the information DSCA and the military departments collect to assess their performance against established goals. We reviewed GAO’s Standards for Internal Control to assess the requirement for managers to compare actual performance data to planned or expected results. For all performance goals that we identified, we interviewed agency officials about how performance was measured according to established goals and collected data on how performance results were communicated throughout the security assistance community. To assess the extent to which established performance goals have been met, we reviewed and summarized performance data and interviewed officials from DSCA. We reviewed data on military department case development performance in terms of mean case development time and DSCA’s anticipated offer date standards for fiscal years 2010 through 2016. For the information presented in table 1, the performance statistics are based upon the following number of cases:

We met with Navy and Air Force officials about systems for tracking timeliness of case execution. Where our review of case performance data showed that agencies were not meeting established performance goals, we interviewed agency officials and reviewed data on cases whose processing times surpassed established goals to assess what factors affected case duration. We interviewed officials from DSCA, State, and the military departments to build a qualitative understanding of the factors that have historically affected FMS case development and execution times.
To assess DOD’s fiscal year 2009 through 2016 workload and workforce trends, and workforce planning efforts, we obtained fiscal year 2009 through 2016 workload and workforce data from DSCA and the military departments. In referring to the FMS workforce, we mean DOD officials who process FMS cases and whose salaries are paid with funding from the FMS Administrative Surcharge Account, not DOD officials who help process FMS but whose salaries are paid with appropriated funds. We also do not include officials assigned to security cooperation organizations at U.S. embassies throughout the world. The FMS Administrative Surcharge Account is part of the FMS Trust Fund, which is used to collect, among other things, payments from foreign partners for purchases of equipment and services through the FMS system. For DSCA, we asked DSCA officials to provide us with data on the workforce that processes FMS and not the workforce that supports the FMS workforce by providing training and other services. We obtained both authorized and actual FMS workforce data for DSCA and the military departments, as well as some authorized and actual mission critical occupation data. The Army, Air Force, and Navy all use the same process and structure for both FMS and pseudo-FMS cases; for that reason, we collected fiscal year 2009 through 2016 authorized and actual pseudo-FMS workforce data. To address the extent to which DOD’s existing workforce plans address the FMS workforce, we reviewed DOD’s Fiscal Year 2010-2018 and Fiscal Year 2013-2018 strategic workforce plans. In addition, we reviewed DOD’s April 2010 Defense Acquisition Workforce Improvement Strategy, and its Fiscal Year 2016-2021 Acquisition Workforce Strategic Plan to determine the extent to which these plans specifically address the FMS workforce.
We also reviewed various military department strategic plans, including the Air Force’s 2010 to 2018 strategic workforce plan, to examine the extent to which they address the FMS workforce. In addition, we reviewed DSCA’s “Vision 2020” strategic plan. We interviewed appropriate officials from DSCA and the military departments. To assess the actions taken by DOD to address recommendations made since 2009 to improve FMS processing, we conducted searches for and queried relevant DOD officials about audits, studies, or reports making recommendations or suggesting reforms to improve the FMS process since 2009. We identified and reviewed a total of two prior reports by GAO, one report by the State Inspector General, and three reports by the DOD Office of the Inspector General, as well as two reports by DOD’s Security Cooperation Reform Task Force concerning aspects of the Foreign Military Sales program. To determine the extent to which DOD implemented the recommendations, we requested documents providing the details of the implementation, including the number of times the recommendation or reform was implemented, the results, and any analysis of the results. We also interviewed DOD officials about the recommendations and the actions taken to implement them. The ratings we used in this analysis are as follows:

“Not Implemented” means DOD provided no evidence that the recommended actions were taken.

“Partially Implemented” means that DOD provided evidence that some portion of the recommended actions was taken. This includes recommendations for which DOD provided only testimonial evidence that the recommendation had been implemented.

“Implemented” means that DOD provided evidence that the recommended actions were taken, such as changes in policy, the collection and use of data, records of transactions, results of initiatives conducted, or records of reforms implemented.
We discussed the recommendations contained in these reports with appropriate DSCA officials. In addition, we reviewed a memo discussing the status of DSCA’s Security Cooperation Enterprise Solution and met with DSCA officials to discuss the status of the system. We collected the data used in our analyses from a number of DOD systems. The data used to assess the extent to which DOD has met performance goals with respect to the timeliness of the FMS process were collected from DSCA’s Defense Security Assistance Management System (DSAMS), and the Army’s Centralized Integrated System-Integrated Logistics system. The data used to assess workload and workforce trends were collected from DSCA’s DSAMS, and DSCA’s Business Objects Enterprise Reporting System and BeSMART systems, as well as the Air Force’s Security Assistance Manpower Requirements system, and DOD’s Defense Civilian Personnel Data System. To assess the reliability of the performance, workload, and workforce data collected, we reviewed existing information about the data and the systems that produced them. We also interviewed agency officials knowledgeable about the data and the systems that produced the data using a standard set of questions. We found that the data provided to us were generally reliable for purposes of our analysis, but that there were also some limitations in the use of the data. For example, the military departments differ in the definition of a “case line,” which makes it impossible to compare case line workload data by military department. We discuss these limitations, as appropriate, in the main part of the report. We conducted this performance audit from May 2016 through August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The United States provides military equipment and training to partner countries through a variety of programs. Foreign partners may pay the U.S. government to administer the acquisition of materiel and services on their behalf through the FMS program. The United States also provides grants to some foreign partners through the Foreign Military Financing (FMF) program to fund the partner’s purchase of materiel and services through the process used for FMS. In recent years, Congress has expanded the number of security cooperation programs to include several new programs with funds appropriated to the Department of Defense (DOD), as well as administered and implemented by DOD, that focus on building partner capacity. In this report, we refer to these programs as “pseudo-FMS” cases. FMS and pseudo-FMS transactions follow the same process, but the roles, responsibilities, and actors involved can differ. One important difference highlighted by DOD and Department of State (State) officials is that with FMS, there is a much greater level of involvement on the part of the partner country in defining requirements and developing the Letters of Offer and Acceptance (LOA). As a result, the amount of time it takes to develop FMS cases on average will tend to exceed the time it takes for pseudo-FMS cases. According to DOD and State officials, there may also be differences in the types of equipment that tend to be provided via FMS as opposed to pseudo-FMS cases. For example, pseudo-FMS is not typically used to provide complex weapons systems with long production cycles such as advanced fighter aircraft. According to DOD and State officials, pseudo-FMS cases are often prioritized because the funds used for these programs generally are only available for obligation for 1 or 2 years, depending on the program. 
These officials note that funds for traditional FMF programs do not have such time constraints. As a result, pseudo-FMS cases are, on average, processed faster than FMS cases. Army and Air Force officials noted that pseudo-FMS cases tend to be more labor intensive than FMS cases for several reasons. For example, according to Air Force officials, pseudo-FMS cases often involve items that frequently require a new contract because the item is not part of the Air Force inventory. For that reason, Air Force officials noted that they cannot modify an existing contract to add additional items. Army officials said that pseudo-FMS cases require more work because of the nature of expiring funds. This requires an acceleration of almost all their processes. Figure 3 shows the average number of days it took to complete the case development phase, which is measured by the processing time from “Letter of Request Receipt” to “Document sent to purchaser.” Tables 9, 10, and 11 present Army, Air Force, and Navy workload data using six of the seven measures. They show that the FMS and pseudo-FMS workload of each of the services generally increased from fiscal years 2009 through 2016. The Defense Security Cooperation Agency (DSCA) did not provide a breakout of official sales data by military department but did provide these data in the aggregate. These data are shown in table 12. In addition to the contact named above, Hynek Kalkus (Assistant Director), Jeff Phillips (Assistant Director), Claude Adrien (Analyst-in-Charge), Wesley Collins, Lynn Cothern, Jessica Mausner, and Jose M. Pena III made significant contributions to this report. Ashley Alley, Martin de Alteriis, and Mark Dowling provided technical assistance.
|
U.S. national security benefits from the timely provision of military equipment and services that enable foreign partners and allies to build or enhance their security capability. State has overall responsibility for the FMS program, while DOD administers the program through DSCA and implementing agencies in the military departments. Since 2009, DSCA has taken steps to improve the timeliness of the FMS process, but concerns remain that the delivery of FMS equipment is not timely, leaving foreign partners waiting for items needed to achieve security objectives. House and Senate committees requested that GAO assess the FMS process. This report assesses (1) the extent to which DOD has met FMS timeliness goals, (2) FMS workload and workforce trends, and (3) actions DOD has taken to address recommendations to improve the FMS process made by GAO and others. GAO analyzed FMS performance data from 2012 to 2016 and workforce and workload data from the military departments; reviewed relevant DOD regulations and policies for FMS; and interviewed DOD officials. The Department of Defense's (DOD) performance on Foreign Military Sales (FMS) has improved, but DOD is not meeting two out of three performance metrics for the timely processing of FMS requests and does not collect data for the third metric. The first metric tracks the time taken from the receipt of a country's request for an item to when a Letter of Offer and Acceptance (LOA) is sent to the partner country for approval. As shown in the table, this metric is based on the complexity of the requests, and although DOD's timeliness has improved, it is still short of the 85 percent goal. The second missed metric is the time the Defense Security Cooperation Agency (DSCA) takes to review and approve FMS cases. The review time in 2016 was more than the 1-day goal.
The third metric is the time DOD takes to deliver the first item to the recipient country; however, DSCA does not collect data on this metric and therefore does not know if it is meeting the goal. DOD officials cited several factors that adversely affect their ability to meet the timeliness goals, such as changing customer requirements or delays due to policy concerns regarding particular sales. However, because DOD has not collected data on one metric and has not identified the underlying causes for not meeting its goals, it does not know the extent to which these or other factors are impacting program delivery. During fiscal years 2009 through 2016, the FMS workload increased and, while the three military services' FMS workforces generally increased, DSCA's FMS workforce decreased. DSCA officials do not believe the size of their workforce has impacted timeliness, but the data provided to GAO show that DSCA's timeliness has decreased as the size of its FMS workforce has decreased. A key principle of strategic workforce planning is that an agency's workforce must be aligned with its workload. However, DSCA lacks workload measures for its FMS workforce as a whole, and therefore cannot ensure that its workforce is sufficient to meet programmatic goals. Moreover, despite a DOD requirement, DSCA has not yet developed a workforce plan that could help identify any skill or competency gaps in its workforce. Officials said they planned to do so by the end of May 2018. DOD has taken some steps to address recommendations to improve the FMS process, but additional actions are still needed. For example, DOD implemented three of GAO's prior recommendations, such as establishing performance metrics, but has yet to establish a metric to assess timeliness of the acquisition phase. DOD has partially implemented several of the recommendations made by an internal DOD task force.
For example, DOD has partially implemented the recommendations to enhance the skills of the FMS workforce. In addition, DSCA's efforts to standardize data that are maintained separately by the military services on a new information system have fallen behind schedule. GAO recommends that DSCA (1) collect data on delivery of items or services, (2) analyze FMS performance metric data to determine why goals have not been met, (3) develop a DSCA workforce plan, and (4) develop DSCA workload measures. DOD partially concurred with the first two recommendations, concurred with the third, and did not concur with the fourth. GAO continues to believe action is needed as discussed in the report.
|
The U.S.–Japan alliance dates back to the U.S. occupation of Japan after its defeat in World War II. The alliance is supported by the 1960 Treaty of Mutual Cooperation and Security and a related Status of Forces Agreement. As a result of the treaty, the Status of Forces Agreement, and related agreements, U.S. forces are able to use nearly 90 installations throughout mainland Japan and Okinawa for the purpose of contributing to the security of Japan and the maintenance of international peace and security in the region. One issue that remains at the forefront of the alliance is the realignment of U.S. forces in Japan. Efforts to realign U.S. forces in Japan date back to 1995. We have previously reported that discontent among the people of Okinawa regarding the U.S. military presence led to efforts in the 1990s to consolidate, realign, and reduce U.S. facilities and areas and adjust the operational procedures of U.S. forces in Okinawa to reduce the impact on local communities. However, as we had reported, realignment efforts did not make much progress until the end of 2002, when the United States and Japan launched a series of realignment initiatives called the Defense Policy Review Initiative (DPRI). Under DPRI, both countries were seeking to reduce the U.S. footprint in Okinawa, enhance interoperability and communication, and better position U.S. forces to respond to a changing security environment. The major realignment initiatives under DPRI were outlined in the U.S.–Japan Roadmap for Realignment Implementation (2006 Roadmap) and subsequently adjusted, most recently through a joint statement issued in April 2012. There are four initiatives under DPRI that are specific to the realignment of Marine Corps forces in the Pacific:

1. Constructing and moving forces to the Futenma Replacement Facility,

2. Relocating Marine Corps units from Okinawa to Guam, Hawaii, the continental United States, and Australia,

3. Consolidating installations on Okinawa, and

4.
Moving Marines to Iwakuni. As envisioned by the 2006 Roadmap, the U.S. government would return to Japan the Marine Corps Air Station Futenma in Okinawa once the government of Japan constructed a fully operational replacement facility (Futenma Replacement Facility), including a runway, in a northern, less-populated area of the island. This facility was originally projected to be completed by 2014, but delays have slowed its progress. According to the officials, as of June 2016, 9 of 184 projects have been constructed at the planned site of the realignment—Camp Schwab. Figure 1 shows the planned location of the runway at Camp Schwab and how high landfill material must rise to build the runway. After several years of planning to move approximately 8,000 Marines from Okinawa to Guam, DOD revised its plan in April 2012 to, among other things, relocate 4,100 Marines to Guam, 2,700 to Hawaii, and 800 to the continental United States, as shown below in figure 2. Additionally, the plan includes establishing up to a 2,500-person rotational Marine Corps presence in Australia, 1,300 of whom would come from Okinawa—a move that, according to DOD officials, stems from a November 2011 announcement between the United States and Australia. DOD expects relocation to Guam to occur between fiscal years 2022 and 2026. To provide additional training opportunities for Pacific Command’s service components, DOD is planning to construct training ranges on the nearby Commonwealth of the Northern Mariana Islands (CNMI), specifically the islands of Tinian and Pagan. However, no forces are expected to relocate to CNMI. DOD estimates that the total cost to relocate Marines to Guam and for training on CNMI will be $8.7 billion in fiscal year 2012 dollars, with approximately $3.1 billion being provided by Japan. DOD expects relocation to Hawaii to occur between 2027 and 2031.
According to DOD documentation, its baseline rough order-of-magnitude cost estimates for development on Hawaii range from approximately $1.3 billion to $2.5 billion in fiscal year 2012 dollars, although actual costs will vary depending upon the mix of units and the facilities needed. For the relocation to the continental United States, the Marine Corps currently has no plans, time frames, or cost estimates. According to Marine Corps officials, the decision to relocate 800 Marines to the continental United States was made because there was a need to further reduce the Marine Corps presence on Okinawa. Additionally, senior officials at Marine Corps Headquarters and Marine Corps Pacific Command stated there was no strategic need to move the Marines to the continental United States, and they assume that this move may never happen—for example, they said that if the global Marine Corps presence continues to downsize, then perhaps the positions for the 800 Marines slated to move to the continental United States may be eliminated from the global Marine Corps presence. Additionally, in November 2011, the U.S. and Australian governments announced the intent to establish a rotational presence of up to a 2,500-person Marine Air-Ground Task Force in Darwin, Australia—1,300 of which would come from Okinawa, according to DOD. Rotations would occur from approximately April through September or October, during Australia’s dry season. To date, the Marine Corps has held five 6-month rotations, ranging from a 200-Marine infantry company rotation in 2012 to a 1,250-Marine infantry battalion rotation in 2016. The April 2012 statement noted that the United States is committed to returning lands on Okinawa to Japan as designated Marine Corps forces are relocated and as facilities become available for units and other tenant activities relocating to other locations on Okinawa. Figure 3 depicts U.S.
installations on Okinawa and identifies which installations have been designated to be partially or fully returned to Japan or are staying as part of the U.S. presence, according to the April 2012 statement. On the basis of the 2006 Roadmap, the Marine Corps would relocate its tanker aircraft and facilities from Marine Corps Air Station Futenma to Marine Corps Air Station Iwakuni, as well as develop a training capability at Kanoya Air Base. Additionally, a Navy carrier wing currently located at Naval Air Station Atsugi (about 35 miles southwest of Tokyo, Japan) would relocate to Marine Corps Air Station Iwakuni. The relocation to Iwakuni is expected to be completed in 2019, with the Marine Corps tanker aircraft unit having already relocated in 2014. Within DOD, several offices have roles in the relocation of Marines from Okinawa to Guam and Hawaii, the establishment of a rotational Marine presence in Australia, and the realignment of Marines within Okinawa and Iwakuni. These offices are located throughout the United States and Pacific Command's area of responsibility. Figure 4 identifies DOD offices with roles and responsibilities related to the Asia-Pacific relocation, along with their locations. DOD has coordinated its efforts to relocate Marines from Okinawa by developing a high-level synchronization plan that combines the various programs related to relocating Marines from Okinawa and by organizing various working groups to increase coordination among stakeholders. However, DOD officials have not fully resolved selected identified capability deficiencies associated with the planned relocation to Guam and Hawaii and establishment of a rotational presence in Australia. The Marine Corps has coordinated its efforts to relocate Marines from Okinawa by developing a high-level synchronization plan that combines the programs related to relocating Marines in one document.
Headquarters Marine Corps officials described the synchronization plan as an overarching tool for simultaneously scheduling the various relocation initiatives and graphically depicting how these relocations are interconnected and affected by both unit movements and facilities construction. In June 2013, we reported that this synchronization plan was in development, with the goal of establishing the appropriate sequencing of events needed to complete all relocation initiatives. In January 2015, the Marine Corps completed the synchronization plan, which contains information pertaining to the Futenma Replacement Facility, Guam, the Joint Training Range Complex in CNMI, Hawaii, Australia, Okinawa consolidation, and Iwakuni. Subsequently, in June 2016 the Marine Corps updated the synchronization plan to incorporate its latest time frames. Figure 5 shows how major milestones and actions may interface with each other, up to 2030. In addition, DOD has coordinated relocation initiatives through organizing various working groups that bring together representatives from the respective stakeholders involved in the relocation efforts. For example, U.S. Forces–Japan participates in several working groups called Alliance Transformation Ad-Hoc Working Groups and subcommittees that address DPRI. One group works on Okinawa initiatives, which includes all topics related to Okinawa Consolidation and the Futenma Replacement Facility. Another group addresses progress in mainland Japan with Marine Corps Air Station Iwakuni and Kanoya Air Base. Pacific Command officials said they also participate in several working groups such as the Joint Facilities Working Group and the DPRI Planning Group. The officials stated that the Joint Facilities Working Group is led by Pacific Command and consists of the Office of the Secretary of Defense and representatives from each of the services, including Naval Facilities Engineering Command, and their Australian counterparts. 
They added that this group plans facilities and is working on resolving cost estimate differences for Australia. The DPRI Planning Group includes participants from Marine Corps offices including Marine Corps Plans, Policies and Operations; Marine Corps Installations Command; Marine Corps Forces Pacific; Marine Corps Activity Guam; and III Marine Expeditionary Force. The group is responsible for developing and submitting all requirements for the future Marine Corps Base Guam. DOD has not yet fully resolved selected identified capability deficiencies related to the relocation of Marines from Okinawa, which may cause units to be unprepared or not fully prepared for their missions. Specifically, DOD has not fully resolved the operational challenges related to moving Marine units to Guam; limited training facilities in Iwakuni, Hawaii, and CNMI; the runway length at the Futenma Replacement Facility; and challenges for operating in Australia. According to DOD’s Unified Facilities Criteria 2-100-01, in the context of developing installation master plans, mission requirements—which would include the capabilities needed to fulfill the mission—largely determine land and facility support requirements. This DOD guidance states that data on current and proposed mission requirements will be used to establish limitations and conditions that directly affect the installation’s ability to execute mission support. However, DOD began planning facility requirements before resolving selected identified capability deficiencies that can affect the missions of the relocating units, and it has not yet resolved needed capabilities for the Marine Corps units that will be relocated as part of the realignment in the Asia-Pacific region. DOD has not resolved operational challenges associated with the movement of Marine Corps units before beginning to develop facility requirements. 
Officials with III Marine Expeditionary Force stated that they began working on capability planning in January 2013, after being given the facilities plan for Guam. As a result of working on capability planning after facility planning, III Marine Expeditionary Force officials identified several capability concerns regarding the relocation. For example, III Marine Expeditionary Force officials stated they would like the Guam relocation to occur within an 18-month time frame to help ensure that forces move together based on capabilities. According to officials from III Marine Expeditionary Force, it makes more sense to move a maintenance battalion at the same time it moves the units the battalion supports rather than move that battalion based on facility completion dates; otherwise, the supported units would remain in Okinawa for some time without maintenance capability. Marine Corps and Pacific Command officials stated that, based on the capability concerns regarding the relocation expressed by III Marine Expeditionary Force, in the summers of 2015 and 2016 Marine Corps Forces Pacific conducted simulated wartime scenarios to assess these capability concerns. As a result of the simulated wartime scenarios, the Marine Corps and Pacific Command officials stated that some of III Marine Expeditionary Force's concerns were validated and proposed solutions are currently being analyzed. However, the question of how to move forces has not yet been resolved, and the officials said that decisions need to be made about force structure and the positioning of forces to inform facility planning adjustments. According to DOD's Unified Facilities Criteria 2-100-01, mission requirements will be used to largely determine land and facility support requirements. Instead, DOD has focused on facility planning before capability planning.
By considering options to resolve this capability deficiency, such as striking a balance between moving forces together based on capabilities and not leaving facilities vacant, DOD could help ensure that mission requirements are being met and are not hindered during the relocation. DOD has not fully resolved some identified Marine Corps training capability deficiencies in Iwakuni, Hawaii, and CNMI. As a result, it may take additional time, effort, and resources to resolve these deficiencies, and it is uncertain whether the Marine Corps units will be able to complete necessary training in these locations. Iwakuni—DOD has not fully resolved training requirements needed for the Marine Corps units that relocated from Okinawa to Marine Corps Air Station Iwakuni. According to officials from U.S. Forces–Japan, there are no training locations near Iwakuni that are sufficient for relocated Marine Corps units' training needs, resulting in the units returning to Okinawa for training and spending additional money for fuel and equipment maintenance. Kanoya Air Base is currently the only location that is being considered for training, but it is not sufficient for the relocated units' needs because there are training requirements that cannot be satisfied at Kanoya Air Base, according to U.S. Forces–Japan and Marine Corps officials. DOD has formed a working group to consider training in mainland Japan for Iwakuni units, but planning has stalled because DOD has not identified other training areas. Although, according to officials from U.S. Forces–Japan, the government of Japan is generally responsible for building training locations, DOD's identification of other training areas could be presented to the government of Japan to help resolve this issue, in particular given that DOD may ultimately be responsible for sustaining whatever training facility the government of Japan builds.
DOD could also continue to raise the concern about the training deficiency in normal bilateral channels such as the Security Consultative Committee. With respect to training capacity, as indicated by Unified Facilities Criteria 2-100-01, DOD has identified limitations and conditions that affect the Iwakuni installation’s ability to execute mission support. However, it has not identified other training areas that would support mission requirements. Marine Corps officials stated that, as of October 2016, the bilateral arrangement with Japan was modified to allow for alternative training areas other than Kanoya Air Base. However, Marine Corps officials did not provide evidence that any further locations have been identified. In February 2017, officials from U.S. Forces–Japan said that bilateral consensus was reached on an agreement to establish a working group to study other possible locations beyond Kanoya for training. Without identifying training areas for its units based in Iwakuni, DOD risks having spent significant resources in expanding the Marine Corps Air Station Iwakuni while still spending additional time and money sending units back to Okinawa. Hawaii—DOD has not resolved the training needs of the approximately 2,700 additional Marines that are planned to relocate to Hawaii beginning in 2027. The addition of the Marines will likely cause additional strain on already stressed training ranges in Hawaii. As of April 2016, Marine Corps officials have not identified a timeline for when they plan to develop training plans, stating that planning for Hawaii is not yet a priority. However, citing a March 2014 Hawaiian islands training study, Marine Corps officials noted that installations in Hawaii lack sufficient range capabilities to fully support training of units already stationed there. Because the sites are not sufficient, the officials stated that about 90 percent of the Marine Corps training occurs on Army training ranges in Hawaii. 
However, there are capacity issues with those sites because the Marine Corps has to share the space with the Army. According to the March 2014 study, the limited ranges in Hawaii have historically been used at a close-to-capacity level. Furthermore, infrastructure planning takes years to complete in advance of allocating resources for particular needs in a budget. Without infrastructure planning to support mission requirements, as identified in the Unified Facilities Criteria 2-100-01, the Marine Corps risks not having the necessary infrastructure to fulfill its needed capabilities. It is important to resolve this capability deficiency now because these training issues will become exacerbated as additional Marines begin to relocate to Hawaii. CNMI—DOD has not fully resolved the training requirements in the region of CNMI and may have to spend more time and resources to identify other, potentially more costly, locations for training. According to DOD's study on training requirements in CNMI, there are 42 unfilled training requirements throughout Pacific Command's area of responsibility. DOD officials stated that training ranges in CNMI would solve all of the unfulfilled live-fire and unit-level training deficiencies in the Asia-Pacific region. Pacific Command officials described the potential training capabilities in CNMI as a crucial initiative. However, as of the time of our review, the environmental impact statement recommending training ranges in CNMI has not been finalized and is instead being revised. The draft environmental impact statement received 27,000 comments expressing concerns about the plans regarding training facilities in CNMI. Many of these comments expressed concerns about potential impacts on water, wastewater, and public health. In order to address the multitude of comments, the Department of the Navy stated it is conducting a revised study.
While some DOD officials offered hypothetical alternatives for training in CNMI, such as training in foreign countries, they have not yet conducted any specific planning and stated that there are no Pacific-based alternatives to consider on U.S. territories. Rather, DOD officials stated that fulfillment of any of the 42 unfilled training requirements through the training ranges in CNMI would be an improvement, and they could plan for alternatives once they determine if any requirements will remain unfulfilled. Until the training issue is resolved, DOD may have to spend more time and resources to identify other, potentially more costly, locations for training Marines relocated to Guam. DOD has not fully resolved the capability deficiency of the planned runway at Camp Schwab, which will replace the 9,000-foot runway at Marine Corps Air Station Futenma but will be shorter. Marine Corps Air Station Futenma supports operations involving a variety of fixed-wing, rotary-wing, and tilt-rotor aircraft. Marine Corps Air Station Futenma also supports the use of a runway if needed for a United Nations contingency, such as disaster response, for which U.S. Forces–Japan is a key partner. The proposed runway at Camp Schwab will not adequately support these same mission requirements, according to Marine Corps officials. Instead, there will be two 5,900-foot V-shaped runways that, according to Marine Corps officials, will be too short for certain aircraft. As we reported in March 1998, and as remains the case based on our discussions with Marine Corps officials, the loss of Marine Corps Air Station Futenma's runway equates to the loss of an emergency landing strip for fixed-wing aircraft in the area and the loss of the United Nations use of a runway. According to an official from the Office of the Under Secretary of Defense for Policy, the office has not yet developed a plan for other alternate runways in Okinawa because it is not a priority.
Although it does not yet have a plan for other alternate runways in Okinawa, DOD did take an initial step in April 2014 when it sent a letter to the government of Japan seeking approval for bilateral site surveys for locations that could support contingency operations. While a good first step, this letter did not specifically focus on other alternatives in Okinawa—only 1 of the 12 options was located in Okinawa, and some suggested alternatives were located over 1,500 miles away. Moreover, not all of the site surveys have been completed, and Marine Corps and U.S. Forces–Japan officials we spoke with stated that the need remained for alternate runways to be identified. As indicated by Unified Facilities Criteria 2-100-01, DOD has identified limitations and conditions that affect the Camp Schwab installation's ability to execute mission support with respect to the runway. Although Marine Corps and Pacific Command officials said the government of Japan is ultimately responsible for replacing the lost requirements by providing a longer runway elsewhere, DOD could be identifying other runways in Okinawa that would support mission requirements, which it could present to the government of Japan to help resolve this issue. Until the site surveys are completed and an alternate runway is selected, the planned runway at Camp Schwab will lack the needed capabilities, and DOD risks being unable to support needed mission requirements. DOD has not resolved challenges related to the rotation of Marines to Australia, including seasonal changes (i.e., where to operate in the rainy season) and equipment downtime that will likely affect capabilities and increase costs (see fig. 6).
DOD has not resolved where Marine units will be stationed during the rainy season (November to April) because, according to Office of the Under Secretary of Defense for Policy and Marine Corps officials, it is still early in the planning process and those plans are not yet a priority. Flooding during the rainy season is a significant issue in the Darwin area, as seen in figure 7. Presently, some of the rotational force returns to Okinawa, but those Marines will need a new location as the Marine Corps presence on Okinawa is reduced. DOD officials are considering multiple options for the Marines' location during the rainy season, but no decisions have been made, and the options being considered will take years to implement. Without infrastructure planning to support mission requirements, as identified in the Unified Facilities Criteria 2-100-01, the Marine Corps risks not having the necessary infrastructure to fulfill its needed capabilities. By not resolving this capability deficiency now, DOD does not know what the financial or operational consequences of this decision will be, and decision makers in DOD and Congress cannot plan accordingly to help ensure sufficient funding is in place to support the operational and facility requirements of that location. Moreover, DOD has not resolved what to do about the government of Australia's biosecurity requirements that affect equipment downtime. According to officials at Pacific Command, the biosecurity requirements could result in some Marine Corps equipment being nonoperational for approximately 2 months out of the 6-month rotation. DOD documentation discusses Australian biosecurity requirements regarding weeds, pests, and diseases. According to government of Australia and DOD officials, equipment that enters Australia is subject to inspection and cleaning due to the country's biosecurity requirements.
Marine Corps officials stated that, during the approximately 2 months it generally takes to break down, clean, and reassemble the Marine Corps equipment, the equipment is not functional and this hinders capability and training. Officials with the Office of the Under Secretary of Defense for Policy stated that the biosecurity requirements are a risk to the Marine Corps units’ capability. Marine Corps officials stated that leaving a set of equipment in Australia is one option being considered to ease these requirements. However, according to a senior Pacific Command official and officials with III Marine Expeditionary Force, this is an expensive option and also requires a location for the equipment to be stored. Pacific Command and Marine Corps officials stated that the Marine Corps has identified an additional equipment set that could be left in Australia to minimize biosecurity inspection requirements, but challenges remain to fund and source this equipment. Unified Facilities Criteria 2-100-01 identifies that DOD should plan its infrastructure needs to support mission requirements. By not resolving the selected identified capability deficiencies associated with equipment downtime prior to operating in Australia, the Marine Corps risks not having the equipment needed to conduct its mission since, depending on the course of action, it could take years to allocate resources to mitigate this issue. As of December 2016, DOD has not resolved selected identified capability deficiencies in the four areas noted above. According to Office of the Under Secretary of Defense for Policy and Marine Corps officials, some of these deficiencies have not been resolved because it is still early in the planning process. Even though the relocation of Marines from Okinawa to other locations is years away, this does not preclude DOD from taking action to resolve selected capability deficiencies in the identified four areas. 
It is important to resolve these identified capability deficiencies in the near term because it can take many years to plan, allocate resources, and develop facilities. If DOD does not resolve the identified capability deficiencies in these four areas, the Marine Corps may be unable to maintain its capabilities or face much higher costs to do so. DOD has taken steps to develop infrastructure plans and schedules for the proposed locations for the relocation of Marines from Okinawa; however, we found that the Marine Corps’ schedule for Guam did not meet the characteristics of a reliable schedule identified in the GAO Schedule Assessment Guide. With respect to risk planning, the Navy plans to establish an office to address coordination and communication of risks associated with its infrastructure planning in CNMI, but the Marine Corps has not completed risk planning for its construction efforts in Guam, and the Navy has completed limited planning for sustainment of infrastructure in Okinawa. DOD has taken steps to develop infrastructure plans for relocations to Guam, CNMI, Japan, Hawaii, and the rotational presence of Marines in Australia. In Guam, CNMI, and Japan, DOD developed plans that identified alternatives for its infrastructure in each location, such as the development of base configuration and environmental analyses. Moreover, DOD has developed plans for infrastructure requirements that will support the planned relocation to Hawaii and rotational presence of Marines in Australia. DOD has developed plans that outline the base configuration and environmental impacts of the infrastructure that will support the relocation of Marines to Guam. In June 2014, the Navy developed a master plan, which is a plan that outlines the infrastructure configuration, requirements, and construction sequence, for the relocation to Guam. 
In July 2015, the Navy conducted an analysis that outlined the environmental impacts of the relocation to Guam, issuing a final supplemental environmental impact statement that moved the planned location of military family housing in Guam from the Naval Base Guam Telecommunications Site Finegayan, the location identified in the master plan, to Andersen Air Force Base. DOD officials told us they expect this alternative to be cheaper than the initial proposal since DOD will be constructing military family housing using existing utilities. In addition, DOD officials stated that this alternative would reduce the impact on endangered species and thus the need for environmental mitigation and the costs associated with it. In April 2015, the Navy released a draft environmental impact statement to the public that identified its preferred alternative for live-fire training ranges on Tinian and Pagan, two of the islands that make up CNMI. The draft environmental impact statement received more than 27,000 comments from the people and government of CNMI. DOD officials stated that the people and the government of CNMI had expressed concerns over the potential effect on public infrastructure on Tinian and cultural sites on Pagan. According to Navy officials, they have tentative plans to release a revised draft environmental impact statement in November 2017 that takes into account the concerns raised by the people and the government of CNMI, with the final environmental impact statement expected in April 2019. However, DOD officials added that this date could change if DOD determines it needs to conduct additional studies. DOD has taken steps to complete infrastructure plans in Japan, including developing bilateral plans and master plans for the infrastructure related to the Marine realignment.
In April 2013, the United States and the government of Japan released a bilateral plan for the consolidation of infrastructure in Okinawa related to the Marine realignment, which identified the land areas on Okinawa that DOD plans to return to Japan, general time frames for those returns, and the sequence of steps that will need to occur to facilitate those returns. According to Marine Corps officials, they had plans to update the bilateral plan with additional details before the end of 2016, including potential updates to dates for land returns in Okinawa. Officials with the Office of the Under Secretary of Defense for Policy stated that U.S. Forces–Japan began talks with the government of Japan in late 2016 about revising the bilateral plan, but as of January 2017 there was no combined work product or documentation. In preparing for the various Asia-Pacific realignment activities, DOD has also developed master plans that identified its development strategy to meet Okinawa consolidation objectives. DOD has developed some initial infrastructure requirement plans for both Hawaii and Australia. DOD officials told us that they prioritized planning for Guam over planning for Hawaii or Australia, as DOD is using money from the government of Japan for the relocation to Guam. In preparation for future master plans and environmental analyses, DOD has developed some initial infrastructure assessments for the relocation of Marines to Hawaii and for expanded rotations to Australia. In December 2014, the Navy completed a siting plan for Hawaii, which provided an analysis of opportunities for future growth of existing installations and new construction on DOD-owned land in Hawaii that would support a Marine relocation. Marine Corps officials plan to use the Hawaii siting plan as a starting point for the development of future infrastructure plans.
Additionally, DOD has completed two infrastructure studies that identify Marine Corps’ requirements for housing and for aircraft support for an expansion of Marine rotations in Darwin, Australia. Moreover, DOD officials told us that they began developing a master plan for the infrastructure that will support Marine rotations to Australia. The Marine Corps has taken steps to develop integrated master schedules—schedules used for planning, executing, and tracking the status of a program—for the realignment efforts in Japan and relocation to Guam. The Marine Corps is developing master schedules for its realignment activities in Okinawa; hence, we did not evaluate the reliability of these schedules. We also did not assess the reliability of the integrated master schedule for Marine Corps Air Station Iwakuni because most of the construction projects for this base had already begun. In reviewing the Marine Corps’ integrated master schedule for Guam from July 2016, we found that the schedule does not meet all of the characteristics of a reliable schedule—comprehensive, well-constructed, credible, and controlled—identified as best practices in the GAO Schedule Assessment Guide. A reliable schedule allows program management to decide between possible sequences of activities, determine the flexibility of the schedule according to available resources, predict the consequences of managerial action or inaction in events, and allocate contingency plans to mitigate risk. Further, the success of a program depends in part on having an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others. Our analysis found that the Marine Corps’ integrated master schedule is not reliable as it did not substantially or fully meet all four of the GAO Schedule Assessment Guide’s characteristics for a reliable schedule. 
If any of the characteristics are not met, minimally met, or partially met, then the schedule cannot be considered reliable. We found the integrated master schedule substantially met one of the four characteristics for a reliable schedule, partially met two characteristics, and minimally met one characteristic; see table 1, below. According to Marine Corps officials, the integrated master schedule is an enterprise-level summary of resource and duration information from lower-level project schedules. Officials stated that contractors identify resources for construction activities in project schedules that the Marine Corps uses to update the integrated master schedule. However, a lower-level construction schedule examined was not fully resource loaded; in addition, the integrated master schedule includes a majority of activities unrelated to construction efforts, such as information technology and design activities. According to the GAO Schedule Assessment Guide, a schedule should reflect all resources necessary to complete the program to help ensure the program can use the schedule to make important management decisions, such as the reallocation of resources between projects. Because the reliability of an integrated schedule depends in part on the reliability of its subordinate schedules, schedule quality weaknesses—including lack of resource information—in these schedules will transfer to an integrated master schedule derived from them. If the integrated master schedule is unreliable and includes, for example, unjustified date constraints and inaccurate critical paths to key milestones, DOD may not have reliable information on potential sources of delays to support the relocation of Marines to Guam. Further, DOD may not have a reliable schedule to assess progress, identify potential problems, and promote accountability for the relocation to Guam.
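The critical-path and float concepts that underpin these schedule-reliability criteria can be sketched with a minimal critical-path method (CPM) calculation. The activity names, durations, and dependencies below are hypothetical, chosen purely for illustration and not drawn from the Guam integrated master schedule.

```python
# Minimal critical-path method (CPM) sketch: compute each activity's earliest
# and latest start times; activities with zero float form the critical path.
# Activities, durations (in months), and dependencies here are hypothetical.

def critical_path(durations, predecessors):
    """Return (project finish time, set of zero-float activities)."""
    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    remaining = dict(predecessors)
    while remaining:
        for act in list(remaining):
            if all(p in ef for p in remaining[act]):
                es[act] = max((ef[p] for p in remaining[act]), default=0)
                ef[act] = es[act] + durations[act]
                del remaining[act]
    finish = max(ef.values())

    # Backward pass: latest start (ls), computed from each activity's successors.
    succs = {a: [b for b, ps in predecessors.items() if a in ps] for a in durations}
    ls = {}
    remaining = dict(succs)
    while remaining:
        for act in list(remaining):
            if all(s in ls for s in remaining[act]):
                lf = min((ls[s] for s in remaining[act]), default=finish)
                ls[act] = lf - durations[act]
                del remaining[act]

    # Zero float (ls == es) means any delay slips the project finish date.
    return finish, {a for a in durations if ls[a] == es[a]}

durations = {"design": 4, "permits": 6, "construct": 10, "inspect": 2}
predecessors = {
    "design": [],
    "permits": [],
    "construct": ["design", "permits"],
    "inspect": ["construct"],
}
finish, critical = critical_path(durations, predecessors)
# finish is 18; the critical path is permits -> construct -> inspect, while
# "design" carries 2 months of float and can slip slightly without delay.
```

In this framing, an unjustified date constraint amounts to fixing an activity's start or finish by hand rather than deriving it from logic links, which is one way an integrated master schedule can report an inaccurate critical path to a key milestone.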
The Navy has taken steps to conduct risk planning for infrastructure in CNMI by establishing an office to help coordinate and communicate its infrastructure efforts. However, the Marine Corps has not completed risk planning for the construction of infrastructure in Guam through the completion of a risk-management plan, and the Navy has completed limited planning for sustainment of infrastructure in Okinawa in its master plan. Infrastructure risk planning for each location—CNMI, Guam, and Okinawa—is unique and at different stages, thus necessitating different actions and approaches by DOD. In October 2016, the Navy began establishing an office to plan for risks to proposed infrastructure in CNMI, specifically related to plans for live-fire training ranges on the islands of Tinian and Pagan. The Navy, which oversees the environmental analyses that will precede infrastructure construction in CNMI, released a draft environmental impact statement in April 2015 that discussed potential alternatives for the configuration of the live-fire training. However, the Navy is revising that draft environmental impact statement, due to concerns from the people and government of CNMI regarding the effects of the ranges on Tinian and Pagan. According to DOD officials, the concerns include the potential effects on public infrastructure in Tinian and cultural sites on Pagan. In May 2016, the Navy proposed establishing an office located on the island of Saipan in CNMI to facilitate coordination and communication between DOD and the people and government of CNMI, so that it can help address risks related to environmental impact, land acquisition, and cultural sensitivities. In October 2016, Navy officials told us they hired an individual to supervise the office in Saipan and that they have identified a physical office space. 
Further, Navy officials stated that they plan to hire additional staff for the office in Saipan to assist with coordination and communication with the people and government of CNMI. Marine Corps officials have conducted limited risk planning and have not completed a risk-management plan that identifies a strategy to address construction risks that may affect the cost and schedule for infrastructure in Guam. Specifically, DOD has identified risks, including construction labor shortages, explosive-ordnance detection, cultural-artifact discovery and preservation, and endangered-species protection, that can affect the cost or the schedule for each of the various individual projects on the island. DOD manages these risks on a project-by-project basis; however, DOD officials acknowledged that construction risks may become more challenging to address as the Marine Corps begins to manage more ongoing construction projects. As of July 2016, the Marine Corps had four construction projects under way, but it will be initiating significantly more construction projects beginning in fiscal year 2018. Specifically, the Marine Corps identified that it will have 15 active construction projects in fiscal year 2018 and will increase the number of construction projects each year until fiscal year 2021, when it will peak at 43 active construction projects. Further, Marine Corps officials have not completed a risk-management plan that identifies a strategy for collectively addressing construction risks on Guam. A risk-management plan is a document that outlines the service's approach to identifying, analyzing, handling, and monitoring risks across a program. Therefore, while the Marine Corps manages risks on a project-by-project basis, it has not identified a strategy for addressing the collective impact of risks to infrastructure resulting from an increase in construction projects.
The following are examples of construction risks that may affect the relocation of Marines to Guam: Construction labor shortage: DOD officials identified a risk that a construction labor shortage may affect their ability to meet the labor demand necessary for the increase in construction projects. Specifically, the Navy expects that construction contractors will need to supplement their labor workforce with 2,800 foreign laborers to meet the demand for labor during the peak of construction. According to Navy and government of Guam officials, construction contractors on Guam have experienced challenges in getting approvals for H-2B visas to fill skilled labor gaps. According to data from the Guam Department of Labor, U.S. Citizenship and Immigration Services approved approximately 4 percent of H-2B visa applications for Guam between January and September 2016. According to government of Guam officials, this approval percentage is significantly lower than the percentages in fiscal years 2014 and 2015, when U.S. Citizenship and Immigration Services approved over 98 percent of H-2B visa applications for Guam. Navy officials stated that challenges in getting approval for foreign labor applications will in turn affect DOD's ability to meet the construction labor demand for the increase in projects in fiscal year 2018. Explosive-ordnance detection: According to DOD officials, there is a risk of cost overruns or schedule delays related to the process for the detection of explosive ordnance on construction worksites. Navy officials stated that they account for cost and schedule implications related to the detection of explosive ordnance when the Navy solicits bids for projects from contractors; however, DOD officials told us that they frequently discover anomalies, such as tin cans or scrap metal, when screening for explosive ordnance.
In one instance, Navy officials stated that they had to modify the contract for a utilities project, resulting in a $4.9 million cost increase and a 10-month schedule delay, because the contractor detected more anomalies that DOD had to address than the initial contract predicted. In May 2016, the Office of the Chief of Naval Operations issued an exemption to aspects of the Navy's guidance on the detection of explosive ordnance in an attempt to ease standards that had resulted in cost overruns and schedule delays in Guam. Under the exemption, civilian construction labor does not need to evacuate a site during the detection process for explosive ordnance in certain circumstances. DOD officials stated that the exemption reduced some of the cost and schedule risks related to detecting explosive ordnance, but the current process for the detection of explosive ordnance may still affect the cost and schedule for a project. Figure 8 illustrates an example of the detection and removal of explosive ordnance at a utilities project in Guam. Cultural-artifact discovery and preservation: DOD's discovery and preservation of cultural artifacts following the initiation of a project can affect that project's cost and schedule. According to DOD officials, they plan for potential costs and time needed for artifact discovery and preservation in the construction contracts for particular projects, but there may be additional costs or schedule delays after they discover artifacts on construction sites. For example, the Marine Corps plans to build a live-fire training range on the northwest end of Guam that may require the discovery and preservation of artifacts on 21 sites that, according to the Navy, are eligible for listing on the National Register of Historic Places, which may result in additional costs or schedule delays.
Navy officials noted that they have taken steps to streamline the documentation of the artifact discovery and preservation process in preparation for each site, but they expect challenges in meeting cultural-artifact discovery and preservation requirements. Figure 9 shows examples of artifacts discovered during construction at various DOD sites in Guam. Endangered-species protection: According to the Navy, DOD has experienced schedule delays as it has waited for the Fish and Wildlife Service to complete biological opinions that outline protection strategies for endangered species located in construction areas. For example, DOD experienced delays on two construction projects due to the discovery of endangered orchid and butterfly species on site; according to the Navy, these discoveries delayed the awarding of the contracts for both projects. The Marine Corps has not completed its risk-management plan for Guam infrastructure. In October 2015, the Marine Corps began developing its risk-management plan, defining roles and responsibilities for risk planning efforts in Guam. Based on our review of the draft risk-management plan—which has been included in the Guam program management plan—we found that the Marine Corps has not identified a strategy within its risk-management plan to address the four risks identified above for infrastructure in Guam, among other construction risks. Officials from the Marine Corps stated that risk is consistently assessed at multiple levels and managed through biweekly coordination meetings with all stakeholders. However, while risks may be assessed on a project-by-project basis, Marine Corps officials have not completed, in the draft risk-management plan, a strategy to collectively address construction risks on Guam. Officials from Pacific Command expect the identification of specific risks, assessments, and mitigations to be included in a risk assessment tool to be purchased for the Guam program.
DOD guidance notes that risk management is integral to effective program management. Moreover, the guidance indicates that a risk-management plan should be developed early in a program’s formulation and notes that the plan should document an integrated approach for managing risks. Any schedule delays to the construction of infrastructure in Guam may have broader effects on other locations involved in the Asia-Pacific realignment. For example, DOD may need to support infrastructure in Okinawa for a longer period and at additional costs if risks are not planned for adequately. Without a risk-management plan that identifies the Marine Corps’ strategy for addressing risks to the infrastructure buildup in Guam, DOD may not have complete information to address risks to the design and construction of its infrastructure that may result in cost overruns and schedule delays related to the relocation of Marines. DOD has completed limited risk planning for the sustainment of infrastructure in Okinawa by developing a master plan. However, DOD did not identify its short- or long-term sustainment needs for the Marine Corps’ infrastructure in its master plan. Figure 10 shows the infrastructure DOD identified that will require sustainment while it waits on various relocation activities to take place. In June 2013, we found that DOD had not developed master plans that included sustainment plans for the majority of the infrastructure on Okinawa it would need while waiting on other, related Asia-Pacific realignment activities to take place. Therefore, we recommended that DOD update its master plans to include sustainment requirements and costs for its infrastructure on Okinawa, including short-term and long-term sustainment needs to account for uncertainty regarding the time needed to complete realignment activities. In December 2015, the Navy developed a master plan for the Marine Corps infrastructure on Okinawa. 
However, the Navy did not identify in the master plan short- or long-term needs to account for uncertainty regarding the time needed to complete related realignment activities, as we recommended. Because the master plan does not identify short- or long-term sustainment needs, DOD risks not having the information necessary to make informed decisions about maintaining its infrastructure at an acceptable level to carry out its mission. DOD guidance on real property management requires DOD components to develop master plans for installations that outline their annual construction plans for at least a 10-year period and to update the master plan at least every 5 years. Furthermore, the guidance requires that DOD components include a specific, annual listing of major repair and sustainment projects. In addition, Unified Facilities Criteria guidance regarding installation master planning indicates that installation planning and programming staff must capture facility requirements and propose solutions to meet those requirements from the options available. Therefore, we continue to believe that fully implementing our June 2013 recommendation to update Okinawa installation master plans to include short- or long-term sustainment needs is important to aid DOD in obtaining sufficient information to make prudent investment decisions for infrastructure sustainment in Okinawa. DOD has improved its cost estimates for Guam since our June 2013 report by adding a documented technical baseline description and clear documentation of ground rules and assumptions for its military construction cost estimates, and by including life-cycle costs for its nonmilitary construction cost estimates. However, we found that DOD's updated cost estimates only partially met the best practices for a reliable cost estimate.
According to GAO’s Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, a cost estimate is considered reliable if it fully or substantially meets the best practices of all four characteristics: comprehensive, well-documented, accurate, and credible (see fig. 11). In addition, Office of Management and Budget guidance from July 2016 containing best practices states that credible cost estimates are vital for sound management decision making and for any program to succeed. To assess DOD’s cost estimates for infrastructure in Guam, we compared DOD’s cost estimates for both military construction and nonmilitary construction activities to the best practices of the four characteristics of a reliable cost estimate. We assessed each best practice as not met, minimally met, partially met, substantially met, or fully met. We found that the cost estimates for military construction activities in Guam substantially met best practices for the comprehensive, well-documented, and accurate characteristics but minimally met best practices for the credible characteristic. In addition, we found that the cost estimates for nonmilitary construction activities in Guam partially met best practices for the comprehensive and accurate characteristics, and minimally met best practices for the well-documented and credible characteristics. Appendix IV includes our detailed assessment of DOD’s military construction and nonmilitary construction cost estimates for Guam regarding each of the best practices for the four characteristics for reliable cost estimates, including the reasons best practices were not fully met. Table 2 provides a summary of our assessment, for each of the four characteristics, of DOD’s military construction and nonmilitary construction cost estimates for Guam. DOD officials acknowledged that their cost estimates for Guam did not include all best practices for reliable cost estimates.
For example, officials stated that they did not include a unifying Work Breakdown Structure for the estimates for nonmilitary construction because they do not complete a Work Breakdown Structure at the programming stage. However, according to the GAO cost estimating guide, the Work Breakdown Structure should be set up when the program is established and should become successively detailed over time, as it provides a basic framework for estimating costs, determining where risks may occur, and measuring program status. Further, officials stated that they did not resource the level of effort to conduct a risk or sensitivity analysis for the estimates for military and nonmilitary construction because it is not warranted. The GAO cost estimating guide states that a risk analysis and a sensitivity analysis are part of every high-quality cost estimate, as a risk analysis captures the cumulative effect of additional risk and a sensitivity analysis helps mitigate uncertainty by explaining how changes to key assumptions and inputs affect the estimate. In addition, officials stated that an independent cost estimate was performed for the estimates for nonmilitary construction. However, we reviewed DOD’s documentation and found that what they identified as an independent cost estimate was actually a review of a cost summary. The GAO cost estimating guide states that an independent cost estimate should be completed as it provides an independent view of expected program costs that tests the estimate for reasonableness. Without a revision of cost estimates for Guam to include all of the best practices established by GAO’s cost estimating guide, including a Work Breakdown Structure, risk and sensitivity analyses, and an independent cost estimate, decision makers in DOD and Congress will not have reliable cost information to inform their funding decisions regarding infrastructure for the Marine Corps relocation to Guam. 
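The role of a product-oriented Work Breakdown Structure described above can be loosely illustrated as a cost roll-up over a tree of elements. This is a hypothetical sketch: the element names and dollar figures below are placeholders for illustration, not values from DOD's estimates.

```python
# Hypothetical, minimal WBS sketch: each element either carries its own
# estimated cost or rolls up the costs of its child elements, giving the
# program a framework to track cost by defined deliverables.

def wbs_cost(element):
    """Total cost of a WBS element = its own cost plus all children's costs."""
    return element.get("cost", 0) + sum(wbs_cost(child)
                                        for child in element.get("children", []))

# Illustrative element names and figures only (in millions of dollars).
program = {
    "name": "1.0 Guam relocation (illustrative)",
    "children": [
        {"name": "1.1 Utilities project", "cost": 4.9},
        {"name": "1.2 Training range",
         "children": [{"name": "1.2.1 Site preparation", "cost": 2.0},
                      {"name": "1.2.2 Construction", "cost": 7.5}]},
    ],
}
print(wbs_cost(program))  # → 14.4
```

Because every lower-level element rolls up into the program total, a structure like this is what lets an estimator trace where cost and risk sit in the program, which is the benefit the GAO guide attributes to establishing the WBS early.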
DOD partially met the comprehensive characteristic for a reliable cost estimate for its planned infrastructure for Hawaii and Australia by documenting ground rules and assumptions associated with the military construction costs. However, DOD did not include other best practices established by the GAO cost estimating guide for the comprehensive characteristic, such as having all life-cycle costs or a Work Breakdown Structure in its cost estimates. Since the efforts for Hawaii and Australia are still early in the planning process, we did not evaluate the DOD cost estimates for infrastructure in Hawaii and Australia against the best practices for the other three characteristics of a reliable cost estimate. Table 3 provides a summary of our assessment of DOD’s cost estimates. Appendix V includes our detailed assessment of DOD’s cost estimates for Hawaii and Australia, including the reasons that DOD’s cost estimates partially met GAO’s comprehensive characteristic. According to the GAO cost estimating guide, in order for a cost estimate to be considered comprehensive, it should include government and contractor costs over the full life cycle of the program and the estimate should be based on a product-oriented Work Breakdown Structure that allows a program to track cost and schedule by defined deliverables, among other best practices. In addition, DOD guidance on economic analysis for decision making indicates that, as part of assessing the costs and benefits of alternatives, an economic analysis should include comprehensive estimates of the expected costs and benefits that are incident to achieving the stated objectives of the project. 
DOD officials acknowledged that their cost estimates for Hawaii and Australia did not include all best practices, such as a life-cycle cost estimate and a Work Breakdown Structure, for the comprehensive characteristic because the planning for Hawaii and Australia is still in the early stages and the cost estimates will become more detailed as the planning progresses. A DOD official stated that DOD does not plan to develop a life-cycle cost estimate for Hawaii until at least fiscal year 2018 because DOD is focused on completing the Marine relocation to Guam before beginning detailed planning for Hawaii. Based on best practices in the GAO cost estimating guide, the life-cycle cost estimate for the relocation to Hawaii should be examined and understood early in the planning process regardless of other projects, as a life-cycle cost estimate enhances early decision making and enables planning studies to be evaluated on a total-cost basis. According to the GAO cost estimating guide, a life-cycle cost estimate can support budgetary decisions, key decision points, and investment decisions. Without fully accounting for life-cycle costs, management will have difficulty successfully planning program resource requirements and making informed decisions. In addition, the GAO cost estimating guide states that the Work Breakdown Structure should initially be set up when the program is established and should become successively detailed over time, as it provides a basic framework for estimating costs, determining where risks may occur, and measuring program status. Without a Work Breakdown Structure, the program lacks a framework to develop a schedule and cost plan that can easily track resources spent and completion of activities and tasks. 
Without a revision of cost estimates for Hawaii and Australia to include all of the best practices established by GAO’s cost estimating guide for the comprehensive characteristic, decision makers in DOD and Congress will not have reliable cost information to inform their funding decisions regarding infrastructure for Hawaii and Australia and to help them determine the viability of the relocation of Marines to Hawaii and the establishment of a rotational presence in Australia. The ability of DOD to coordinate its multiple relocation efforts and maintain the operational capabilities of its forces is important to the success of the U.S. presence in the Asia-Pacific region. DOD has developed a high-level synchronization plan and organized working groups that coordinate the relocation of Marines from Okinawa, but DOD has not fully resolved selected identified capability deficiencies associated with the relocation of Marines. If DOD officials do not resolve the selected identified capability deficiencies, they may be challenged in maintaining operational capabilities and could face higher costs in order to do so. It is important to resolve these selected identified capability deficiencies in the near term because it can take many years to plan, allocate resources, and develop facilities. DOD has taken steps to develop its infrastructure plans for the relocation of Marines from Okinawa, such as the development of plans that identified alternatives for its infrastructure in Guam, CNMI, and Japan and the initial infrastructure plans for Hawaii and Australia. However, the Marine Corps’ infrastructure schedule for Guam does not meet GAO’s best practices for a reliable schedule. Without a reliable integrated master schedule, DOD may not have reasonable assurance of the reliability of information on current progress as well as potential sources of delays for the design and construction of infrastructure to support the relocation of Marines to Guam.
Furthermore, DOD does not have a reliable schedule to assess progress and identify potential problems for the relocation to Guam. In addition, the Marine Corps has not completed its risk-management plan for Guam that documents its strategy for how it will address known construction risks, among other risks that may be present. Without a risk-management plan that identifies the Marine Corps’ strategy for addressing risks to the infrastructure buildup in Guam, DOD will not have the information necessary to address risks for its infrastructure design and construction that will likely result in cost overruns and schedule delays related to the relocation. Moreover, DOD has taken steps to implement our June 2013 recommendation to update Okinawa installation master plans, but it has not identified short- or long-term sustainment needs for facilities in Okinawa. By fully implementing our June 2013 recommendation to include short- or long-term sustainment needs, DOD would be better positioned to mitigate infrastructure sustainment risks in Okinawa and could better ensure that facilities are adequate to carry out its mission until related realignment activities are completed. DOD would also limit its risk of experiencing cost overruns resulting from having to sustain facilities longer than expected because of delays or uncertainties related to other Asia-Pacific relocation activities that officials project will need to occur before consolidating infrastructure. DOD has made overall progress in developing its cost estimates for Guam since June 2013, but its estimates only partially met best practices for reliable cost estimates for infrastructure in Guam, Hawaii, and Australia. Specifically, the cost estimates for Guam do not include a unifying Work Breakdown Structure, risk and sensitivity analyses, and an independent cost estimate. The cost estimates for Hawaii and Australia do not include a life-cycle cost estimate or a Work Breakdown Structure.
Without a revision of current cost estimates for Guam, Hawaii, and Australia to fully address all of the best practices established by GAO’s cost estimating guide, decision makers in DOD and Congress will not have reliable cost information to inform their funding decisions and to help them determine the viability of these options for the relocation and the establishment of a rotational presence. We recommend that the Secretary of Defense take the following nine actions. To improve the Department of Defense’s ability to maintain its capability in the Asia-Pacific region, we recommend that the Secretary of Defense direct the appropriate entities to resolve selected identified capability deficiencies associated with the relocation in four areas: the movement of Marine Corps units by, for example, reconsidering when units should move to Guam to minimize leaving facilities vacant; training needs in Iwakuni, Hawaii, and CNMI by, for example, identifying other suitable training areas; reduction in runway length at the Futenma Replacement Facility by, for example, selecting other runways that would support mission requirements; and challenges in Australia regarding seasonal changes and biosecurity requirements that affect equipment downtime by, for example, deciding on a location for the wet season and identifying a solution for biosecurity requirements. To provide DOD with reliable information on potential sources of delays for the design and construction of infrastructure in Guam, we recommend that the Secretary of Defense direct the appropriate entities to update the Marine Corps’ integrated master schedule for Guam so that it meets the comprehensive, well-constructed, and credible characteristics for a reliable schedule. For example, the update to the schedule should include resources for nonconstruction activities. 
To provide DOD and Congress with sufficient information to mitigate risks for infrastructure construction and sustainment, we recommend that the Secretary of Defense direct the appropriate entities to complete a risk-management plan for Guam that includes, at a minimum, plans to address (1) construction labor shortages, (2) explosive-ordnance detection, (3) cultural-artifact discovery and preservation, and (4) protection of endangered species. To provide DOD and Congress with more-reliable information to inform funding decisions associated with the relocation of Marines to Guam, we recommend that the Secretary of Defense direct the appropriate entities to revise the cost estimates for Guam to address all best practices established by GAO’s cost estimating guide. Specifically, the revisions to the cost estimates should include a unifying Work Breakdown Structure, risk and sensitivity analyses, and an independent cost estimate. To provide DOD and Congress with more-reliable information to inform funding decisions associated with the relocation of Marines to Hawaii and the establishment of a rotational presence in Australia, we recommend that the Secretary of Defense direct the appropriate entities to (1) revise the DOD cost estimates for Hawaii to address all best practices for the comprehensive characteristic established by the GAO cost estimating guide, specifically to capture entire life-cycle costs and develop a Work Breakdown Structure, and (2) revise the DOD cost estimates for Australia to address all best practices for the comprehensive characteristic established by the GAO cost estimating guide, specifically to capture entire life-cycle costs and develop a Work Breakdown Structure. We provided a draft of this report for review and comment to DOD and the Department of State. In written comments, DOD concurred with two recommendations, partially concurred with six recommendations, and nonconcurred with one recommendation.
After receiving a draft of the sensitive report in December 2016, DOD provided additional information and documentation in January and February 2017 based on new developments in the bilateral negotiations between the governments of the United States and Australia, actions taken by DOD during our review in response to our draft report, and roles and responsibilities in the Asia-Pacific region. As a result of our review of the documentation provided and discussions with officials, we revised some of our findings to reflect this additional information, and we revised the wording of some of our recommendations. Specifically, in discussions in January 2017, DOD officials raised concerns about the stakeholders to whom we directed our recommendations, noting that multiple stakeholders have roles in the relocation. We agree there are multiple stakeholders and modified some recommendations to allow the Secretary of Defense to direct the appropriate entities to implement the recommendations, rather than identify the specific stakeholders. Additionally, we removed one finding and its related recommendation regarding challenges reaching an agreement between the United States and Australia relating to the mission of the Marine Corps units in Australia, given new documentation provided by DOD and updates in the bilateral negotiations. DOD’s comments on this report are summarized below and reprinted in their entirety in appendix VI. In an e-mail, the audit liaison from the Department of State indicated that the department did not have formal comments. DOD and the Department of State also both provided technical comments, which we incorporated as appropriate.
DOD partially concurred with our first four recommendations that the Secretary of Defense direct the appropriate entities to resolve selected identified capability deficiencies associated with the movement of Marine Corps units; training needs in Iwakuni, Hawaii, and CNMI; reduction in runway length at the Futenma Replacement Facility; and challenges in Australia regarding seasonal changes and biosecurity requirements. In its letter, DOD stated that the Marine Corps has already addressed, where applicable, the selected identified capability deficiencies. We disagree that the Marine Corps has addressed these capability deficiencies, given the ongoing concerns noted in our report. Moreover, in January 2017, both the Marine Corps and Pacific Command provided additional documents to us stating that the four selected identified capability deficiencies were not yet resolved, and we address the specific points related to each recommendation in the following paragraphs. With regard to our first recommendation that the Secretary of Defense direct the appropriate entities to resolve selected identified capability deficiencies associated with the movement of Marine Corps units, DOD stated that the Marine Corps’ plans for the movement of units from Okinawa to Guam have considered many factors, including, among others, the capabilities required to support Pacific Command and the logistical requirements associated with the movement of forces. In its response, DOD stated that it disagrees with our assessment that adequate planning has not been done with regard to minimizing the operational downtime of III Marine Expeditionary Force during the movement to Guam. Rather, DOD stated that both the Marine Corps and Pacific Command have done extensive planning and analysis to determine how best to posture, move, and support forces from III Marine Expeditionary Force.
In its response, DOD further noted Pacific Command’s explanation that the existing plan cannot be considered fixed and final because of the requirement to adapt to changing conditions. DOD also noted that those conditions do not materially impact the infrastructure required. DOD added that the pace at which this movement is executed will continue to take into account the rate at which the required infrastructure is developed. Moreover, DOD’s response stated that the Marine Corps is already working to ensure that its plan is continually refined to balance fiscal and construction realities with operational risk, capability requirements, and readiness. Although DOD has taken initial steps to consider how to move Marine Corps units from Okinawa to Guam, we continue to believe it has not yet fully resolved this capability deficiency. We agree that DOD has taken some steps to analyze capability deficiencies regarding the movement of Marine Corps units, and we stated in our report that Marine Corps Forces Pacific conducted simulated wartime scenarios to assess the capability concerns that had been expressed by III Marine Expeditionary Force. However, as we also stated in our report, DOD has not completed its analysis or reached any decisions on how to move the forces. Further, as we stated, DOD anticipates that it will soon rapidly increase the number of construction projects in Guam, from 4 projects as of July 2016 to 15 projects in fiscal year 2018. Those projects, which are already in the planning and development stage, will be affected if DOD has not made decisions on the movement of forces. Further, any changes could result in costly adjustments to the construction if decisions are made too late or could result in vacant facilities if the movement of units needs to be adjusted.
DOD has not provided us evidence that, if plans are adapted to changing conditions, the effect on infrastructure will be minimal; in contrast, we have historically found that infrastructure changes can be costly to the department. Moreover, in January 2017, Marine Corps and Pacific Command officials continued to express concerns that decisions with regard to force structure and positioning of forces will ultimately affect facility planning adjustments. As a result, until DOD resolves how to move units from Okinawa to Guam, it risks hindering its mission requirements during the relocation. With regard to our second recommendation that the Secretary of Defense direct the appropriate entities to resolve selected identified capability deficiencies associated with training needs in Iwakuni, Hawaii, and CNMI, DOD stated that it has already conducted an extensive analysis of training needs. Specifically concerning training requirements for CNMI, DOD stated that Pacific Command identified 42 combatant command–level training deficiencies to be fulfilled through the development of training ranges in Pacific Command’s area of responsibility. DOD added that, due to the complexity and scale of these training deficiencies, CNMI emerged as the only viable location on U.S. territory to address these deficiencies. DOD further stated it disagrees that a study to reexamine these and other potential training locations in the event that DOD is not able to meet all of its identified training requirements in CNMI is warranted or worthwhile years prior to the development of new training ranges in CNMI. With respect to the department’s assertion that DOD has already conducted an extensive analysis of training needs for the Marine Corps and the joint force in Iwakuni, Hawaii, and CNMI, we disagree. The assertion is contrary to evidence provided to us in documents and discussions we held with DOD officials. In particular, in February 2017, officials from U.S.
Forces–Japan said that a bilateral agreement was reached to establish a working group to study other possible locations beyond Kanoya Air Base for training, thus indicating that identification of other training locations near Iwakuni has not yet been resolved. With respect to Hawaii, in April 2016, Marine Corps officials told us they had not identified a timeline for when they plan to develop training plans, and in January 2017, Marine Corps officials added that there is significant work to be done to fully determine training requirements and conduct planning to meet those requirements. With respect to CNMI, in January 2017, both Pacific Command and the Marine Corps stated that DOD has not fully resolved the challenges associated with training areas. As noted in our report, the department received more than 27,000 comments in response to the draft environmental impact statement, and, to address the multitude of comments, the Department of the Navy stated it is developing a revised draft environmental impact statement. However, the Marine Corps synchronization matrix, as of June 2016, still showed construction scheduled to begin in Tinian as soon as 2017. We continue to believe that DOD should take actions to resolve capability deficiencies associated with training needs in Iwakuni, Hawaii, and CNMI; otherwise, it may take additional time, effort, and resources to resolve these deficiencies and it is uncertain whether the Marine Corps units will be able to complete necessary training in these locations. With regard to our third recommendation that the Secretary of Defense direct the appropriate entities to resolve selected identified capability deficiencies associated with the reduction in runway length at the Futenma Replacement Facility, DOD stated that it disagreed that the length of the runway planned at the Futenma Replacement Facility is a capability deficiency for the Marine Corps. 
DOD stated that, at the time of its agreement with Japan, it understood that the Futenma Replacement Facility would not possess a long runway and that the Marine Corps drove the final requirements to support the capabilities required for their missions at the Futenma Replacement Facility. While we agree that the shorter runway is not a deficiency for the Marine Corps, it is a deficiency that is ultimately connected with infrastructure plans for the Marine Corps in the context of relocation—specifically, infrastructure plans associated with Marine Corps relocation from Marine Corps Air Station Futenma. As such, we directed our recommendation to the Secretary of Defense to direct the appropriate entities for whom the shorter runway is a deficiency. As we wrote in our report, the shorter runway equates to the loss of an emergency landing strip for fixed-wing aircraft in the area and the loss of the United Nations’ use of a runway. These capability deficiencies affect the Air Force and U.S. Forces–Japan and have not yet been resolved. Additionally, as we stated in our report, senior officials from U.S. Forces–Japan said that, given the large Japanese investment in the Futenma Replacement Facility, the United States may face pressure from the government of Japan to return Marine Corps Air Station Futenma even if the replacement runway deficiency is not resolved. If this return were to occur without a replacement runway identified, DOD mission capabilities could be hindered. Until this deficiency is resolved, DOD may be unable to maintain all mission capabilities or face higher costs to do so. 
With regard to our fourth recommendation that the Secretary of Defense direct the appropriate entities to resolve selected identified capability deficiencies associated with challenges in Australia regarding seasonal changes and biosecurity requirements that affect equipment downtime, DOD stated that these factors are not capability deficiencies but rather real-world constraints around which DOD and Australia are working to develop the most bilaterally beneficial annual program possible. DOD also stated that the Marine Corps continues to coordinate closely with the Australian Department of Agriculture, Fisheries, and Forestry to develop best practices to train Marines as assistant inspectors to minimize the cost, in time and money, to conduct biosecurity inspections. We agree that the department likely understood these issues when it first began planning for the rotational presence in Australia, but knowing about these issues does not negate the fact that DOD has not yet determined how it plans to resolve them. These issues remain relevant to the Marine Corps, as it will need to determine where to place up to 2,500 Marines when some units can no longer return to Okinawa and how to reduce readiness risks when its equipment is unusable due to biosecurity screening requirements. As we noted in our report, DOD officials are considering multiple options for the wet season, but no decisions have been made, and Marine Corps officials have identified constraints for each option being considered. Moreover, as stated in our report, in January 2017 Pacific Command and Marine Corps officials stated that challenges remain to fund and source a dedicated equipment set. Initial force flow has already begun, and the cost-sharing arrangement between the governments of the United States and Australia was signed in January 2017, which will likely allow for construction decisions to be made in the near term. 
DOD has the opportunity now—before force flow increases and DOD spends additional effort and resources—to make prudent decisions to avoid needing to make costly corrections later. As a result, we continue to believe that DOD should take actions to resolve these challenges in Australia in order to help ensure that its plans are fully developed and resources are identified so that DOD and Congress can make prudent and informed funding decisions to resolve these challenges. DOD concurred with our fifth recommendation that the Secretary of Defense direct the appropriate entities to update the Marine Corps’ integrated master schedule for Guam so that it meets the comprehensive, well-constructed, and credible characteristics for a reliable schedule. In its response, DOD stated that, in September 2016, it began updating its integrated master schedule based on our review to conform to the GAO Schedule Assessment Guide and plans to adopt the best practices of assigning resources and establishing activity durations to ensure the schedule is comprehensive. Also, DOD plans to continue to work to verify that the schedule can be traced horizontally and vertically and conduct a schedule risk analysis. If fully implemented, we believe that DOD’s proposed actions will better provide DOD with reliable information on potential sources of delays for the design and construction of infrastructure in Guam. DOD concurred with our sixth recommendation that the Secretary of Defense direct the appropriate entities to complete a risk-management plan for Guam, and include, at a minimum, plans to address: (1) construction labor shortages, (2) explosive-ordnance detection, (3) cultural-artifact discovery and preservation, and (4) protection of endangered species. In its response, DOD cited actions it has previously taken and plans to mitigate risks for infrastructure construction and sustainment, such as coordinating with the U.S. 
Citizenship and Immigration Services to address foreign-worker visas, approving an explosive-safety exemption for construction projects in Guam and CNMI, and developing a monitoring and mitigation tracking plan to ensure Navy compliance and execution of environmental requirements. These past and planned actions, as well as DOD’s concurrence with our recommendation, should better address risks to the design and construction of its infrastructure and, in turn, reduce the potential for cost overruns and schedule delays. DOD nonconcurred with our seventh recommendation that the Secretary of Defense direct the appropriate entities to revise the cost estimates for Guam to address all best practices established by GAO’s cost estimating guide. In its response, DOD stated that the department does not accept the assertion that GAO’s best practices are universally applicable to a wide range of activities that includes military construction, acquisition, or basing. DOD stated that the Guam program was developed and communicated to Congress consistent with statute and the department’s long-standing supporting policies. Specifically, DOD noted that DOD Financial Management Regulation, Volume 2B, Chapter 6, requires inclusion of a form for each project submitted with the budget request, containing certain information. According to DOD, per this guidance, a contractor develops a detailed Work Breakdown Structure when the construction contract is awarded, which is much later in the project execution timeline than our best practices anticipate. DOD further stated that it is unrealistic for DOD to develop detailed Work Breakdown Structures for over 100 independent construction projects prior to any construction project getting under way. Moreover, DOD stated that it provides sufficient information to support military construction decisions, and in cases where Congress desires additional information on a particular project, it routinely requests and receives that information. 
We continue to believe that our cost estimating guide provides a consistent methodology that is based on best practices and that can be used across the federal government—including DOD—for developing, managing, and evaluating capital program cost estimates. Moreover, as noted in our report, there is no Work Breakdown Structure to tie the cost estimates and schedule together. A Work Breakdown Structure is the cornerstone of every program because it defines in detail the work necessary to accomplish a program’s objectives, and it provides a consistent framework for planning and assigning responsibility for the work. Further, we do not state that DOD should develop detailed Work Breakdown Structures for over 100 independent construction projects. Rather, we state that DOD should have a unifying Work Breakdown Structure to align the Guam Rainbow Chart—DOD’s program-management tool that summarizes detailed program inputs—to the schedule or the cost estimate. Per GAO’s cost estimating guide, a Work Breakdown Structure should be initially set up when the program is established and becomes successively detailed over time as more information becomes known about the program. In its response, DOD did not dispute our findings and related recommendation that the revisions to the Guam cost estimates should include risk and sensitivity analyses and an independent cost estimate; we believe these revisions remain relevant as well. We continue to believe that, without a revision of cost estimates for Guam to include the best practices established by GAO’s cost estimating guide, decision makers in DOD and Congress will not have reliable cost information to inform their funding decisions regarding infrastructure for the Marine Corps relocation to Guam. 
Finally, DOD partially concurred with our eighth and ninth recommendations that the Secretary of Defense direct the appropriate entities to revise the DOD cost estimates for Hawaii and Australia to address all best practices for the comprehensive characteristic established by the GAO cost estimating guide, specifically to capture entire life-cycle costs and develop a Work Breakdown Structure. In its response, the department agreed that good cost estimating practices are prudent for good decision making but did not agree that it should expend effort to update its cost estimates for the Hawaii and Australia programs due to reasons of timing, in the case of Hawaii, and international agreements, in the case of Australia. Specifically, DOD stated that, for Hawaii, high-level cost estimates are sufficient at this early planning stage and a detailed Work Breakdown Structure is not needed. Moreover, in its response, DOD stated that it disagrees with what constitutes the program life cycle. DOD stated it believes that the program is complete when forces move and occupy the new facilities. Regarding Australia cost estimates, DOD stated in its response that the costs borne by DOD under this program will be subject to international agreement rather than the GAO cost estimating guide. Per GAO’s Cost Estimating and Assessment Guide, we are not recommending a Work Breakdown Structure for specific construction projects, but rather a Work Breakdown Structure that combines all of the different projects involved in the overall program. We continue to believe that DOD should develop a Work Breakdown Structure that lays out the costs at a high level so that DOD can easily see and track accomplishments. Then, as the program continues, DOD can add detail to those areas of the Work Breakdown Structure when they are further defined. Additionally, life-cycle costing enhances decision making, especially in early planning and concept formulation of acquisition. 
While DOD notes that it incorporates best practices for minimizing facility maintenance and sustainment costs into its construction costs, a full life-cycle cost estimate is important in budgetary decisions, key decision points, milestone reviews, and investment decisions. Without considering operations and support throughout the entire life cycle, DOD is not accounting for all of the costs the facilities will incur over time. With regard to Australia’s cost estimate, costs could still be identified in a Work Breakdown Structure and then later assigned to either the United States or Australia. We continue to believe that revising cost estimates for Hawaii and Australia to include all of the best practices established by GAO’s cost estimating guide for the comprehensive characteristic will better enable decision makers in DOD and Congress to make informed funding decisions and determine the viability of the relocation of Marines to Hawaii and the establishment of a rotational presence in Australia. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Department of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. 
The objectives of our review were to examine the extent to which the Department of Defense (DOD) has (1) coordinated its efforts and resolved selected identified capability deficiencies related to the relocation of Marines from Okinawa, (2) developed infrastructure plans and schedules for its relocation efforts and completed risk planning for its infrastructure that will support the relocation, and (3) developed reliable cost estimates for infrastructure for the relocation to Guam and Hawaii and for the rotational presence in Australia. This report is a public version of a sensitive report that we are issuing concurrently. DOD deemed some of the information in the sensitive report as For Official Use Only, which must be protected from public disclosure. Therefore, this report omits For Official Use Only information and data on some of the Navy and Marine Corps plans and programs associated with the realignment effort, deployment and allies’ considerations, and estimates of future actions and political concerns associated with Marine Corps forward stationing. Although the information provided in this report is more limited in scope, it addresses the same objectives as the sensitive report. Also, the methodology used for both reports is the same. For all objectives, we scoped our review to actions taken since GAO last reviewed Marine Corps realignment initiatives in the Asia-Pacific region in June 2013. We reviewed relevant policies and procedures, and collected information by interviewing and communicating with officials from the Office of the Under Secretary of Defense (Policy), the Office of the Under Secretary of Defense (Comptroller), the Air Force, the Army, the Navy, the Marine Corps, and the State Department. We also conducted site visits in the following areas: Hawaii, where we met with Pacific Command and its service components; Japan, where we met with U.S. 
Forces–Japan and the services, Marine Corps Installation Command Pacific, III Marine Expeditionary Force, the U.S. Embassy in Tokyo, and the U.S. Consulate on Okinawa, and observed infrastructure conditions in Okinawa and Iwakuni; and Guam, where we met with DOD and government of Guam officials, and observed infrastructure conditions and the buildup of Marine Corps Base Guam. Additionally, we interviewed DOD officials and officials from the U.S. Embassy in Australia. We also met with DOD’s construction agents, specifically the U.S. Army Corps of Engineers and the Naval Facilities Engineering Command. To determine the extent to which DOD has coordinated efforts and resolved selected identified capability deficiencies related to the relocation of Marines from Okinawa, we reviewed DOD documentation and interviewed knowledgeable officials. Specifically, we reviewed documentation such as the Marine Corps’ Asia-Pacific Realignment Synchronization Matrix; capability documents such as bilateral agreements between the United States and Japan or Australia as well as training requirement documentation; and other documentation including program management plans for the various locations supporting the relocation. We reviewed capability deficiencies that were identified by DOD through interviews. We compared DOD’s decision-making process for plans to resolve the identified capabilities to DOD Unified Facilities Criteria regarding identifying mission needs to determine land and facility support requirements. To determine the extent that DOD has developed plans and schedules for its relocation efforts and completed risk planning for its infrastructure, we reviewed DOD guidance related to the development of installation plans, integrated master schedules, and risk planning. We identified current infrastructure plans and integrated master schedules. 
Specifically, we assessed the Guam integrated master schedule to determine whether this schedule reflects best practices needed to implement a program as well as the extent to which projects and activities were properly sequenced. GAO schedule specialists reviewed the Guam schedule and compared it with best practices in GAO’s Schedule Assessment Guide to determine the extent to which it reflects 10 key schedule estimating practices that are fundamental to having a reliable schedule. These practices address whether the schedule (1) captured all activities, (2) sequenced all activities, (3) assigned resources to all activities, (4) established the duration of all activities, (5) can be traced horizontally and vertically, (6) established a valid critical path, (7) identified reasonable total float between activities, (8) identified a level of confidence using a schedule risk analysis, (9) was updated using progress and logic to determine dates, and (10) maintained a baseline schedule. To do so, we independently assessed the program’s integrated master schedule compared to these 10 best practices and determined an assessment rating for each best practice. Then we determined an overall assessment rating for the 4 characteristics of a reliable schedule based on averages of the 10 best practices. When the program office made updates to the integrated master schedule, we conducted our review again to reflect those updates. We also received two detailed construction project schedules and assessed them for resource assignments. In addition, we interviewed cognizant program officials to discuss their use of best practices in creating the program’s current schedule to better understand how the schedule was constructed and maintained. Moreover, we reviewed documentation and conducted interviews with DOD officials to determine any identified risks to the schedule and actions DOD has taken to address those risks. 
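The rating roll-up just described (score each of the 10 best practices, then derive a characteristic-level rating) can be sketched in a few lines. The 1-to-5 numeric scale, the simple averaging, and the exact grouping of practices under characteristics below are illustrative assumptions rather than the precise scoring method GAO applied:

```python
# Illustrative roll-up of schedule best-practice ratings into
# characteristic-level ratings. The 1-5 numeric mapping and plain
# averaging are simplifying assumptions, not GAO's exact formula.

RATING_SCALE = {
    "not met": 1,
    "minimally met": 2,
    "partially met": 3,
    "substantially met": 4,
    "fully met": 5,
}
SCALE_NAMES = {v: k for k, v in RATING_SCALE.items()}

# Assumed grouping of the 10 best practices under the 4
# characteristics of a reliable schedule.
CHARACTERISTICS = {
    "comprehensive": ["captured all activities", "assigned resources",
                      "established durations"],
    "well-constructed": ["sequenced all activities", "valid critical path",
                         "reasonable total float"],
    "credible": ["traced horizontally and vertically",
                 "schedule risk analysis"],
    "controlled": ["updated using progress and logic",
                   "maintained a baseline"],
}

def characteristic_rating(practice_ratings):
    """Average the practice scores and map back to the nearest category."""
    scores = [RATING_SCALE[r] for r in practice_ratings]
    return SCALE_NAMES[round(sum(scores) / len(scores))]

# Hypothetical ratings for the three practices of one characteristic:
print(characteristic_rating(
    ["partially met", "substantially met", "partially met"]))
```

The hypothetical input above averages to 3.33, which rounds to the "partially met" category.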
We compared DOD’s risk-planning efforts outlined in that documentation to DOD guidance on addressing risk, such as guidance that identifies the characteristics needed in a risk-management plan and guidance on how DOD plans for infrastructure sustainment in base master plans. To determine the extent to which DOD has developed reliable cost estimates for infrastructure for the relocation to Guam and Hawaii and for the rotational presence in Australia, we reviewed DOD’s cost estimates and analyses and interviewed DOD and Department of State officials about costs and funding sources related to infrastructure in locations considered for relocation. GAO cost estimation specialists compared those estimates and analyses to the best practices included in GAO’s Cost Estimating and Assessment Guide. We also reviewed the Office of Management and Budget’s Capital Programming Guide, and DOD’s guidance on Economic Analysis for Decision-making, which support our best practices for developing reliable cost estimates. Specifically, GAO’s Cost Estimating and Assessment Guide identifies best practices that represent work across the federal government and are the basis for a high-quality, reliable cost estimate. A cost estimate created using best practices exhibits four broad characteristics: accurate, well-documented, credible, and comprehensive. In assessing program cost estimates for Guam, GAO cost estimation specialists evaluated the Marine Corps program office estimating methodologies, assumptions, and results to determine whether the official cost estimates were comprehensive, accurate, well-documented, and credible. As the basis of our assessment, we used our GAO Cost Estimating and Assessment Guide on estimating program schedules and costs, which was developed based on extensive research of cost estimating best practices. 
Our Cost Estimating and Assessment Guide considers an estimate to be accurate if it is not overly conservative, is based on an assessment of the most likely costs, and is adjusted properly for inflation; comprehensive if its level of detail ensures that all pertinent costs are included and no costs are double-counted or omitted; well-documented if the estimate can be easily repeated or updated and can be traced to original sources through auditing; and credible if the estimate has been cross-checked with an independent cost estimate and a level of uncertainty associated with the estimate has been identified and quantified. We also interviewed the Marine Corps program office’s cost estimating team to obtain a detailed understanding of the cost models provided, and met with Marine Corps headquarters, Marine Corps Forces Pacific Command, and Naval Facilities Engineering Command Pacific to understand their methodology, data, and approach in developing their independent cost estimate (if applicable). In doing so, we interviewed cognizant program officials, including the Program Manager and cost analysis team, regarding their respective roles, responsibilities, and actual efforts in developing and reviewing the cost estimate. In assessing program cost estimates for Hawaii and Australia, GAO cost estimation specialists conducted a limited assessment focused on the comprehensive characteristic because the estimates developed and provided by DOD are early in the program life cycle (e.g., they are Rough Order of Magnitude estimates), and as such the information is immature and inadequate to support a full analysis. Therefore, we chose to review only the comprehensive characteristic because, according to GAO’s Cost Estimating and Assessment Guide, if the cost estimate is not comprehensive then it cannot fully meet the well-documented, accurate, or credible best practice characteristics. 
For instance, if the cost estimate is missing some cost elements, then the documentation will be incomplete, the estimate will be inaccurate, and the result will not be credible due to the potential underestimating of costs and the lack of a full risk and uncertainty analysis. We conducted this performance audit from January 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Defense Policy Review Initiative (DPRI) is a bilateral force-posture realignment program between the U.S. and Japanese governments. Led by the U.S.-Japan Security Consultative Committee, DPRI consists of a package of 19 interrelated and interdependent initiatives for Japan and Guam with touch points to other areas in the U.S. Pacific Command area of responsibility, such as Tinian in the Commonwealth of the Northern Mariana Islands. According to the Department of Defense (DOD), implementation of 17 of the 19 DPRI initiatives is managed by subcommittees, panels, and working groups established and operating under the auspices of the U.S.-Japan Joint Committee. The other two initiatives—the Guam Master Plan and Missile Defense initiative—are managed on the U.S. side by the Joint Guam Program Office and the Missile Defense Agency, respectively. See table 4 for a list of the 19 initiatives and a short summary describing each effort. This appendix summarizes our assessment of the integrated master schedule for Guam compared to GAO’s Schedule Assessment Guide. We found the integrated master schedule is not reliable because it did not meet the characteristics of a reliable schedule identified in the guide. 
Specifically, there are 10 best practices associated with a reliable schedule that are summarized in 4 characteristics: comprehensive, well-constructed, credible, and controlled. For this analysis, we had five assessment categories: not met (provided no evidence that satisfies any of the criterion), minimally met (provided evidence that satisfies a small portion of the criterion), partially met (provided evidence that satisfies about half of the criterion), substantially met (provided evidence that satisfies a large portion of the criterion), and fully met (provided complete evidence that satisfies the entire criterion). We determined an assessment rating for each of the 10 best practices, and then determined an overall assessment rating for each characteristic based on the ratings for the best practices within each characteristic in table 5. A schedule is considered reliable if the overall assessment ratings for each of the four characteristics are substantially or fully met. GAO shared this analysis with Department of Defense officials. Table 5 only includes the reasons why best practices were not met, minimally met, or partially met. For this analysis, GAO cost estimating specialists assessed the realignment cost estimates for military construction and nonmilitary construction in Guam against the best practices for each of the four characteristics—comprehensive, well-documented, accurate, and credible—for reliable cost estimates, and also provided an overall assessment for each characteristic. 
This analysis has five assessment categories for the best practices and the characteristics: not met (provided no evidence that satisfies any of the criterion), minimally met (provided evidence that satisfies a small portion of the criterion), partially met (provided evidence that satisfies about half of the criterion), substantially met (provided evidence that satisfies a large portion of the criterion), and fully met (provided complete evidence that satisfies the entire criterion). A cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are fully or substantially met. Tables 6 and 7 include our detailed assessment of the Department of Defense’s (DOD) military construction and nonmilitary construction cost estimates for Guam, respectively, regarding each of the best practices for the four characteristics for reliable cost estimates. GAO shared this analysis with DOD officials. Tables 6 and 7 only include the reasons why best practices were not met, minimally met, or partially met. For this analysis, GAO cost estimation specialists assessed the realignment cost estimates for Hawaii and Australia against the best practices of the comprehensive characteristic for reliable cost estimates and provided an overall assessment for the characteristic. This analysis has five assessment categories for the characteristic: not met (provided no evidence that satisfies any of the criterion), minimally met (provided evidence that satisfies a small portion of the criterion), partially met (provided evidence that satisfies about half of the criterion), substantially met (provided evidence that satisfies a large portion of the criterion), and fully met (provided complete evidence that satisfies the entire criterion). A cost estimate is considered comprehensive if the assessment rating is fully or substantially met. 
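The decision rule stated in this appendix, that an estimate or schedule is considered reliable only when every characteristic is rated substantially or fully met, reduces to a simple check. A minimal sketch, using hypothetical example ratings:

```python
# Sketch of the reliability decision rule described in this appendix:
# an estimate (or schedule) is considered reliable only when every
# characteristic is rated "substantially met" or "fully met".

RELIABLE_RATINGS = {"substantially met", "fully met"}

def is_reliable(characteristic_ratings):
    """characteristic_ratings maps characteristic name -> rating string."""
    return all(rating in RELIABLE_RATINGS
               for rating in characteristic_ratings.values())

# Hypothetical characteristic-level ratings for one cost estimate:
example = {
    "comprehensive": "partially met",
    "well-documented": "substantially met",
    "accurate": "partially met",
    "credible": "minimally met",
}
print(is_reliable(example))  # False: not every characteristic is
                             # substantially or fully met
```

A single characteristic below "substantially met" is enough to make the whole estimate unreliable under this rule, which is why the report treats a "partially met" comprehensive characteristic as disqualifying on its own.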
Table 8, below, includes our detailed assessment of the Department of Defense’s (DOD) cost estimates for Hawaii and Australia, including the reasons that DOD’s cost estimates partially met GAO’s comprehensive characteristic. In addition to the contact named above, Laura Durland (Assistant Director), Emily Biskup, Scott Bruckner, Juana Collymore, Jennifer Echard, Jason Lee, Jennifer Leotta, Amie Lesser, Carol Petersen, Richard Powelson, Karen Richey, Jodie Sandel, Nancy Santucci, Michael Shaughnessy, Amber Sinclair, and Erik Wilkins-McKee made key contributions to this report. Defense Management: Further Analysis Needed to Identify Guam’s Public Infrastructure Requirements and Costs for DOD’s Realignment Plan. GAO-14-82. Washington, D.C.: December 17, 2013. Defense Management: More Reliable Cost Estimates and Further Planning Needed to Inform the Marine Corps Realignment Initiatives in the Pacific. GAO-13-360. Washington, D.C.: June 11, 2013. Force Structure: Improved Cost Information and Analysis Needed to Guide Overseas Military Posture Decisions. GAO-12-711. Washington, D.C.: June 6, 2012. Military Buildup on Guam: Costs and Challenges in Meeting Construction Timelines. GAO-11-459R. Washington, D.C.: June 27, 2011. Defense Management: Comprehensive Cost Information and Analysis of Alternatives Needed to Assess Military Posture in Asia. GAO-11-316. Washington, D.C.: May 25, 2011. Defense Infrastructure: The Navy Needs Better Documentation to Support Its Proposed Military Treatment Facilities on Guam. GAO-11-206. Washington, D.C.: April 5, 2011. Defense Infrastructure: Guam Needs Timely Information from DOD to Meet Challenges in Planning and Financing Off-Base Projects and Programs to Support a Larger Military Presence. GAO-10-90R. Washington, D.C.: November 13, 2009. Defense Infrastructure: DOD Needs to Provide Updated Labor Requirements to Help Guam Adequately Develop Its Labor Force for the Military Buildup. GAO-10-72. Washington, D.C.: October 14, 2009. 
Defense Infrastructure: Planning Challenges Could Increase Risks for DOD in Providing Utility Services When Needed to Support the Military Buildup on Guam. GAO-09-653. Washington, D.C.: June 30, 2009. Defense Infrastructure: High-Level Leadership Needed to Help Guam Address Challenges Caused by DOD-Related Growth. GAO-09-500R. Washington, D.C.: April 9, 2009. Defense Infrastructure: Opportunity to Improve the Timeliness of Future Overseas Planning Reports and Factors Affecting the Master Planning Effort for the Military Buildup on Guam. GAO-08-1005. Washington, D.C.: September 17, 2008. Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008. Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008. Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007.
For two decades, DOD has planned to realign its presence in the Asia-Pacific region. The Marine Corps has plans to consolidate bases in Okinawa, relocating 4,100 Marines to Guam, 2,700 to Hawaii, and 800 to the continental United States, and establishing a rotational presence of 1,300 in Australia. The Joint Explanatory Statement accompanying the Consolidated Appropriations Act, 2016, included a provision that GAO study the realignment initiatives in the Asia-Pacific region. This report assesses the extent to which DOD has (1) coordinated its efforts and resolved selected identified capability deficiencies related to the relocation of Marines, (2) developed infrastructure plans and schedules and completed risk planning for its infrastructure that will support the relocation, and (3) developed reliable cost estimates for infrastructure for the relocation of Marines to Guam and Hawaii and the rotational presence in Australia. GAO reviewed relevant policies and plans; analyzed cost documents; interviewed DOD officials; and visited U.S. military installations in the Asia-Pacific region. The Department of Defense (DOD) has coordinated the relocation of Marines from Okinawa to other locations in the Asia-Pacific region through developing a synchronization plan and organizing working groups. However, DOD has not resolved selected identified capability deficiencies related to the relocation of Marine units; training needs in the region; the reduction in runway length at the Futenma Replacement Facility in Okinawa; and challenges for operating in Australia. DOD guidance indicates that mission requirements—which would include the capabilities needed to fulfill the mission—largely determine land and facility support requirements. If DOD does not resolve the selected identified capability deficiencies in its infrastructure plans, DOD may be unable to maintain its capabilities or face much higher costs to do so. 
DOD has taken steps to develop infrastructure plans and schedules for its relocation efforts, but it did not develop a reliable schedule for the Marine relocation to Guam and has not completed its risk planning for infrastructure in Guam. DOD developed plans that will support construction efforts in Guam and Japan, and developed some initial infrastructure plans for Hawaii and Australia. However, GAO found the Marine Corps' integrated master schedule for Guam did not fully meet the comprehensive, well-constructed, and credible characteristics of a reliable schedule. For example, the schedule does not include resources needed for nonconstruction activities, such as information technology and design activities. Additionally, the Marine Corps has not completed its risk-management plan for infrastructure construction in Guam. Specifically, the Marine Corps has not identified its strategy to address construction risks, including labor shortages and endangered-species protection. If DOD does not have a reliable schedule or has not completed risk planning for Guam, it may not have complete information to identify and address risks that may result in cost overruns and schedule delays. DOD has made progress in developing cost estimates for Guam, but its estimates only partially met GAO best practices for reliable cost estimates for the relocations to Guam and Hawaii and the establishment of a rotational presence in Australia. For cost estimates related to Guam military construction activities, DOD included ground rules and assumptions, but did not include some elements of a reliable cost estimate, such as a risk analysis. Additionally, DOD developed cost estimates for nonmilitary construction activities that provide a high-level planning overview of the requirements, but they did not incorporate several other best practices, including a unifying Work Breakdown Structure that defines in detail the work necessary to accomplish a program's objectives. 
For Hawaii and Australia, the cost estimates are not considered reliable because they did not include all life-cycle costs or a Work Breakdown Structure. If DOD does not revise the cost estimates for these locations, decision makers in DOD and Congress will not have reliable cost information to inform funding decisions and to help them determine the viability of relocation of Marines to Hawaii and the establishment of a rotational presence in Australia. GAO recommends that DOD resolve capability deficiencies in the four selected identified areas, update its schedule for Guam infrastructure, complete a risk-management plan for Guam infrastructure, and revise its three cost estimates. DOD concurred with two recommendations, partially concurred with six, and did not concur with one. GAO continues to believe its recommendations are valid, as discussed in this report.
The nation’s economy and security are heavily dependent on oil, natural gas, and other energy commodities. Nearly half of the nation’s oil is transported from overseas by tankers. Specifically, about 49 percent of the nation’s crude oil supply—one of the main sources of gasoline, jet fuel, heating oil, and many other petroleum products—was transported by tanker into the United States in 2009. The remaining oil and natural gas used in the United States comes from Canada by pipeline or is produced from domestic sources in areas such as offshore facilities in the Gulf of Mexico. With regard to these domestic sources, the area of federal jurisdiction—called the Outer Continental Shelf (OCS)—contains an estimated 85 billion barrels of oil, more than all onshore resources and those in shallower state waters combined. In addition, the Louisiana Offshore Oil Port (LOOP), a deepwater port, is responsible for transporting about 10 percent of imported oil into the United States. As the lead federal agency for maritime security, the Coast Guard seeks to mitigate many kinds of security challenges in the maritime environment. Doing so is a key part of its overall security mission and a starting point for identifying security gaps and taking actions to address them. Carrying out these responsibilities is a difficult and challenging task because energy tankers often depart from foreign ports and are registered in countries other than the United States, which means the United States has limited authority to oversee the security of such vessels until they enter U.S. waters. Offshore energy infrastructure also presents its own set of security challenges because some of this infrastructure is located many miles from shore. The FBI shares responsibility with the Coast Guard for preventing and responding to terrorist incidents in the maritime environment, including incidents involving energy tankers. Energy tankers face risks from various types of attack. 
In our 2007 report, we identified three primary types of attack methods against energy tankers: suicide attacks; armed assaults by terrorists or armed bands; and "standoff" attacks using a missile, rocket, or other weapon fired from a distance. In recent years, we have issued reports that discussed risks energy tankers face from terrorist attacks and attacks from other criminals, such as pirates. Terrorists have attempted—and in some cases carried out—attacks on energy tankers since September 11, 2001. To date, these attacks have included attempts to damage tankers or their related infrastructure at overseas ports. For example, in 2002, terrorists conducted a suicide boat attack against the French supertanker Limburg off the coast of Yemen, and in 2010, an incident involving another supertanker, the M/V M. Star, in the Strait of Hormuz is suspected to have been a terrorist attack. Our work on energy tankers identified three main places in which tankers may be at risk of an attack: (1) at foreign ports; (2) in transit, especially at narrow channels, or chokepoints; and (3) at U.S. ports. For example, foreign ports, where commodities are loaded onto tankers, may vary in their levels of security, and the Coast Guard is limited in the degree to which it can bring about improvements abroad when security is substandard, in part because its activities are limited by conditions set by host nations. In addition, while tankers are in transit, they face risks because they travel on direct routes that are known in advance and, for part of their journey, they may have to travel through waters that do not allow them to maneuver away from possible attacks. According to the Energy Information Administration, chokepoints along a route make tankers susceptible to attacks. Further, tankers remain at risk upon arrival in the United States because of the inherent risks to port facilities. 
For example, port facilities are generally accessible by land and sea and are sprawling installations often close to population centers. Beyond the relatively rare threat of terrorist attacks against tankers, the threat of piracy has become relatively common. In particular, piracy threatens tankers transiting one of the world’s busiest shipping lanes near key energy corridors and the route through the Suez Canal. The vast areas at risk for piracy off the Horn of Africa, combined with the small number of military ships available for patrolling them, make protecting energy tankers difficult. According to the International Maritime Bureau, 30 percent (490 of 1,650) of vessels reporting pirate attacks worldwide from 2006 through 2010 were identified as tankers. See table 1 for a summary of tankers attacked by pirates during 2006 through 2010. As shown in the table, pirate attacks against tankers have tripled in the last 5 years, and the incidence of piracy against tankers continues to rise. From January through June 2011, 100 tankers were attacked, an increase of 37 percent compared to tankers attacked from January through June 2010. Figure 1 shows one of the recent suspected pirate attacks. In addition, tankers are fetching increasing ransom demands from Somali pirates. Media reports indicate a steady increase in ransoms for tankers, from $3 million in January 2009 for the Saudi tanker Sirius Star, to $9.5 million in November 2010 for the South Korean tanker Samho Dream, to $12 million in June 2011 for the Kuwaiti tanker MV Zirku. The U.S. Maritime Administration and the Coast Guard have issued guidance for commercial vessels to stay 200 miles away from the Somali coast. However, pirates have adapted and increased their capability to attack and hijack vessels to more than 1,000 miles from Somalia using mother ships, from which they launch smaller boats to conduct the attacks. 
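As a quick arithmetic cross-check of the piracy figures above, the tanker share of reported attacks and the implied January–June 2010 baseline can be computed directly. This is an illustrative sketch: the attack counts are the International Maritime Bureau figures cited in this testimony, while the 2010 baseline is back-calculated from the reported 37 percent increase, not an IMB-reported number.

```python
# Figures cited in this testimony (International Maritime Bureau data).
tanker_attacks_2006_2010 = 490   # tankers among vessels reporting attacks
all_attacks_2006_2010 = 1650     # all vessels reporting attacks worldwide

# "30 percent (490 of 1,650) of vessels ... were identified as tankers"
tanker_share = tanker_attacks_2006_2010 / all_attacks_2006_2010
assert round(tanker_share * 100) == 30

# Jan-Jun 2011: 100 tankers attacked, a 37 percent increase over the same
# period in 2010, implying roughly 100 / 1.37 = 73 attacks in early 2010.
implied_jan_jun_2010 = 100 / 1.37

print(f"Tanker share of attacks, 2006-2010: {tanker_share:.1%}")
print(f"Implied Jan-Jun 2010 tanker attacks: {implied_jan_jun_2010:.0f}")
```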
To address the growing concern over piracy, the Coast Guard has issued a directive with guidelines for U.S. vessels operating in high-risk waters. This directive provides vessel owners and operators with direction for responding to emerging security risks. Offshore energy infrastructure also faces risks from various types of attacks. For example, in 2004, terrorists attacked an offshore oil terminal in Iraq using speedboats packed with explosives, killing two U.S. Navy sailors and a U.S. Coast Guardsman. Potential attack methods against offshore energy infrastructure identified by the Coast Guard or owners and operators include crashing an aircraft into it; using a submarine vessel, diver, or other means of attacking it underwater; ramming it with a vessel; and sabotage by an employee. Offshore energy infrastructure may face security risks because this infrastructure is located in open waters and generally many miles away from Coast Guard assets and personnel. In addition to our work on energy tankers, we have recently completed work involving Coast Guard efforts to assess security risks and ensure the security of offshore energy infrastructure. Specifically, our work focused on two main types of offshore energy infrastructure that the Coast Guard oversees for security. The first type are facilities that operate on the OCS and are generally described as facilities temporarily or permanently attached to the subsoil or seabed of the OCS that engage in exploration, development, or production of oil, natural gas, or mineral resources. As of September 2010, there were about 3,900 such facilities, and if a facility of this type meets or exceeds any one of three thresholds for production or personnel, it is subject to 33 C.F.R. part 106 security requirements. In this testimony, we focus on the 50 facilities that, in 2011, are regulated for security because they meet or exceed the threshold criteria. We refer to these security-regulated facilities as OCS facilities. 
The second type of offshore energy infrastructure is deepwater ports, which are fixed or floating manmade structures used or intended for use as a port or terminal for the transportation, storage, or handling of oil or natural gas to any state, including oil or natural gas transported from the United States’ OCS. There are currently four licensed deepwater ports—two in the Gulf of Mexico and two in Massachusetts Bay. Unlike OCS facilities, which are involved in the production of oil or natural gas, deepwater ports enable tankers to offload oil or liquefied natural gas for transport to land by underwater pipelines. In 2007, we assessed Coast Guard and FBI efforts to ensure the security of energy tankers and respond to terrorist incidents involving energy tankers. We found that actions were being taken, internationally and domestically, to protect tankers and port facilities at which tankers would be present. For example, the Coast Guard visits foreign exporting ports to assess the effectiveness of the anti-terrorism measures in place. Additionally, port stakeholders in the United States have taken steps to address vulnerabilities at domestic ports. For example, the Houston Ship Channel Security District is a public-private partnership that was established to increase preparedness and response capabilities with the goal of improving security and safety for facilities, employees, and communities surrounding the Houston Ship Channel. The security district has installed technology, such as night vision and motion-activated detection equipment, and conducts patrols on land and in the water. However, we also reported on challenges that remained in (1) making federal agencies’ protective actions more effective and (2) implementing plans for a response to an attack, if a terrorist attack were to succeed despite the protective measures in place. 
We made five recommendations in our 2007 report, three of which were directed to the Secretary of Homeland Security and two of which were directed jointly to the Secretary of Homeland Security and the Attorney General. The departments concurred or partially concurred with all of the recommendations. The Coast Guard and the FBI have made progress in implementing these recommendations—two have been implemented, and the Coast Guard is in the process of implementing a third—but actions have not yet been taken to address the remaining two recommendations. See table 2 for a summary of our findings, recommendations, and the current status of agency efforts to implement our recommendations. Regarding our recommendation that the Coast Guard and the FBI coordinate to help ensure that a detailed operational plan be developed that integrates the different spill and terrorism sections of the National Response Framework, DHS is in the process of revising this document and did not have further information regarding whether or how the spill and terrorism response annexes may be revised. Further, the FBI has not taken independent action to implement this recommendation, in part because it did not concur with the need to develop a separate operational plan. In the event of a successful attack on an energy tanker, ports would need to provide an effective, integrated response to (1) protect public safety and the environment, (2) conduct an investigation, and (3) restore shipping operations in a timely manner. Consequently, clearly defined and understood roles and responsibilities for all essential stakeholders are needed to ensure an effective response, and operational plans for the response should be explicitly linked. 
Regarding our recommendation that DHS develop performance measures for emergency response capabilities, DHS has begun to revise its grant programs, but it is too early in that process to determine whether and how performance measures will be incorporated into those revisions. Performance measures would allow DHS to set priorities for funding on the basis of reducing overall risk, thereby helping ports obtain resources necessary to respond. We continue to believe that the recommendations not yet addressed have merit and should be fully implemented. In accordance with federal statutes and presidential directives, the Coast Guard assesses security risks as part of its responsibilities for ensuring the security of OCS facilities and deepwater ports. In doing so, the Coast Guard, among other things, uses a tool called the Maritime Security Risk Analysis Model (MSRAM). Coast Guard units throughout the country use this tool to assess security risks to about 28,000 key infrastructure targets in and around the nation’s ports and waterways. For example, MSRAM examines security risks to national monuments, bridges, and oil and gas terminals. The Coast Guard’s efforts to assess security risks to OCS facilities and deepwater ports are part of a broader effort by DHS to protect critical infrastructure and key resources. To further guide this effort, in 2009 DHS issued an updated version of the 2006 National Infrastructure Protection Plan, which describes the department’s strategic approach to infrastructure protection. The plan placed an increased emphasis on risk management and centered attention on going beyond assessments of individual assets by extending the scope of risk assessments to systems or networks. For example, while the 2006 plan focused on assessing the vulnerability of facilities, the 2009 plan discussed efforts to conduct systemwide vulnerability assessments. 
The Coast Guard has taken a number of actions in assessing security risks to OCS facilities and deepwater ports. The Coast Guard has used MSRAM to, among other things, examine security risks to OCS facilities and deepwater ports by assessing three main factors—threats, vulnerabilities, and consequences. First, Coast Guard analysts use MSRAM to assess security risks against such energy infrastructure by examining potential scenarios terrorists may use to attack OCS facilities or deepwater ports. For example, MSRAM assesses attack scenarios, such as an attack by a hijacked vessel, a small boat attack, sabotage, or an attack by a swimmer or diver. Second, the analysts use MSRAM to evaluate vulnerabilities of OCS facilities and deepwater ports by examining the probability of a successful attack by assessing factors such as the ability of key stakeholders, including the owner, operator, or law enforcement, to interdict an attack and the ability of a target to withstand an attack. Third, the analysts use MSRAM to evaluate potential consequences of an attack, such as deaths or injuries and economic and environmental impacts. MSRAM’s output produces a risk index number for each maritime target—such as an OCS facility or deepwater port—that allows Coast Guard officials at the local, regional, and national levels to compare and rank critical infrastructure for the purpose of informing security decisions. According to Coast Guard officials, based on MSRAM’s output, which is a relative risk ranking, OCS facilities are not considered to be high-risk targets. To inform analysts’ inputs into MSRAM, the Coast Guard has coordinated efforts with the intelligence community and key stakeholders. For example, the Coast Guard’s Intelligence Coordination Center inputs threat assessment data into MSRAM. 
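The three-factor assessment described above can be sketched in code. This is an illustrative sketch only: MSRAM's actual scoring methodology is not public, so the multiplicative threat x vulnerability x consequence formulation, the 0-to-1 scales, and all scenario names and values below are assumptions chosen to show how a relative risk ranking of attack scenarios could be produced.

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-factor risk index. MSRAM's real model
# is not public; this assumes the common formulation
# risk = threat x vulnerability x consequence, each scored 0-1.

@dataclass
class Scenario:
    name: str             # attack scenario against a given facility
    threat: float         # likelihood of an attack attempt (0-1)
    vulnerability: float  # probability the attack succeeds (0-1)
    consequence: float    # normalized consequence score (0-1)

    def risk_index(self) -> float:
        # Multiplicative combination: all three factors must be
        # nonnegligible for a scenario to rank as high risk.
        return self.threat * self.vulnerability * self.consequence

# Illustrative values only, loosely based on the scenario types named
# in this testimony (hijacked vessel, small boat attack, swimmer/diver).
scenarios = [
    Scenario("hijacked vessel", 0.2, 0.5, 0.9),
    Scenario("small boat attack", 0.4, 0.3, 0.6),
    Scenario("swimmer/diver", 0.1, 0.2, 0.4),
]

# Rank scenarios by relative risk, analogous to how MSRAM's risk index
# numbers let officials compare and rank targets.
for s in sorted(scenarios, key=lambda s: s.risk_index(), reverse=True):
    print(f"{s.name}: {s.risk_index():.3f}")
```

Sorting by the computed index mirrors how MSRAM's output allows local, regional, and national officials to compare targets; the actual model weighs many more inputs, such as interdiction capability and target hardness.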
Coast Guard analysts also use information from other stakeholders, such as reports produced by the Department of the Interior’s Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE), which contain oil and gas production data, to inform their evaluations of vulnerabilities and consequences. Based on the assessments of threats, vulnerabilities, and consequences, MSRAM produces a risk index number for each OCS facility and deepwater port. The Coast Guard has also taken actions to supplement MSRAM by, among other things, (1) including new data fields on the frequency with which tankers visit a port and (2) adding additional threat scenarios, such as a threat involving a cyber attack, to its data set. While MSRAM has been applied to deepwater ports, Coast Guard officials have also used an independent risk assessment to assess security risks as part of the application process for recently constructed deepwater ports. For example, in December 2006, as part of the application process for a proposed deepwater port in the Massachusetts Bay, the Coast Guard, the owner and operator, and other stakeholders collectively identified and assessed threat scenarios as well as the potential consequences and vulnerabilities of each scenario. Based on this assessment, stakeholders identified and agreed to carry out security measures to mitigate the risks, such as installing camera systems and increasing radar coverage. The Coast Guard faces complex and technical challenges in assessing security risks. The Coast Guard recognizes these challenges and generally has actions underway to study or address them. Coast Guard officials noted that some of these challenges are not unique to the Coast Guard’s risk assessment model and that these challenges are faced by others in the homeland security community involved in conducting risk assessments. Specific challenges are detailed below. 
Vulnerability-related data: The Coast Guard does not have data on the ability of an OCS facility to withstand an attack, which is defined in MSRAM as target hardness. The Coast Guard recognizes that target hardness is an important consideration in assessing the vulnerability of OCS facilities. However, MSRAM analysts described challenges in assessing target hardness because empirical data are not available or research has not been conducted to do so. For example, research on whether a hijacked boat or an underwater attack could sink an offshore oil or natural gas platform would give the Coast Guard and owners and operators a clearer sense of whether this attack scenario could result in major consequences. Coast Guard officials and corporate security officers with whom we spoke indicated that such research would advance knowledge about the vulnerabilities of OCS facilities and deepwater ports. Gaining a better understanding of target hardness of these and other threat scenarios could improve the quality of the output from MSRAM. According to Coast Guard’s MSRAM Program Manager, the Coast Guard may recommend conducting more research on the vulnerability to and consequences of attack scenarios as a result of a study it is currently conducting on OCS facilities in the Gulf of Mexico. The Coast Guard initiated this study in the fall of 2010 after the Deepwater Horizon incident. The study initially reviewed the “lessons learned” from Deepwater Horizon and how those lessons could be used to improve MSRAM. During the course of our review, Coast Guard officials stated that the scope of the study has been expanded to include OCS facilities and that the Coast Guard expects to issue its report in the fall of 2011. Consequences-related data: The input for secondary economic impacts can have a substantial effect on how MSRAM’s output ranks a facility relative to other potential targets. 
Undervaluing secondary economic impacts could result in a lower relative risk ranking that underestimates the security risk to a facility, or inversely, overvaluing secondary economic impacts could result in overestimating the security risk to a facility. However, the Coast Guard has limited data for assessing secondary economic impacts from an attack on OCS facilities or deepwater ports. Coast Guard analysts stated that gathering these data is a challenge because there are few models or guidance available for doing so. During the course of our review, the Coast Guard started using a tool, called “IMPLAN,” that helps inform judgments of secondary economic impacts by showing what the impact could be for different terrorist scenarios. The tool, however, has limits in that it should not be used where the consequences of a terrorist attack are mainly interruption to land or water transportation. Enhancing DHS’s and the Coast Guard’s ability to assess secondary economic impacts could improve a MSRAM analyst’s accuracy in assessing the relative risk of a particular target. Coast Guard officials added that they are working with DHS’s Office of Risk Management and Analysis in studying ways to improve how it assesses secondary economic impacts. Challenges in assessing security risks to OCS facilities: We determined that the Coast Guard did not conduct MSRAM assessments for all 50 of the OCS facilities that are subject to federal security requirements in 2011. Coast Guard guidance calls for MSRAM analysts to identify and assess all significant targets that fall within a unit’s area of responsibility, which includes all security-regulated OCS facilities. Specifically, as of May 2011, we found that MSRAM did not include 12 of the 50 OCS facilities operating at that time. Coast Guard officials generally agreed with our finding and they have since incorporated these 12 facilities into MSRAM and completed the required risk assessments. 
While the Coast Guard plans to update its policies and procedures for inspecting and ensuring the security of OCS facilities in the future, the current set of policies and procedures do not call for an updated list of OCS facilities to be provided to MSRAM analysts to assess the security risks to such facilities annually. Coast Guard officials acknowledged that their policies and procedures did not include this requirement. Revising policies and procedures to include such a requirement is important in that the number of OCS facilities could change each year. For example, some facilities may drop below the production or personnel thresholds described earlier in this statement, thereby falling outside the scope of 33 C.F.R. part 106, or other facilities could meet or exceed such thresholds, thereby rendering them subject to part 106. Standards for Internal Control in the Federal Government state that policies and procedures enforce management directives and help ensure that actions are taken to address risks. In addition, internal control standards state that such control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and for achieving effective results. Developing such procedures could help ensure that the Coast Guard carries out its risk assessment requirements for such security-regulated OCS facilities. Challenges in assessing security risks to offshore energy infrastructure that is not subject to security requirements: With respect to OCS facilities, analysts only use MSRAM to assess security risks associated with those OCS facilities that are regulated for security under 33 C.F.R. part 106. For example, the Deepwater Horizon did not meet the threshold criteria subjecting it to regulation under part 106, and therefore, MSRAM was not used to assess its security risks (see fig. 2 for a photo of the Deepwater Horizon explosion). 
According to Coast Guard officials, mobile offshore drilling units (MODUs), such as the Deepwater Horizon, do not generally pose a risk of a terrorist attack because there is little chance of an oil spill while these units are drilling and have not yet struck oil. However, the officials noted that there is a brief period after a drilling unit strikes a well but before the well has been sealed and connected to a production facility. The Deepwater Horizon was in this stage when the explosion occurred, resulting in such a large oil spill. During that period, MODUs could be at risk of a terrorist attack that could have significant consequences despite a facility not meeting the production or personnel thresholds. For example, such risks could involve the reliability of blowout preventer valves—specialized valves that prevent a well from spewing oil in the case of a blowout. Gaining a fuller understanding of the security risks associated with MODUs, such as the Deepwater Horizon, could improve the quality of program decisions made by Coast Guard managers on whether actions may be needed to ensure the security of this type of facility. According to Coast Guard officials, they are studying the “lessons learned” from the Deepwater Horizon incident and part of the study involves examining whether analysts should use MSRAM to assess MODUs in the future. Challenges in assessing systemic or network risks: MSRAM does not assess systemic or network risks because, according to Coast Guard officials, these types of assessments are beyond the intended use of MSRAM. The 2009 National Infrastructure Protection Plan, 2010 DHS Quadrennial Review, and a National Research Council evaluation of DHS risk assessment efforts have determined that gaining a better understanding of network risks would help to understand multiplying consequences of a terrorist attack or simultaneous attacks on key facilities. 
Understanding “network” risks involves gaining a greater understanding of how a network is vulnerable to a diverse range of threats. Examining how such vulnerabilities create strategic opportunities for intelligent adversaries with malevolent intent is central to this understanding. For example, knowing what damage a malicious adversary could achieve by exploiting weaknesses in an oil-distribution network offers opportunities for improving the resiliency of the network within a given budget. How the Coast Guard assesses offshore infrastructure within the broader set of networks is important. The findings of the National Commission on the BP Deepwater Horizon Oil Spill incident illustrate how examining networks or systems from a safety or engineering perspective can bring greater knowledge of how single facilities intersect with broader systems. The report noted that “complex systems almost always fail in complex ways” and cautioned that attempting to identify a single cause for the Deepwater Horizon incident would provide a dangerously incomplete picture of what happened. As a result, the report examined the Deepwater Horizon incident with an expansive view toward the role that industry and government sectors played in assessing vulnerabilities and the impact the incident had on economic, social, and environmental systems. Enhancing knowledge about the vulnerabilities of networks or systems with which OCS facilities and deepwater ports intersect could improve the quality of information that informs program and budget decisions on how to best ensure security and use scarce resources in a constrained fiscal environment. Doing so would also be consistent with DHS’s Quadrennial Review and other DHS guidance and would provide information to decision makers that could minimize the likelihood of being unprepared for a potential attack. Coast Guard officials agreed that assessing “network effects” is a challenge and they are examining ways to meet this challenge. 
However, the Coast Guard’s work in this area is in its infancy and there is uncertainty regarding the way in which the Coast Guard will move forward in measuring “network effects.” The threat of terrorism against energy tankers and offshore energy infrastructure highlights the importance of the Coast Guard having policies and procedures in place to better ensure the security of energy tankers, OCS facilities, and deepwater ports. The Coast Guard has taken steps to implement prior GAO recommendations to enhance energy tanker security, and it continues to work towards implementing the three outstanding recommendations. Improvements in security could help to prevent a terrorist attack against this infrastructure, which could have significant consequences, such as those resulting from the Deepwater Horizon incident. While the Coast Guard does not consider OCS facilities that it has assessed in MSRAM to be high risk, it is important to assess all OCS facilities as required by Coast Guard guidance. Since May 2011, when we determined that some OCS facilities were not assessed, the Coast Guard has completed its assessments for the previously omitted facilities. However, given that the list of security-regulated facilities may change each year based on factors such as production volume, it is important to ensure that any facilities added to the list in the future will be assessed for security risks in MSRAM. By revising policies and procedures to help ensure that an updated list of OCS facilities is provided to MSRAM analysts on an annual basis, the Coast Guard would be better positioned to ensure that all risk assessments for facilities requiring such assessments be conducted in a manner consistent with the law and presidential directive. 
To strengthen the Coast Guard’s efforts to assess security risks and ensure the security of OCS facilities, we recommend that the Commandant of the Coast Guard revise policies and procedures so that MSRAM analysts receive the annual updated list of security-regulated OCS facilities and risk assessments are conducted on all such facilities. We provided a draft of this testimony to DHS and DOJ for comment. The Coast Guard concurred with our recommendation. DHS and DOJ provided oral and technical comments, which we incorporated as appropriate. Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my prepared statement. It also concludes our work on the Coast Guard’s efforts to assess security risks for offshore energy infrastructure; however, we will continue our broader work on the security of offshore energy infrastructure, including Coast Guard security inspections and other measures to better secure OCS facilities and deepwater ports, as well as other challenges. We will continue to work with the Coast Guard to develop solutions to ensure that inspections of OCS facilities are completed as required. I would be happy to respond to any questions you may have. Key contributors to this testimony were Christopher Conrad, Assistant Director; Neil Asaba, Analyst-in-Charge; Alana Finley; Christine Kehr; Colleen McEnearney; Erin O’Brien; Jodie Sandel; and Suzanne Wren. Chuck Bausell contributed economics expertise, Pamela Davidson assisted with design and methodology, Tom Lombardi provided legal support, and Jessica Orr assisted with testimony preparation.
Maritime Security: Updating U.S. Counterpiracy Action Plan Gains Urgency as Piracy Escalates off the Horn of Africa. GAO-11-449T. Washington, D.C.: March 15, 2011.
Quadrennial Homeland Security Review: 2010 Reports Addressed Many Required Elements, but Budget Planning Not Yet Completed. GAO-11-153R. Washington, D.C.: December 16, 2010.
Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010.
Maritime Security: Actions Needed to Assess and Update Plan and Enhance Collaboration among Partners Involved in Countering Piracy off the Horn of Africa. GAO-10-856. Washington, D.C.: September 24, 2010.
Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010.
Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: December 10, 2007.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's economy and security are heavily dependent on oil, natural gas, and other energy commodities. Al-Qa'ida and other groups with malevolent intent have targeted energy tankers and offshore energy infrastructure because of their importance to the nation's economy and national security. The U.S. Coast Guard, a component of the Department of Homeland Security (DHS), is the lead federal agency for maritime security, including the security of energy tankers and offshore energy infrastructure. The Federal Bureau of Investigation (FBI) also has responsibilities for preventing and responding to terrorist incidents. This testimony discusses the extent to which (1) the Coast Guard and the FBI have taken actions to address GAO's prior recommendations to prevent and respond to a terrorist incident involving energy tankers and (2) the Coast Guard has taken actions to assess the security risks to offshore energy infrastructure and related challenges. This testimony is based on products issued from December 2007 through March 2011 and recently completed work on the Coast Guard's actions to assess security risks. GAO reviewed documentation of the Coast Guard's risk model and relevant laws, regulations, policies, and procedures, and interviewed Coast Guard officials. The Coast Guard and the FBI have made progress implementing prior recommendations GAO made to enhance energy tanker security. In 2007, GAO made five recommendations to address challenges in ensuring the effectiveness of federal agencies' actions to protect energy tankers and implement response plans. The Coast Guard and the FBI have implemented two recommendations, specifically: (1) the Coast Guard, in coordination with U.S. 
Customs and Border Protection, developed protocols for facilitating the recovery and resumption of trade following a disruption to the maritime transportation system, and (2) the Coast Guard and the FBI participated in local port exercises that executed multiple response plans simultaneously. The Coast Guard has made progress on a third recommendation through work on a national strategy for the security of certain dangerous cargoes. It also plans to develop a resource allocation plan, starting in April 2012, which may help address the need to balance security responsibilities. However, the Coast Guard and the FBI have not yet taken action on a fourth recommendation to develop an operational plan to integrate the national spill and terrorism response plans. According to DHS, it plans to revise the National Response Framework, but no decision has been made regarding whether the separate response plans will be integrated. Also, DHS has not yet taken action on the final recommendation to develop explicit performance measures for emergency response capabilities and use them in risk-based analyses to set priorities for acquiring needed response resources. According to DHS, it is revising its emergency response grant programs, but does not have specific plans to develop performance measures as part of this effort. The Coast Guard has taken actions to assess the security risks to offshore energy infrastructure, which includes Outer Continental Shelf (OCS) facilities (facilities that are involved in producing oil or natural gas) and deepwater ports (facilities used to transfer oil and natural gas from tankers to shore), but improvements are needed. The Coast Guard has used its Maritime Security Risk Analysis Model (MSRAM) to examine the security risks to OCS facilities and deepwater ports. 
To do so, the Coast Guard has coordinated with the intelligence community and stakeholders, such as the Department of the Interior's Bureau of Ocean Energy Management, Regulation and Enforcement. However, the Coast Guard faces complex and technical challenges in assessing risks. For example, the Coast Guard does not have data on the ability of an OCS facility to withstand an attack. The Coast Guard generally recognizes these challenges and has actions underway to study or address them. Further, GAO determined that as of May 2011, the Coast Guard had not assessed security risks for 12 of the 50 security-regulated OCS facilities that are to be subjected to such assessments. Coast Guard officials later determined that they needed to add these OCS facilities to MSRAM for assessment and have completed the required assessments. However, while the list of security-regulated facilities may change each year based on factors such as production volume, the Coast Guard's current policies and procedures do not call for Coast Guard officials to provide an annual updated list of regulated OCS facilities to MSRAM analysts. Given the continuing threat to such offshore facilities, revising its procedures could help ensure that the Coast Guard carries out its risk assessment requirements for security-regulated OCS facilities. GAO is recommending that the Coast Guard revise policies and procedures to ensure its analysts receive the annual updated list of regulated offshore energy facilities to ensure risk assessments are conducted on those facilities. The Coast Guard concurred with this recommendation.
As computer technology has advanced, federal agencies have become dependent on computerized information systems to carry out their operations and to process, maintain, and report essential information. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions, deliver services to the public, and account for their resources without these information assets. Information security is thus especially important for federal agencies to ensure the confidentiality, integrity, and availability of their information and information systems. Conversely, ineffective information security controls can result in significant risk to a broad array of government operations and assets. For example:
● Resources, such as federal payments and collections, could be lost or stolen.
● Computer resources could be used for unauthorized purposes or to launch attacks on other computer systems.
● Sensitive information, such as taxpayer data, Social Security records, medical records, intellectual property, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of identity theft, espionage, or other types of crime.
● Critical operations, such as those supporting critical infrastructure, national defense, and emergency services, could be disrupted.
● Data could be added, modified, or deleted for purposes of fraud, subterfuge, or disruption.
● Agency missions could be undermined by embarrassing incidents that result in diminished confidence in the ability of federal organizations to conduct operations and fulfill their responsibilities.
Cyber threats to federal information systems and cyber-based critical infrastructures are evolving and growing. In September 2007, we reported that these threats can be unintentional and intentional, targeted or nontargeted, and can come from a variety of sources. 
Unintentional threats can be caused by inattentive or untrained employees, software upgrades, maintenance procedures, and equipment failures that inadvertently disrupt systems or corrupt data. Intentional threats include both targeted and nontargeted attacks. A targeted attack occurs when a group or individual attacks a specific system or cyber-based critical infrastructure. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or other malicious software is released on the Internet with no specific target. Government officials are concerned about attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations. For example, in February 2009, the Director of National Intelligence testified that foreign nations and criminals have targeted government and private sector networks to gain a competitive advantage and potentially disrupt or destroy them, and that terrorist groups have expressed a desire to use cyber attacks as a means to target the United States. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical information systems, including foreign nations engaged in espionage and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees and contractors working within an organization. Table 1 summarizes the groups and individuals considered to be key sources of cyber threats to our nation’s information systems and cyber infrastructures. These groups and individuals have a variety of attack techniques at their disposal. Furthermore, as we have previously reported, these techniques have characteristics that can vastly enhance the reach and impact of their actions, such as the following:
● Attackers do not need to be physically close to their targets to perpetrate a cyber attack.
● Technology allows actions to easily cross multiple state and national borders. 
● Attacks can be carried out automatically, at high speed, and by attacking a vast number of victims at the same time.
● Attackers can more easily remain anonymous.
Table 2 identifies the types and techniques of cyber attacks that are commonly used. Government officials are increasingly concerned about the potential for a cyber attack. According to the Director of National Intelligence, the growing connectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt telecommunications, electrical power, and other critical infrastructures. As government, private sector, and personal activities continue to move to networked operations, as digital systems add ever more capabilities, as wireless systems become more ubiquitous, and as the design, manufacture, and service of IT have moved overseas, the threat will continue to grow. Over the past year, cyber exploitation activity has grown more sophisticated, more targeted, and more serious. For example, the Director of National Intelligence also stated that, in August 2008, the Georgian national government’s Web sites were disabled during hostilities with Russia, which hindered the government’s ability to communicate its perspective about the conflict. The director expects disruptive cyber activities to become the norm in future political and military conflicts. Perhaps reflective of the evolving and growing nature of the threats to federal systems, agencies are reporting an increasing number of security incidents. These incidents put sensitive information at risk. Personally identifiable information about Americans has been lost, stolen, or improperly disclosed, thereby potentially exposing those individuals to loss of privacy, identity theft, and financial crimes. Reported attacks and unintentional incidents involving critical infrastructure systems demonstrate that a serious attack could be devastating. 
Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. When incidents occur, agencies are to notify the federal information security incident center, US-CERT. As shown in figure 1, the number of incidents reported by federal agencies to US-CERT has increased dramatically over the past 3 years, from 5,503 incidents in fiscal year 2006 to 16,843 incidents in fiscal year 2008 (about a 206 percent increase). US-CERT categorizes incidents in the following manner:
● Unauthorized access: An individual gains logical or physical access without permission to a federal agency’s network, system, application, data, or other resource.
● Denial of service: An attack that successfully prevents or impairs the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack.
● Malicious code: Successful installation of malicious software (e.g., a virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software.
● Improper usage: A person violates acceptable computing use policies.
● Scans/probes/attempted access: Any activity that seeks to access or identify a federal agency computer, open ports, protocols, services, or any combination of these for later exploit. This activity does not directly result in a compromise or denial of service.
● Investigation: Unconfirmed incidents that are potentially malicious, or anomalous activity deemed by the reporting entity to warrant further review. 
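The growth in reported incidents can be verified with simple arithmetic. The sketch below is illustrative only: the two fiscal-year totals are the figures cited above, and the helper function is not part of any US-CERT reporting tool.

```python
# Incident totals reported by federal agencies to US-CERT,
# as cited in the testimony (fiscal years 2006 and 2008).
incidents = {2006: 5503, 2008: 16843}

def percent_increase(old: int, new: int) -> float:
    """Percentage growth from the old count to the new count."""
    return (new - old) / old * 100

growth = percent_increase(incidents[2006], incidents[2008])
print(f"about a {growth:.0f} percent increase")  # about a 206 percent increase
```

Note that a "206 percent increase" means the fiscal year 2008 total was roughly three times the fiscal year 2006 total, not two times, since the increase is measured relative to the starting count.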
As noted in figure 2, the three most prevalent types of incidents reported to US-CERT during fiscal years 2006 through 2008 were unauthorized access, improper usage, and investigation. The growing threats and increasing number of reported incidents highlight the need for effective information security policies and practices. However, serious and widespread information security control deficiencies continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. In their fiscal year 2008 performance and accountability reports, 20 of 24 major agencies indicated that inadequate information system controls over financial systems and information were either a significant deficiency or a material weakness for financial statement reporting (see fig. 3). Similarly, our audits have identified control deficiencies in both financial and nonfinancial systems, including vulnerabilities in critical federal systems. For example:
● We reported in September 2008 that although the Los Alamos National Laboratory (LANL), one of the nation’s weapons laboratories, had implemented measures to enhance the information security of its unclassified network, vulnerabilities continued to exist in several critical areas, including (1) identifying and authenticating users of the network, (2) encrypting sensitive information, (3) monitoring and auditing compliance with security policies, (4) controlling and documenting changes to a computer system’s hardware and software, and (5) restricting physical access to computing resources. As a result, sensitive information on the network, including unclassified controlled nuclear information, naval nuclear propulsion information, export control information, and personally identifiable information, was exposed to an unnecessary risk of compromise. 
Moreover, the risk was heightened because about 300 (or 44 percent) of the 688 foreign nationals who had access to the unclassified network as of May 2008 were from countries classified as sensitive by the Department of Energy, such as China, India, and Russia.
● In May 2008 we reported that the Tennessee Valley Authority (TVA), a federal corporation and the nation’s largest public power company, which generates and transmits electricity using its 52 fossil, hydro, and nuclear power plants and transmission facilities, had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures. Both its corporate network infrastructure and the control systems networks and devices at individual facilities and plants were vulnerable to disruption. In addition, the interconnections between TVA’s control system networks and its corporate network increased the risk that security weaknesses on the corporate network could affect control systems networks, and we determined that the control systems were at increased risk of unauthorized modification or disruption by both internal and external threats. These deficiencies placed TVA at increased and unnecessary risk of being unable to respond properly to a major disruption resulting from an intended or unintended cyber incident, which could, in turn, affect the agency’s operations and its customers. Vulnerabilities in the form of inadequate information system controls have been found repeatedly in our prior reports, as well as in IG and agency reports. 
These weaknesses fall into five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that security features for hardware and software are identified and implemented and that changes to that configuration are systematically controlled; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Figure 4 shows the number of major agencies with weaknesses in these five areas. Over the last several years, most agencies have not implemented controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. Our analysis of IG and agency reports, as well as our own work, found that agencies did not have adequate controls in place to ensure that only authorized individuals could access or manipulate data on their systems and networks. To illustrate, weaknesses were reported in such controls at 23 of 24 major agencies for fiscal year 2008. For example, agencies did not consistently (1) identify and authenticate users to prevent unauthorized access, (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate, (3) establish sufficient boundary protection mechanisms, (4) apply encryption to protect sensitive data on networks and portable devices, and (5) log, audit, and monitor security-relevant events. At least nine agencies also lacked effective controls to restrict physical access to information assets. 
We previously reported that many of the data losses occurring at federal agencies over the past few years were a result of physical thefts or improper safeguarding of systems, including laptops and other portable devices. In addition, agencies did not always configure network devices and services to prevent unauthorized access and ensure system integrity, patch key servers and workstations in a timely manner, or segregate incompatible duties among different individuals or groups so that one individual does not control all aspects of a process or transaction. Furthermore, agencies did not always ensure that continuity of operations plans contained all essential information necessary to restore services in a timely manner. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of information. An underlying cause of the information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented key elements of an agencywide information security program. An agencywide security program, required by the Federal Information Security Management Act, provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Our analysis determined that 23 of 24 major federal agencies had weaknesses in their agencywide information security programs. Because of the persistent nature of these vulnerabilities and associated risks, we continued to designate information security as a governmentwide high-risk issue in our most recent biennial report to Congress, a designation we have made in each report since 1997. 
Over the past several years, we and the IGs have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, and contingency planning. We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting shortcomings in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. The effective implementation of these recommendations will strengthen the security posture at these agencies. In addition, the White House, the Office of Management and Budget (OMB), and certain federal agencies have continued or launched several governmentwide initiatives that are intended to enhance information security at federal agencies. These key initiatives are discussed below.
● Comprehensive National Cybersecurity Initiative: In January 2008, President Bush began to implement a series of initiatives aimed primarily at improving the Department of Homeland Security’s and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. While these initiatives have not been made public, the Director of National Intelligence stated that they include defensive, offensive, research and development, and counterintelligence efforts, as well as a project to improve public/private partnerships. 
● The Information Systems Security Line of Business: The goal of this initiative, led by OMB, is to improve the level of information systems security across government agencies and reduce costs by sharing common processes and functions for managing information systems security. Several agencies have been designated as service providers for IT security awareness training and FISMA reporting.
● Federal Desktop Core Configuration: For this initiative, OMB directed agencies that have Windows XP deployed and plan to upgrade to Windows Vista operating systems to adopt the security configurations developed by the National Institute of Standards and Technology, the Department of Defense, and the Department of Homeland Security. The goal of this initiative is to improve information security and reduce overall IT operating costs.
● SmartBUY: This program, led by the General Services Administration, is to support enterprise-level software management through the aggregate buying of commercial software governmentwide in an effort to achieve cost savings through volume discounts. The SmartBUY initiative was expanded to include commercial off-the-shelf encryption software and to permit all federal agencies to participate in the program. The initiative is also to include licenses for information assurance.
● Trusted Internet Connections Initiative: This is an effort designed to optimize individual agency network services into a common solution for the federal government. The initiative is to facilitate the reduction of external connections, including Internet points of presence, to a target of 50.
We currently have ongoing work that addresses the status, planning, and implementation efforts of several of these initiatives. In summary, the threats to federal information systems are evolving and growing, and federal systems are not sufficiently protected to consistently thwart the threats. 
Unintended incidents and attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations, have the potential to cause significant damage to the ability of agencies to effectively perform their missions, deliver services to constituents, and account for their resources. Opportunities exist to improve information security at federal agencies. The White House, OMB, and certain federal agencies have initiated efforts that are intended to strengthen the protection of federal information and information systems. Until such opportunities are seized and fully exploited, and agencies fully and effectively implement the hundreds of recommendations by us and by IGs to mitigate information security control deficiencies and implement agencywide information security programs, federal information and systems will remain vulnerable. Chairwoman Watson, this concludes my statement. I would be happy to answer questions at the appropriate time. If you have any questions regarding this report, please contact Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512-6244 or [email protected]. Other key contributors to this report include Charles Vrabel (Assistant Director), Larry Crosland, Neil Doherty, Rebecca LaPaze, and Jayne Wilson.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public's trust is essential. The need for a vigilant approach to information security has been demonstrated by the pervasive and sustained computer-based (cyber) attacks against the United States and others that continue to pose a potentially devastating impact to systems and the operations and critical infrastructures that they support. GAO was asked to describe (1) cyber threats to federal information systems and cyber-based critical infrastructures and (2) control deficiencies that make these systems and infrastructures vulnerable to those threats. To do so, GAO relied on its previous reports and reviewed agency and inspectors general reports on information security. Cyber threats to federal information systems and cyber-based critical infrastructures are evolving and growing. These threats can be unintentional and intentional, targeted or nontargeted, and can come from a variety of sources, such as foreign nations engaged in espionage and information warfare, criminals, hackers, virus writers, and disgruntled employees and contractors working within an organization. Moreover, these groups and individuals have a variety of attack techniques at their disposal, and cyber exploitation activity has grown more sophisticated, more targeted, and more serious. As government, private sector, and personal activities continue to move to networked operations, as digital systems add ever more capabilities, as wireless systems become more ubiquitous, and as the design, manufacture, and service of information technology have moved overseas, the threat will continue to grow. 
In the absence of robust security programs, agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. These developments have led government officials to become increasingly concerned about the potential for a cyber attack. According to GAO reports and annual security reporting, federal systems are not sufficiently protected to consistently thwart cyber threats. Serious and widespread information security control deficiencies continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. For example, over the last several years, most agencies have not implemented controls to sufficiently prevent, limit, or detect access to computer networks, systems, and information, and weaknesses were reported in such controls at 23 of 24 major agencies for fiscal year 2008. Agencies also did not always configure network devices and services properly, segregate incompatible duties, or ensure that continuity of operations plans contained all essential information. An underlying cause of these weaknesses is that agencies have not yet fully or effectively implemented key elements of their agencywide information security programs. To improve information security, efforts have been initiated that are intended to strengthen the protection of federal information and information systems. For example, the Comprehensive National Cybersecurity Initiative was launched in January 2008 and is intended to improve federal efforts to protect against intrusion attempts and anticipate future threats. 
Until such opportunities are seized and fully exploited and GAO recommendations to mitigate identified control deficiencies and implement agencywide information security programs are fully and effectively implemented, federal information and systems will remain vulnerable.
In 1991, Congress passed the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA), which added Section 28 to the Federal Transit Act. ISTEA required FTA to establish a state-managed safety and security oversight program for rail transit agencies. As a result, on December 27, 1995, FTA published a set of regulations, called Rail Fixed Guideway Systems; State Safety Oversight (subsequently referred to as FTA’s rule), to improve the safety and security of rail transit agencies. FTA’s rule required state oversight agencies to have approved transit agencies’ safety plans by January 1, 1997, and security plans by January 1, 1998. At the time of the FTA rule’s publication, 5 of 19 states affected by the FTA rule had oversight programs in place for rail transit safety and security, and no oversight agency met all the requirements in FTA’s rule. During the first few years of implementation, FTA worked with states to develop compliant programs that addressed FTA’s requirements. Ten years after FTA promulgated the initial rule, FTA published a revision to it in the Federal Register on April 29, 2005, which required oversight agencies to comply with the revised rule by May 1, 2006. FTA relies on staff in its Office of Safety and Security to lead the State Safety Oversight program—and hired the current Program Manager in March 2006. This manager is also responsible for other safety duties in addition to the State Safety Oversight program. Additional FTA staff within the Office of Safety and Security assist with outreach to transit and oversight agencies and additional tasks. FTA regional personnel are not formally involved with the program’s day-to-day activities, though officials from FTA Regional Offices help address specific compliance issues that occasionally arise and help states with new transit agencies establish new oversight agencies. 
FTA also relies on contractors to do many of the day-to-day activities, ranging from developing and implementing FTA’s audit program of state oversight agencies to developing and providing training classes on system safety. The revised FTA rule applies to all states with rail fixed guideway systems operating in their jurisdictions. As specified in the FTA rule, a rail fixed guideway system is defined as: “any light, heavy, or rapid rail system; monorail, inclined plane, funicular, trolley, or automated guideway that is not regulated by FRA and is included in FTA’s calculation of fixed guideway route miles or receives funding under FTA’s formula program for urbanized areas (49 U.S.C. 5336); or has submitted documentation to FTA indicating its intent to be included in FTA’s calculation of fixed guideway route miles to receive funding under FTA’s formula program for urbanized areas (49 U.S.C. 5336).” Figure 1 shows examples of the types of rail systems that are included in the State Safety Oversight program. FTA’s program generally differs from those of other agencies within DOT, such as the Federal Aviation Administration (FAA), FRA, and PHMSA. These agencies promulgate their own technical standards that govern how vehicles or facilities must be operated or constructed, while FTA does not prescribe technical standards, though the state oversight agencies may develop their own. FTA designed the State Safety Oversight program as one in which FTA, other federal agencies, states, and rail transit agencies collaborate to ensure the safety and security of rail transit systems. Under the program, FTA is responsible for developing the regulations and guidance governing the program, auditing state safety oversight agencies to ensure the regulations are enforced, and providing technical assistance and other information; FTA provides funding to oversight agencies in only limited instances under the program. 
State oversight agencies directly oversee the safety and security of rail transit systems by reviewing safety and security plans, performing audits, and investigating accidents. Rail transit agencies are responsible for developing safety and security plans, reporting incidents to the oversight agencies, and following all other regulations state oversight agencies set for them. In addition to FTA, federal agencies such as FRA, DHS’s Office of Grants and Training, and TSA also have regulatory or funding roles related to rail transit safety and security. FTA officials stated that they used a multi-agency system-safety approach in developing the State Safety Oversight program. Federal and state agencies and rail transit agencies collaborate to ensure the rail transit system is operated safely and each of these agencies has some monitoring responsibility, either of themselves or another entity. FTA oversees and administers the program. As the program administrator, FTA is responsible for developing the rules and guidance that state oversight agencies are to use to perform their oversight of rail transit agencies. FTA also is responsible for informing oversight and transit agencies of new program developments, facilitating and informing the transit and oversight agencies of training available through FTA or other organizations, facilitating information sharing among program participants, and providing technical assistance. FTA officials stated they emphasize that components of a risk-management approach to safety and security, such as hazard analysis and risk-mitigation procedures, are included in the program standard that each state oversight agency issues to the transit agencies they oversee. This is consistent with our position that agencies make risk-based decisions on where their assets can best be used, both in transportation security and safety. 
However, FTA recognizes that some parts of the State Safety Oversight program are not risk-based, including requiring minimum standards for all transit agencies in the program, no matter their size or ridership. While FTA officials stated that FTA does not inspect transit agencies with regard to safety, it is responsible for ensuring, through reviews of oversight agency reports and audits, that state oversight agencies comply with the program requirements. For example, according to the FTA rule, when a state proposes to designate an oversight agency, FTA may review the proposal to ensure the designated agency has the authority to perform the required duties without any apparent conflicts. FTA has recommended in two instances that a state choose a different agency because the oversight agency that the state proposed appeared to be too closely affiliated with the transit agency and did not appear to be independent. In addition, FTA is responsible for reviewing the annual reports oversight agencies submit. FTA officials ensure the reports include all the required information—such as descriptions of program resources and causes of accidents and collisions—then compile this information for a program annual report and look for industry-wide safety and security trends or problems. Furthermore, FTA is responsible for performing audits of oversight agencies to ensure they are complying with program requirements and guidance. FTA audits evaluate how well an oversight agency is meeting the requirements of the FTA rule, including whether the oversight agency is investigating accidents properly, conducting its safety and security reviews properly, and reporting to FTA all the information that is required. Finally, FTA does not provide funding to states for the operation of their oversight programs. 
However, states may use FTA Section 5309 (New Starts program) funds—normally used to pay for transit-related capital expenses—to defray the cost of setting up their oversight agency before a transit agency begins operations. Also, FTA officials stated this year that FTA used a portion of the funding originally designated for FTA audits to pay for one person from each oversight agency to attend training on the revisions to FTA’s rule, which oversight agencies had to comply with by May 1, 2006. In the State Safety Oversight program, state oversight agencies are responsible for directly overseeing rail transit agencies. According to the FTA rule, states must designate an agency to perform this oversight function at the time FTA enters into a grant agreement for any “New Starts” project involving a new rail transit system, or before the transit agency applies for FTA formula funding. States have designated several different types of agencies to serve as oversight agencies. Most frequently—in 17 cases—states have designated their departments of transportation to serve in this role. In three instances—California, Colorado, and Massachusetts—states have designated utilities commissions or regulators to oversee rail transit safety and security. According to state officials, since these bodies already had regulatory and oversight authority, it was a natural extension of their powers to add rail transit oversight to their responsibilities. Two states have designated emergency management or public safety departments to oversee their rail transit agencies. Officials in one state, Illinois, have designated two separate oversight agencies, both local transportation funding authorities, to oversee the two rail transit agencies operating in the state. In the Washington, D.C. (District of Columbia), region, the rail transit system runs between two states and the District of Columbia. 
These states and the District of Columbia established the Tri-State Oversight Committee as the designated oversight agency. Finally, one state, New York, has given its oversight authority to its Public Transportation Safety Board (PTSB). PTSB officials said they have authority similar to the public utilities commissions discussed above, but have no other mission than ensuring and overseeing transit safety in New York. See appendix II for a table showing each oversight agency and the rail transit agencies they oversee. The individual authority each state oversight agency has over transit agencies varies widely. While FTA’s rule gives state oversight agencies authority to mandate certain rail safety and security practices as the oversight agencies see fit, it does not give the oversight agencies authority to take enforcement actions, such as fining rail transit agencies or shutting down their operations. However, we found five states where the states granted their oversight agencies some punitive authority over the rail transit agencies they oversee. Officials from oversight agencies that have the authority to fine or otherwise punish rail transit agencies all stated that they rarely, if ever, use that authority, but each stated that they believed it gives their actions extra weight and forces transit agencies to acquiesce to the oversight agency more readily than they otherwise might. The majority of oversight agencies, 19 of the 24 with which we spoke, have no such punitive authority, though officials from some oversight agencies stated they may be able to withhold grants their oversight agencies provide to the transit agencies they oversee. Although officials from several of these agencies stated that they believe they would be more effective if they did have enforcement authority, under the current program this authority would be granted by individual states. 
While the states have designated a number of different types of agencies with varying authority to oversee transit agencies, FTA has a basic set of rules it requires each oversight agency to follow. In the program, oversight agencies are responsible for the following:
- Developing a program standard that outlines oversight and rail transit agency responsibilities, providing “guidance to the regulated rail transit properties concerning processes and procedures they must have in place to be in compliance with the State Safety Oversight program.”
- Reviewing transit agencies’ safety and security plans and annual reports.
- Conducting safety and security audits of rail transit agencies on at least a triennial basis.
- Tracking findings from these audits to ensure they are addressed, and tracking and eliminating hazardous conditions that the transit agency reports to the oversight agency outside the audit process.
- Investigating accidents that meet a certain damage or severity threshold and developing a corrective action plan for the causes leading to the accident.
- Submitting an annual report to FTA detailing their oversight activities, including results of accident investigations and the status of ongoing corrective actions.
FTA’s rule also lays out several specific requirements that oversight agencies must require transit agencies to follow, such as developing separate system safety and security plans, performing internal safety and security audits over a 3-year cycle, developing a hazard management process, and reporting certain accidents to oversight agencies within 2 hours. The locations and types of transit agencies participating in the program are shown in figure 2. In addition to FTA, the state oversight agencies, and the rail transit agencies, two entities within DHS are involved in transit safety and security. 
The Aviation and Transportation Security Act (ATSA), passed by Congress in response to the September 11, 2001, terrorist attacks, gave TSA authority for security over all transportation modes, including authority to issue security regulations. While TSA’s most public transportation security duties are its airport screening activities, TSA has taken steps to enhance all rail security, including rail transit. For example, in May 2004, TSA issued security directives to rail transit agencies to ensure all agencies were implementing a consistent baseline of security. Also, TSA has hired 100 rail security inspectors, as authorized by Congress. While the exact responsibilities of the inspectors are still being determined, a TSA official stated that they will monitor and enforce compliance with the security directives by passenger rail agencies, as well as increase security awareness among rail transit agencies, riders, and others. In contrast to the enforcement role of TSA, another DHS agency, the Office of Grants and Training, plays a role in ensuring rail transit security through supporting security initiatives. The Office of Grants and Training (formerly known as the Office of Domestic Preparedness) is the primary federal source of security funding for rail transit systems, as well as for state and local jurisdictions; this security funding goes toward purchasing equipment, supporting the planning and execution of exercises, and providing technical assistance to prevent, prepare for, and respond to acts of terrorism. The Office of Grants and Training has provided over $320 million to rail transit providers through the Urban Area Security Initiative and Transit Security Grant Program. FRA, within DOT, also plays a role in ensuring transit agencies operate safely. 
In general, FRA exercises its jurisdiction over parts of a rail transit system that share track with the general railroad system, or places where a rail transit system and the general railroad system share a connection (e.g., a grade crossing). According to FRA, if a rail transit vehicle were to operate on the same tracks and at the same time as general railroads, FRA regulations would require the rail transit agency operating the vehicle to use much sturdier (and more expensive) vehicles. To avoid these requirements, 11 rail transit agencies have requested waivers from FRA and, according to an FRA official, as of June 2006, FRA had granted waivers to 10 of the 11 rail transit agencies that applied for them. Finally, NTSB also plays a role in enhancing and ensuring rail transit safety, though it has no formal role in FTA’s oversight program. NTSB has authority to investigate accidents involving passenger railroads, including rail transit agencies. NTSB officials stated they generally will investigate only the more serious accidents, such as those involving fatalities or injuries, or those involving recurring safety issues. Often, NTSB accident investigations of rail transit accidents will result in recommendations to federal agencies or rail transit agencies to eliminate the condition that led to the accident. The majority of officials from transit and oversight agencies with whom we spoke agreed that the State Safety Oversight program improves safety and security in their organizations. These officials provided illustrations about how the program enhanced safety or security; however, they have limited statistical evidence that the oversight program improved safety or security. FTA has obtained a variety of information on the program from sources such as national transit data, annual reports from oversight agencies, and its own audits of the oversight agencies. However, these data are not linked to any program goals or performance measures. 
FTA officials recognize the need for performance measures for its safety and security programs and are taking steps in 2006 to begin to address this need. Finally, although FTA expected to audit the oversight agencies every 3 years, it has not conducted these audits as frequently as it had planned (it has conducted eight audits since September 2001). However, program officials stated they are committed to getting “back on track” to meet the planned schedule. Both transit agency and oversight agency officials state that FTA’s State Safety Oversight program is worthwhile and valuable because it helps them maintain and improve safety and security. Of the 37 transit officials with whom we spoke, 35 believe the program that oversees their safety and security is worthwhile. One transit agency official explained that the oversight agency helps them identify larger, systemic issues. In addition, the program provides support to exert extra influence on a transit agency’s board of directors or senior management to get safety or security improvements implemented faster and improve the safety and security of their equipment. For example, one oversight agency helped its transit agency’s safety department address problems with train operators running red light signals by helping to convince the transit agency’s senior management to replace all signals with light-emitting diode (LED) signals that were brighter and more visible. Finally, transit agency officials believe that FTA’s program is an effective method for overseeing safety and security. Several officials said that they felt having a state or local (rather than national) oversight agency facilitated ongoing safety and security improvements and consistent working relationships with the oversight staff. 
In addition to transit agency officials, officials from 23 of the 24 state safety oversight agencies with whom we spoke believed that the State Safety Oversight program is valuable or very valuable for improving transit systems’ safety and security. Several officials commented that the program provides an incentive to examine safety and security issues and avoid complacency. Furthermore, several officials commented that they believed the current system worked well and that the program provides consistency, endowing the state safety oversight agencies with enough authority to accomplish their tasks. Also, officials said that having the states carry out the program provides ongoing oversight in addition to formal audits, helping keep constant attention on safety and security issues. Finally, several transit and oversight agency officials stated that, because they were subject to oversight, they believed they saw improved safety in their rail system, but it was difficult to show statistics proving this. For example, the California oversight agency found an 87 percent drop in rail transit collisions at the San Francisco transit agency (MUNI) from 1997, when the oversight agency began oversight, to 2005. Although FTA changed its definition of a reportable accident during this time period—making it impossible to determine exactly what impact external oversight had on MUNI safety—both MUNI and the oversight agency staff stated they were confident the oversight efforts had been a major factor in reducing accidents. APTA officials with whom we spoke were concerned that, although the State Safety Oversight program contains minimum requirements for safety and security, the previous industry-regulated approach encouraged industry officials to surpass minimum standards and continue striving for improved safety and security. However, transit officials with whom we spoke often discussed the benefits of a federal program. 
In addition, officials from 17 transit agencies reported that their respective state safety oversight agencies imposed requirements beyond those in FTA’s rule. One potential source of information about the State Safety Oversight program’s impact on safety and security is the data that FTA collects through the annual reports it requires state oversight agencies to submit. The reports include information on many different issues including program resources, accidents, fatalities, injuries, hazardous conditions, and any corrective actions taken resulting from audits or accident investigations. FTA officials stated they have used the oversight agencies’ reports to publish their own annual reports on transit safety; however, the information was not tied to any program goals or performance measures. In addition, the 2003 report is the most recent one FTA has issued. According to program officials, FTA has recognized the need for better information and performance measures for its safety and security programs and has not published a report since the 2003 report because it has been looking into improving the type of safety and security data it can collect, and how it can use the information to track program performance and progress toward yet-to-be-defined goals. FTA’s 2006 business plan for its Safety and Security Division includes a goal to continue developing and implementing a data-driven performance analysis and tracking system to help ensure management decisions are informed by data and focus on performance and accountability. As part of these efforts, FTA officials explained they are working with a contractor, who in turn is working with oversight and transit agencies to develop performance measures for the State Safety Oversight program. Another source of information is the audits of the oversight agencies that FTA had planned to conduct every 3 years. However, the agency has not met this schedule. 
Although the audits provide detailed information on specific oversight agencies, FTA has not brought together information from these audits to provide information on the safety and security of transit systems across the country. FTA tracks the deficiencies and areas of concern and follows up with oversight agency staff to assure that each state safety oversight agency resolves the suggested corrective actions. Given this lack of consistent audits, we are unsure if FTA has obtained enough information to provide a current picture of transit system safety and security, or a framework to identify potential challenges that oversight and transit agency officials may face in implementing the program. FTA has audited each state oversight agency that existed prior to 2004 at least once since the program began; two agencies were audited twice. However, FTA largely discontinued the audit program after the September 11, 2001, terrorist attacks and acknowledged that the agency’s priorities shifted in their wake. Nevertheless, officials indicated they continued to evaluate the readiness of rail transit projects to safely and securely enter operations. In addition, according to FTA officials, FTA is not conducting audits in fiscal year 2006 so it can use the money and time to help states comply with the revised FTA rule, and has planned a detailed outreach effort—including a workshop for oversight agency officials—to help ensure compliance. FTA plans to return to its triennial audit schedule in fiscal year 2007, with 10 audits scheduled for the first year to get back on the triennial schedule. Despite the program’s popularity with participants, FTA faces challenges in implementing the program’s revised rule and continuing to manage the program. 
First, several oversight agency officials stated they are not confident they have adequate numbers of staff to effectively oversee rail transit system safety and security, and they are unsure the current training available to them is sufficient. Also, we found the level of staffing and expertise of oversight agency staff varies widely across the country. A second challenge FTA faces in implementing the program is that many transit and oversight agency personnel are confused about how security issues in the program will be handled, and what agencies will be responsible for what actions, as TSA takes on a greater role in rail transit security. While a majority of both oversight and transit agency officials with whom we spoke endorsed the usefulness of the State Safety Oversight program, many of these same officials stated that they were unsure that they were adequately trained for their duties. Specifically, officials from 18 of 24 oversight agencies with which we spoke stated they believed additional training would help them provide more efficient and effective safety and security oversight. We found that the level of expertise of oversight agency staff varied widely across the country. For example, 11 of the 24 oversight agencies with which we spoke had oversight staff that had no career or educational background in transit safety or security. Conversely, another 11 oversight agencies required their staff to have certain levels of experience or education. For example, New York’s Public Transportation Safety Board requires its staff to have 5 years of experience in transit safety. According to some oversight agency officials who had no previous transit safety or security background, they had to rely on the transit agency staff they were overseeing to teach them about transit operations, safety, and security. These officials stated that if they left their positions, any new staff taking over for them would face a similar challenge. 
Therefore, several oversight agency staff cite the lack of a training curriculum for oversight staff as a challenge to their effectiveness. For example, officials from eight oversight agencies stated that the training they had received in transit operations, accident investigations, and other areas was beneficial, but they had not received any training on how to perform oversight functions. Although many oversight agency officials acknowledged that they felt the training that had been made available to them either by FTA, the Transportation Safety Institute (TSI), or the National Transit Institute (NTI) had been adequate, officials from 17 of 24 oversight agencies with whom we spoke stated that they were somewhat unsure of which courses they should take to be effective in their oversight role. Furthermore, although FTA provides training to state oversight agency staff (either on their own or through TSI), and encourages state oversight agencies to seek training opportunities, FTA does not pay staff to travel to these courses. Also, oversight agencies must pay their own tuition and travel expenses for courses not provided by FTA or TSI. Officials from 10 of the 24 oversight agencies with whom we spoke cited a lack of funds as one reason why they could not attend training they had hoped to attend. Also, officials from all 24 oversight agencies stated that, if FTA provided some funding for them to travel to training or paid tuition for training they wanted to attend, it would allow the oversight agencies to spend their limited resources on direct oversight activities, such as staff overtime, travel expenses to visit transit agencies, or hiring contractors. Several oversight agency officials also cited the example of other DOT agencies that provide free training or pay for state staff to travel to attend training. For example, 30 states participate in FRA’s State Rail Safety Participation Program. 
These states have inspectors who FRA has certified to enforce FRA safety regulations. FRA pays for their initial and ongoing classroom training and state staff’s travel to this training. In addition, the federal agency regulating pipelines, PHMSA, authorizes state-employed inspectors to inspect pipelines in many states. PHMSA also recently paid for two inspectors from each state to attend training when it instituted a new inspection approach. Officials from both FRA and PHMSA stated that providing funding to states to train their employees helps the federal agencies more effectively carry out their enforcement activities, easing the states’ burden of paying to enforce federal regulations. For the first time, FTA paid for oversight agencies’ personnel to travel to attend a special meeting in June 2006 in St. Louis, where FTA provided technical assistance and shared best practices in meeting the requirements of the revised FTA rule. FTA officials agree that they have not provided training specifically pertaining to oversight activities or provided a recommended training curriculum to oversight agencies, but stated that it would not be difficult to take these steps. FTA officials told us that they considered addressing the lack of consistency in oversight agency staff qualifications when they were revising FTA’s rule in 2005; however, they stated they did not have the legal authority to direct states to require certain education, experience, or certifications for oversight agency staff. Furthermore, these officials noted that, despite the lack of formal requirements, FTA checks during its audits to ensure oversight agency personnel are adequately trained, and has recommended in five instances that oversight agency staff take additional training. They also stated that FTA could issue guidance or recommendations to oversight agencies about the level of training their oversight staff should have. 
In addition to concerns about training, oversight agencies were unsure they had sufficient numbers of staff to adequately oversee a transit agency’s operations. Officials at 14 of 24 oversight agencies with whom we spoke stated that more staff would help them do their job more effectively. Officials from 11 oversight agencies told us they devoted the equivalent of less than one person working half-time on oversight, and, in some cases, described oversight as a “collateral duty.” See table 1 for the number of personnel oversight agency representatives estimated their agencies dedicate to oversight responsibilities. While in some of these instances the transit agencies overseen are small, some of the transit agencies with the highest ridership levels have similar levels of oversight. For example, one state that estimated it devotes 0.1 full-time equivalent (FTE) to oversight program functions is responsible for overseeing a major transit agency that averages nearly 200,000 daily passenger trips. This state supplements its staff time with the services of a contractor, mainly to perform the triennial audits of the transit agency. Also, one state that estimated devoting 0.5 FTE to oversight functions is responsible for overseeing five transit agencies (including two systems not yet in operation) in different cities, making it difficult to maintain active oversight when its responsibilities are so spread out. As FTA resumes its audit schedule, it would be practical for FTA to focus on this issue. (See app. II for information on estimated FTE and transit system information for each state safety oversight agency and related transit agency). Another challenge facing the program is how TSA and its rail inspectors might affect oversight of transit security. 
As I mentioned earlier, TSA has regulatory authority over transportation security and, according to TSA officials, has hired 100 rail inspectors, who are to monitor and enforce compliance with rail security directives TSA issued in May 2004. However, of the officials at 24 oversight agencies with whom we spoke, officials at 20 agencies stated they did not have a clear picture of who was responsible for overseeing transit security issues. Similarly, officials at 14 of 37 transit agencies were also unsure of lines of responsibility regarding transit security oversight. Several state oversight agencies were particularly concerned that TSA's rail inspectors would be duplicating their role in overseeing transit security. One oversight agency official stated it would be more efficient if TSA and oversight agency staff audited transit agencies' security practices at the same time. TSA staff reported hearing similar comments from oversight agencies; FTA program staff and TSA rail inspector staff both indicate that they are committed to avoiding duplication in the program and to communicating their respective roles to transit and oversight agency officials as soon as possible. However, because TSA is still developing its program, there is currently no formally defined role for TSA in the State Safety Oversight program, and TSA has not determined the roles and responsibilities for its rail inspectors. While FTA's rule discusses requirements for a transit agency's security plan, it does not discuss TSA's specific role in the program, and both TSA and FTA officials stated that exactly how TSA would participate in the program had yet to be determined. Nevertheless, the officials added that they are working together to ensure inspection activities are coordinated, thereby fostering consistency and minimizing disruption to rail transit agency operations.
For example, in May 2006, TSA's director of the rail inspector program reported that it had designated 26 rail inspectors as liaisons to state oversight agencies. Also, these TSA rail inspectors attended a training session where FTA presented information on the State Safety Oversight program, and they have contacted 13 oversight agencies to begin discussions on how they can coordinate activities.

Mr. Chairman, this concludes my statement. I plan to include recommendations addressing these challenges in the report we expect to issue next week. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time.

For further information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Ashley Alley, Catherine Colwell, Colin Fallon, Michele Fejfar, Joah Iannotta, Stuart Kaufman, Joshua Ormond, Tina Paek, Stephanie Purcell, and Raymond Sendejas.

Three rail fixed guideway transit systems in the United States—the Port Authority Transit Corporation (PATCO) in Philadelphia, MetroLink in St. Louis, and the Washington Metropolitan Area Transit Authority (WMATA) in Washington, D.C.—cross state lines and require the collaboration of multiple oversight agencies to run the State Safety Oversight program; alternatively, states can agree that one state will be responsible for oversight of the transit system. Each of these multi-state transit systems has a different structure to handle oversight responsibilities. The oversight programs in Philadelphia and St. Louis have both developed strategies to centralize decision making, streamline collaboration, and respond promptly to safety and security audit findings.
In contrast, the Tri-State Oversight Committee (TOC), which serves as the oversight agency in the District of Columbia area, requires majority decision making by the committee's six members, including at least one member from each jurisdiction. TOC has experienced difficulty obtaining funding, responding to FTA information requests, and ensuring audit findings are addressed.

Each multi-state oversight program varies in structure, and each performs oversight responsibilities differently. In Philadelphia, authority to serve as the oversight agency was delegated to one of the two state agencies: the Pennsylvania Department of Transportation (PennDOT) agreed to allow the New Jersey Department of Transportation (NJDOT) to serve as the sole oversight agency for the PATCO heavy rail transit line. MetroLink in St. Louis is subject to oversight from both Illinois (through the St. Clair County Transit District) and Missouri (through the Missouri Department of Transportation); the two organizations share oversight duties. Finally, TOC, which is composed of multiple representatives from each of three jurisdictions (Virginia, Maryland, and the District of Columbia), provides oversight for WMATA.

The PATCO Speedline is a heavy rail line serving about 38,000 riders daily that links Philadelphia to Lindenwold, New Jersey. Most of PATCO's track is in New Jersey, and 9 of the 13 stations are in New Jersey. Until early 2001, safety and security oversight functions were shared by Pennsylvania and New Jersey through the Delaware River Port Authority (DRPA), a regional transportation and economic development agency serving both southeastern Pennsylvania and southern New Jersey. When DRPA implemented organizational and functional changes, its leadership no longer believed that DRPA could perform its role as the designated oversight agency without facing a conflict of interest.
As a result, Pennsylvania and New Jersey agreed to have NJDOT replace DRPA as the oversight agency. This arrangement allows the oversight agency to take corrective action without seeking additional levels of approval from Pennsylvania, although the oversight agency does keep Pennsylvania informed of its activities. Also, Pennsylvania provides some support to NJDOT by having PennDOT perform oversight functions for the stations, passageways, and concourses located in Pennsylvania. PennDOT reports any deficiencies or hazardous conditions that may be noted during the performance of oversight directly to New Jersey. Through meetings or other means of communication, the follow-up actions may be performed by the Pennsylvania oversight agency in a supporting role or directly by New Jersey. New Jersey currently devotes two full-time staff members and one part-time staff member to its oversight program, and while these staff members must oversee several transit systems, including PATCO, their sole responsibilities are for safety and oversight functions. The St. Louis MetroLink is a light rail line between Lambert–St. Louis International Airport in St. Louis and Scott Air Force Base outside Shiloh, Illinois. Service was initiated in 1993, at which time the system included about 16 miles of track in Missouri and about 1.5 miles of track in Illinois. Because so little track was in Illinois, Illinois officials agreed to allow the Missouri Department of Transportation to provide safety and security oversight for the entire system. However, in 2001, MetroLink opened a 17.4-mile extension in Illinois, which roughly equalized the amount of track in both states. Because of this, the states agreed that it was appropriate for Illinois to play a greater role in safety and security oversight, and Illinois designated the St. Clair County Transit District as its oversight agency. St. Clair is one of the few non-state-level agencies to be an oversight agency. 
The involvement of two separate oversight agencies could create challenges to effective implementation, but the agencies have taken steps to ensure close coordination. First, the Illinois and Missouri oversight agencies have agreed to use only one uniform safety and security standard across the entire MetroLink system. According to area officials, this arrangement creates consistency throughout the system and allows both agencies to perform their oversight functions in a consistent manner. In addition, the agencies use a single contractor who is responsible for the triennial audit. All other work is performed by the Illinois and Missouri oversight agencies. Finally, staff from the two oversight agencies coordinate very closely and each have centralized leadership. Specifically, there is one full-time employee in Missouri who devotes 90 percent of his time to safety and security oversight activities. Illinois has several employees who devote smaller percentages of their individual time to the program, but the Managing Director is primarily responsible for coordinating with Missouri. MetroLink, in turn, indicated that responding to state safety oversight directives is a priority, and the agency works quickly to implement changes. WMATA operates a heavy rail system within the District of Columbia, Maryland, and Virginia. The states and the District of Columbia decided to carry out their oversight responsibilities through a collaborative organization managed by TOC. TOC is composed of six representatives— two each from Maryland, Virginia, and the District of Columbia. All of the representatives have other primary duties, and their activities on TOC are collateral to these other daily duties, as is the case with staff at several other oversight agencies. TOC does not have any dedicated staff, and TOC members have limited rail operational experience. 
To gain access to additional experience and expertise in rail oversight, TOC contracts with a consultant to provide technical knowledge, perform required audits of WMATA, and ensure that audit recommendations are completed. In addition, TOC funding comes from, and must be approved by, each of the jurisdictions every year. The Washington Council of Governments processes TOC funds and handles its contracting procedures. These arrangements result in a lengthy process for TOC to receive its yearly funding and process its expenses.

The State Safety Oversight programs in Philadelphia and St. Louis have attempted to streamline their decision making, while TOC has a more collaborative process. Philadelphia and St. Louis have both developed strategies to centralize decision making and streamline collaboration, albeit through different structures. Because Pennsylvania granted New Jersey the authority to act as the oversight agency for all of PATCO's territory, PATCO has to interact with only one oversight agency's staff. New Jersey also has in-house staff dedicated to the State Safety Oversight program, which helps to ensure continuity, facilitates communication, and provides PATCO with one set of contacts to work with on the implementation of any new safety or security processes. Although St. Louis has two agencies providing safety oversight, both oversight agencies have made it a priority to ensure that they are providing consistent information to the transit agency, and they coordinate their activities so MetroLink is not burdened by multiple contacts about the same issue. To do this, the Missouri and Illinois representatives stay in close contact with each other. Both oversight agencies stated they have in-house staff dedicated to safety and security oversight, and the agencies have very good working relationships. Oversight agency staff admitted that St. Louis could face challenges in the future if staff turned over in either agency and new employees did not establish a similar working relationship. In addition, officials indicated that disagreements between oversight agency staff over safety or security standards, or over how to enforce the existing standards, would be highly problematic. However, officials in the Illinois and Missouri oversight agencies, as well as at MetroLink, thought that the current arrangements have produced one set of standards, good communication, and effective coordination. Both MetroLink and oversight agency staff in St. Louis credited each other with creating an environment where this system of multiple oversight agencies could work well.

In contrast, TOC has implemented a less streamlined process for making decisions, which, according to FTA and TOC officials, may have contributed to the difficulties it has had in responding to FTA information requests. On June 15, 2005, FTA notified TOC that it would perform TOC's audit in late July 2005. FTA requested information prior to the audit to make the best use of its time on-site. TOC did not submit the requested State Safety Oversight program materials despite several FTA requests and FTA's extension of the audit to a later date. At the end of August, FTA initiated its audit even though it had not received the requested information, but it was not able to complete the audit until the end of September, when it received all requested materials. FTA's Final Audit Report to TOC cited 10 areas for improvement and provided TOC 60 days to resolve these issues. According to FTA, TOC resolved one issue within the time period. FTA held a follow-up review with TOC in mid-March to check on the status of the remaining areas for improvement.
As of June 2006, FTA was evaluating how many of the remaining audit findings remained open, although FTA stated that TOC had created a detailed set of internal operating procedures to address many of FTA’s findings and concerns. In addition, TOC representatives stated that some of the areas for improvement FTA found were complicated issues, such as reviewing WMATA’s accident investigation procedures and approving modifications, and could not be addressed within the 60 days FTA initially allowed. TOC staff emphasized that, although WMATA was sometimes slow to respond to TOC audit recommendations or information requests, they were pleased with their relationship with WMATA and that WMATA was responsive to TOC. Similarly, FTA officials stressed that they recognized and appreciated the effort TOC had undertaken in addressing FTA’s findings. TOC staff credited WMATA with helping TOC develop a matrix to track outstanding recommendations and agreeing to meet via conference call on at least a bi-weekly basis to ensure the issues are addressed. Also, TOC members stated that part of the reason they were slow to respond to FTA’s initial requests was that TOC had spent all its allocated funds for the year and, consequently, they had to temporarily stop working with the consultant who had conducted its audits of WMATA and maintained their files. According to TOC officials, since the process for acquiring additional funding would require approval from all three jurisdictions represented on TOC, it was not feasible to obtain additional funding quickly. In addition, TOC cannot take any action without a majority of its members, and at least one member from each jurisdiction, approving the action. Reaching such majority agreements can be time consuming since all members of TOC have other primary responsibilities. This is especially a concern when quick decisions are necessary, such as responding to FTA’s audit recommendations. 
TOC officials cited several challenges in accomplishing their mission, including the lack of a dedicated and permanent funding source, the lengthy process required to obtain approval on planning and implementation of corrective actions, and limited staff time. They also stated that they believed TOC and WMATA receive more scrutiny than other transit and oversight agencies because of their location in the District of Columbia and their proximity to FTA's headquarters staff. To address these challenges, the chair of TOC stated that she planned to spend additional time overseeing WMATA and hoped to find ways to streamline the administrative and funding processes that TOC must navigate. Hiring a full-time administrator, or designating a TOC member to serve in a full-time capacity, could help solve some of these issues. However, funding this position could be a challenge, and the administrator would need to have decision-making authority to be effective and act quickly.

[Appendix II table: each state safety oversight agency, its related transit agencies, and the estimated full-time equivalents (FTE) each dedicates to oversight. The table could not be reproduced legibly here; see appendix II of the report for the complete data.]

Notes: Because we were not able to speak with the oversight agency, FTE data was provided by FTA. Unlinked passenger trips and directional route miles represent the totals for all systems within a transit agency. According to agency officials, the ridership data presented in this table represent a year when the monorail was out of service for an extended period and do not reflect the normal use of the system. In prior years the number of annual unlinked passenger trips exceeded about 2 million.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. rail transit system is a vital component of the nation's transportation infrastructure, carrying millions of people daily. Unlike most transportation modes, rail transit receives its safety and security oversight from state-designated oversight agencies following Federal Transit Administration (FTA) requirements. In addition, in 2001, Congress passed the Aviation and Transportation Security Act, giving the Transportation Security Administration (TSA) authority for security over all transportation modes, including rail transit. This testimony is based on ongoing work for this Subcommittee's parent committee, the House Committee on Transportation and Infrastructure. I describe (1) how the State Safety Oversight program is designed; (2) what is known about the impact of the program on rail safety and security; and (3) challenges facing the program. I also provide information about oversight of transit systems that cross state boundaries. To address these issues, we reviewed program documents and interviewed stakeholders, including officials from FTA, TSA, the National Transportation Safety Board, and the American Public Transportation Association. We also interviewed officials at 24 of the 25 state oversight agencies and 37 of the 42 transit agencies covered by FTA's program across the country. FTA designed the State Safety Oversight program as one in which FTA, other federal agencies, states, and rail transit agencies collaborate to ensure the safety and security of rail transit systems. FTA requires states to designate an agency to oversee the safety and security of rail transit agencies that receive federal funding. Oversight agencies are responsible for overseeing transit agencies, including reviewing transit agencies' safety and security plans. While oversight agencies are to include security reviews as part of their responsibilities, TSA also has security oversight authority over transit agencies.
Officials from 23 of the 24 oversight agencies and 35 of the 37 transit agencies with whom we spoke found the program worthwhile. Several transit agencies cited improvements through the oversight program, such as reductions in derailments, fires, and collisions. While there is ample anecdotal evidence suggesting the benefits of the program, FTA has not definitively shown the program's benefits and has not developed performance goals that would allow it to track the program's performance, as required by Congress. Also, because FTA was reevaluating the program after the September 11, 2001, terrorist attacks, FTA did not keep to its stated 3-year schedule for auditing state oversight agencies, resulting in a lack of information for tracking the program's trends. FTA officials recognize it will be difficult to develop performance measures and goals to help determine the program's impact, especially since fatalities and incidents involving rail transit are already low. However, FTA has assigned this task to a contractor and has stated that the program's new leadership will make auditing oversight agencies a top priority. FTA faces some challenges in managing and implementing the program. First, expertise varies across oversight agencies. Specifically, officials from 16 of 24 oversight agencies raised concerns about not having enough qualified staff. Officials from transit and oversight agencies with whom we spoke stated that oversight and technical training would help address this variation. Second, transit and oversight agencies are confused about the role oversight agencies are to play in overseeing rail security, since TSA has hired rail inspectors to perform a potentially similar function, which could result in duplication of effort.
The naval shipyards are highly industrialized, large-scale operations that provide maintenance for ships and submarines. The naval shipyards are essential to national defense and fulfill the legal requirement for the Department of Defense to maintain a critical logistics capability that is government owned and operated to support an effective and timely response for mobilization, national defense contingency situations, and other emergency requirements. The naval shipyards were designed to build wind- and steam-powered ships, which reduces their efficiency in repairing today’s modern nuclear-powered ships. They range in age from 109 years to 250 years (see figure 1). The naval shipyards provide depot-level maintenance, which involves the most comprehensive and time-consuming maintenance work, including ship overhauls, alterations, refits, restorations, nuclear refuelings, and deactivations—activities crucial to supporting Navy readiness. This maintenance is performed during periods designated in the Navy’s Optimized Fleet Response Plan, a carefully orchestrated operational schedule of maintenance, training, and deployment periods for the entire fleet. It is designed to maximize the fleet’s operational availability to combatant commanders while ensuring adequate time for training and maintenance. We reported in 2016 that successful implementation of the Optimized Fleet Response Plan depends, in part, on the shipyards completing maintenance on time and that maintenance delays reduce the time that ships are available for training and operations. This means it is essential to the Navy’s ability to maintain readiness and support operational needs that the shipyards be as efficient as possible. Capital investment refers to expenditures for shipyard facilities and equipment, including the repair, construction, and maintenance of real property, among other activities. 
Capital investment projects at Navy facilities are funded primarily through Military Construction and Operation and Maintenance appropriations. Military Construction projects are construction, development, conversion, or extension projects of any kind, including repair work. Military Construction appropriations are used to fund projects costing more than $1 million, while Operation and Maintenance funds are used for projects costing less than $1 million. Special projects are restoration and modernization projects with funded costs exceeding $750,000 in which the portion of work that is classified as construction is under $1 million. Operation and Maintenance funds are used for special projects. Equipment projects are those associated with the installation of equipment in facilities. Where non-structural work—including the provision of the equipment—is required on real property, the project is financed with funds supporting the procurement of the equipment. Where structural changes are required, those costs are classified and funded as construction. The Navy acknowledges that there has been a history of under- investment in shipyard restoration and modernization needs. Recognizing this issue, Congress passed a law in fiscal year 2007 that requires the Secretary of the Navy to invest in the capital budgets of the Navy depots a total amount equal to not less than 6 percent of the average total combined maintenance, repair, and overhaul workload funded at all the Navy depots for the preceding three fiscal years. In fiscal year 2008, the Navy committed to increased capital investment to comply with the law and to improve the overall material condition of these facilities. In 2013, pursuant to a statutory mandate, the Navy developed a plan to improve its shipyard facilities and estimated that it would take 17 years (until fiscal year 2030) to resolve the backlog of maintenance and infrastructure repair that existed at the time. 
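As a back-of-the-envelope illustration of the statutory 6 percent floor described above, the calculation can be sketched in a few lines of Python; the workload figures used here are hypothetical placeholders, not actual Navy depot data.

```python
# Sketch of the capital investment floor described above: at least 6 percent
# of the average maintenance, repair, and overhaul workload funded at the
# Navy depots over the preceding three fiscal years.
# All dollar figures below are hypothetical, for illustration only.

MINIMUM_RATE = 0.06

def minimum_capital_investment(prior_workloads):
    """Return the capital investment floor given the preceding three
    fiscal years' funded workload totals (same currency units in and out)."""
    if len(prior_workloads) != 3:
        raise ValueError("expected workload totals for the preceding three fiscal years")
    return MINIMUM_RATE * (sum(prior_workloads) / 3)

# Hypothetical workloads, in billions of dollars, for the three prior years:
floor = minimum_capital_investment([4.0, 4.2, 4.4])
print(round(floor, 3))  # 0.252, i.e., 6 percent of the $4.2B average
```

The floor is a minimum, not a target: actual investment in any year may exceed it, and the three-year average smooths out workload swings between fiscal years.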
Since fiscal year 2007, total shipyard capital investment has increased by about 35 percent in inflation-adjusted 2016 dollars, as shown in figure 2. Capital investment at the shipyards has increased at a pace similar to that of overall shipyard funding, which has increased by about 34 percent over the same period, after adjusting for inflation. A number of Navy organizations have a role in determining the level of capital investment to be made in the shipyards (see figure 3). The Office of the Chief of Naval Operations (OPNAV) allocates the funding for overall capital investment in the shipyards. Naval Sea Systems Command (NAVSEA) determines which capital investment projects are most critical to enable the shipyards to continue operations. Those projects are planned by personnel from Naval Facilities Engineering Command (NAVFAC), using funds provided by the Commander, Navy Installations Command (CNIC). The projects then go through the Shore Mission Integration Group process, led by CNIC, where they compete against other Navy priorities for funding. The group reviews proposed shipyard projects to determine whether they are necessary and appropriate and then prioritizes them. After all the proposed projects have gone through this process, the result is a ranked list of approved projects; the Navy then allocates funds for those projects in priority order until it reaches the funding level set by OPNAV. Although the Navy has committed to increasing shipyard capital investment and implementing improvement plans, the physical condition of the shipyards’ facilities remains poor according to Navy data, and the cost to address restoration and modernization backlogs is increasing. For example, we estimate that it will take at least 19 years to clear the backlog (through fiscal year 2036), 6 years longer than the Navy estimated in 2013. Meanwhile, the shipyards’ drydocks also require restoration and modernization. 
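The roughly 35 percent growth figure above is stated in inflation-adjusted 2016 dollars. A minimal sketch of that kind of constant-dollar comparison, using hypothetical deflator values rather than actual price-index data, might look like this:

```python
# Sketch of an inflation-adjusted (constant-dollar) growth comparison like
# the one above. Deflator values here are hypothetical placeholders, not
# actual price-index data.

def to_constant_dollars(nominal, year_deflator, base_deflator=1.0):
    """Convert a nominal amount to base-year dollars (e.g., 2016 dollars)."""
    return nominal * (base_deflator / year_deflator)

def real_growth_percent(start_nominal, start_deflator, end_nominal, end_deflator):
    """Percent change after expressing both amounts in base-year dollars."""
    start_real = to_constant_dollars(start_nominal, start_deflator)
    end_real = to_constant_dollars(end_nominal, end_deflator)
    return 100.0 * (end_real - start_real) / start_real

# Hypothetical FY2007 vs. FY2016 investment (millions) and deflators
# (FY2016 is the base year, so its deflator is 1.0):
growth = real_growth_percent(165.0, 0.85, 260.0, 1.0)
print(round(growth, 1))
```

Comparing nominal dollars across a decade would overstate real growth; expressing both endpoints in the same base year is what makes the 35 percent figure meaningful.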
The average age of capital equipment at the shipyards exceeds its expected useful life, and the overall condition of this equipment may be deteriorating. GAO’s analysis of data on Navy facilities found that the average rating for the overall condition of facilities at the Navy’s four shipyards remains poor. Specifically, the shipyards’ average condition rating—which measures the physical condition of a facility—has remained essentially flat and in the “poor” category, with an average rating of 71 in fiscal year 2013 and an average rating of 72 at the end of fiscal year 2016 on the 100 point scale used by the Navy (see figure 4). Moreover, in fiscal year 2016 the Navy rated about 25 percent of all shipyard facilities below 60, and therefore categorized them as being in failing condition. Furthermore, as of fiscal year 2016, the Navy categorized one in every five failing naval shipyard facilities as a facility that was critical to accomplishing the shipyard’s repair mission. Navy data also suggest that the shipyards may have about 1.2 million square feet of condemned, uninhabitable, or otherwise unusable facility space. According to Navy data, four dozen shipyard buildings across the four shipyards have been condemned or are unusable for ship repair activities, including some in prime waterfront locations that shipyard officials said could be used to improve the efficiency of ship repair processes. Navy shipyard officials noted that the shipyards were not designed for their current mission and that the layout, size of facilities, pier space, utilities, and safety systems contribute to reducing the efficiency of the shipyards for repair work. The Navy has reported that the inefficiencies of the current layout limit the yards’ abilities to improve their cost and schedule performance. For example, shipyard officials at Puget Sound stated that workers conducting ship repair work cannot traverse one shop building end-to-end without changing floors. 
The Navy has other measures it uses to assess facilities, in particular the facility configuration rating. This rating measures the facility’s suitability to function as intended or required for its mission. However, we did not assess the average configuration of the shipyard facilities, because the Navy had not resolved an issue concerning the reliability of the configuration data that we identified in 2011. Specifically, the configuration rating in the Navy’s database defaults to 100 when no rating has been entered into the system. Our analysis of the Navy’s fiscal year 2016 configuration data showed that 928 of 1300, or 71 percent, of the facilities had ratings of 100. Shipyard officials told us that most of these ratings were likely the result of a default rating and did not represent actual assessments. As we previously described, this use of a default rating creates a false result that suggests these facilities are perfectly configured, when in reality their status has not been assessed or recorded in the Navy’s database. This false result also has the effect of underestimating shipyard restoration and modernization costs, since the configuration ratings are used to inform these estimates. Given these concerns about the reliability of the configuration ratings, we did not determine trends in the average configuration of shipyard facilities since the Navy began implementing its 2013 facilities plan. We recommended in 2011 that the Navy develop a plan to ensure the accuracy of its condition and configuration data, but as of July 2017 the Navy’s plan had not corrected the issue with the configuration data. We believe our earlier recommendation remains valid. The shipyard facilities’ restoration and modernization backlog has continued to grow over the past 5 fiscal years. 
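The data-reliability issue described above, in which configuration ratings default to 100 when no assessment has been entered, is a common pitfall when analyzing administrative databases. A minimal sketch of the kind of check implied by our analysis, using an illustrative rating list rather than actual Navy data:

```python
# Sketch of a default-value reliability check: an exact match to the known
# database default (here, 100) likely means "never assessed," not "perfect."
# The rating list below is illustrative, not actual Navy facility data.

DEFAULT_RATING = 100

def likely_default_share(ratings):
    """Fraction of ratings equal to the database default value."""
    if not ratings:
        return 0.0
    return sum(1 for r in ratings if r == DEFAULT_RATING) / len(ratings)

# Mirroring the counts in the text: 928 of 1,300 facilities rated exactly 100.
share = likely_default_share([DEFAULT_RATING] * 928 + [75] * 372)
print(round(share * 100))  # 71 (percent), matching the figure in the text
```

A high share of exact-default values does not prove every such rating is spurious, but it is strong evidence that the field cannot distinguish "assessed as perfect" from "never assessed," which is why we excluded the configuration data from trend analysis.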
The Navy defines its restoration and modernization backlog as the estimated costs to restore facilities degraded by inadequate sustainment, excessive age, natural disaster, fire, or accident, among other things; to renovate or replace existing facilities to implement new or higher standards or accommodate new functions; or to replace building components that typically last more than 50 years. According to CNIC estimates, the funding required to eliminate the facilities restoration and modernization backlog at the four shipyards increased by 41 percent between fiscal year 2011 and fiscal year 2016, from a $3.45 billion backlog to a $4.86 billion backlog. For comparison, CNIC officials told us that the entire Navy facilities restoration and modernization backlog over the same period increased at a slower pace of about 14 percent, from a backlog of $37.45 billion in fiscal year 2011 to a backlog of $42.87 billion in fiscal year 2016. Given the current average funding levels for capital facilities that the shipyards have received from the Navy of approximately $260 million per year, we calculated that it would take the Navy at least 19 years (through fiscal year 2036) to eliminate the $4.86 billion backlog of facilities restoration and modernization that the shipyards faced at the end of fiscal year 2016. This contrasts with the estimated 17 years (through fiscal year 2030) that the Navy estimated it would take to eliminate the shipyards’ restoration and maintenance backlog at the time it published its shipyard improvement plan in 2013. Further, NAVSEA officials told us that addressing this restoration and modernization backlog does not build additional shipyard capacity and capability—it only allows the shipyards to remain at their present levels of capacity and capability. Any new or emergent mission requirements would further increase the time required to clear the shipyards’ facilities restoration and maintenance backlog, according to NAVSEA officials. 
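The backlog arithmetic above can be reproduced directly from the figures in the text: a $4.86 billion backlog funded at about $260 million per year takes at least 19 years to clear. A minimal sketch:

```python
import math

# Reproduces the backlog arithmetic from the text: whole years needed to
# clear a fixed backlog at a flat annual funding rate, rounded up.

def years_to_clear(backlog_dollars, annual_funding_dollars):
    """Whole years needed to eliminate the backlog at a constant rate."""
    return math.ceil(backlog_dollars / annual_funding_dollars)

years = years_to_clear(4.86e9, 260e6)   # $4.86B backlog, ~$260M per year
print(years)  # 19

# The 41 percent backlog growth cited in the text (FY2011 to FY2016):
growth_pct = round(100 * (4.86 - 3.45) / 3.45)
print(growth_pct)  # 41
```

Note the simplifying assumptions: flat funding, no further backlog growth, and no new mission requirements; as NAVSEA officials observed, any of these would lengthen the estimate, which is why 19 years is a floor.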
We found that the shipyards’ drydocks have a number of unaddressed restoration and modernization needs. Maintenance personnel use drydocks to safely access the underside of ships and submarines, and drydocks are among the most critical facilities at the shipyards. The shipyards rely on 18 aged drydocks to perform maintenance on the Navy’s current fleet of aircraft carriers and submarines. Our analysis of Navy data shows that the average age of Navy drydocks is about 89 years. The oldest drydock in current use was completed in 1891 and the newest was completed in 1962. Aging drydocks pose risks to the shipyards’ ability to perform their depot repair mission uninterrupted and ultimately to the Navy’s ability to provide required aircraft carrier and submarine presence to combatant commanders. These risks result from flooding and seismic vulnerabilities and the potential for aging drydocks to deteriorate, among other things. Examples of key drydock shortcomings identified by the Navy include obsolescence, flooding vulnerabilities, and seismic vulnerabilities.

Sidebar: Workarounds for Drydock Obsolescence. Three of Puget Sound’s six drydocks and one of Norfolk’s five drydocks require the use of superflooding as a workaround to service the Navy’s current fleet of submarines. Superflooding forces water into the drydock to raise the water level higher than the tides to obtain the necessary clearance for the submarine to move into the dock. According to shipyard officials at Puget Sound, superflooding can result in the flooding of drydocks’ electrical and service galleries (shown below), which were not designed to be flooded and therefore have to be repaired because of rust and seawater corrosion.

Drydock obsolescence. Several of the shipyards’ drydocks are not able to support existing submarine classes, including the Los Angeles-class attack submarine.
Other drydocks can support vessels only when assisted by particular equipment or environmental conditions such as tidal schedules (see sidebar). Our analysis shows that as the Navy retires existing aircraft carriers and submarines and replaces them with newer classes, the shipyards will become increasingly constrained in scheduling and performing maintenance using their existing drydocks. Only 11 of the 18 drydocks in use are configured to perform maintenance on the newer ship and submarine classes being procured by the Navy, such as the larger Ford-class aircraft carrier and the Virginia-class submarine (see figure 5). According to a June 2017 draft drydock study from the Navy, without making new investments in drydocks, the shipyards will increasingly encounter scheduling delays waiting for access to the 11 drydocks that are configured for the newer classes. Additionally, Drydock 1 at Portsmouth Naval Shipyard and Drydock 3 at Pearl Harbor require buoyancy assistance equipment to provide additional lift to reduce the submarine’s waterborne draft to move it into the drydock. In its 2017 draft drydock study, the Navy reports that, without the use of buoyancy assistance equipment, these two drydocks could no longer dock any of the Navy’s current submarines. While this workaround allows the shipyards to repair some current classes of submarines, the Navy’s study says it will not be sufficient in the future for newer classes. Additionally, shipyard officials said that Drydock 3 at Puget Sound can move Los Angeles-class submarines in or out only after they have had several tons of weight removed and only during a high tide. This drydock is primarily used for submarine reactor compartment disposal.

Flooding vulnerabilities. Four of Norfolk’s five drydocks face flooding threats from extreme high tides and storm swells and average one major flooding event per year.
According to officials, drydock flooding during certain delicate depot maintenance tasks risks personnel safety, catastrophic damage to the ships being repaired, and potential environmental impacts. For example, the Navy reported in 2009 that a drydock at Norfolk required emergency repairs to prevent flooding while the USS Tennessee (SSBN-734) was undergoing maintenance. According to a 2009 Navy incident report, several days of high tides and winds, coupled with multiple leaks in the drydock’s granite block joints, resulted in the drydock flooding at an estimated rate of 3,000 gallons per minute before workers could repair it.

Seismic vulnerabilities. The Navy’s drydocks were not designed to accommodate the risks posed by seismic events. For example, at Puget Sound Naval Shipyard—located in an area identified by the U.S. Geological Survey as a “High Seismic Hazard Zone”—a 7.0 magnitude or greater earthquake could damage or ruin the only drydock on the west coast that is capable of performing maintenance on aircraft carriers. As recently as 2001, the Puget Sound region experienced a 6.8 magnitude earthquake.

According to Navy documentation, shipyards’ capital equipment is aging beyond its expected service life and its overall condition appears to be deteriorating, negatively affecting ship and submarine repair work. Capital equipment includes items such as shipyard cranes, sheet metal rollers, plasma cutters, and furnaces. In September 2016, an internal NAVSEA analysis showed that the average age of capital equipment at the four shipyards had risen to 22 years, which is beyond the 15-year average expected useful life that the Navy has calculated for capital equipment. We also observed aging equipment at all four shipyards, including submarine shaft lathes at Puget Sound that had entered service in the 1930s and a plate roller at Portsmouth that was built in the 1950s.
This equipment was still being used to support maintenance on modern nuclear submarines and at times has created impediments to efficiently and effectively completing repair work, according to shipyard officials. Equipment that is beyond its useful life can be inefficient and unreliable, affecting the shipyards’ ability to conduct repair work. Our analysis of data on the repair of Navy equipment found that the number of requests for repair of shipyard equipment is trending upward, from about 13,400 in fiscal year 2008 to about 17,100 in fiscal year 2016, an increase of about 28 percent. This indicates that the shipyards may be incurring costs—such as additional labor hours and repair materials—associated with aging equipment. Moreover, the actual need for repairs may be greater than the number of repair requests indicates, according to shipyard officials, because shop-level employees are reluctant to submit repair requests when there is little hope of obtaining funding for a repair. Unreliable equipment can also result in increased costs and re-work. For example, after it was discovered in 2015 that the analog controls on a furnace used to heat-treat submarine parts to withstand deep sea pressure were reading inaccurately, Norfolk officials were required to re-inspect 10 years’ worth of parts made in that furnace to ensure that they met stringent submarine safety requirements. The shipyards’ capital facilities and equipment are not fully meeting the Navy’s operational needs, in part due to their condition. Maintenance delays partially attributable to inadequate facilities and equipment at the shipyards have led to thousands of lost operational days for submarines and aircraft carriers over the last 16 fiscal years.
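The repair-request trend above is a simple percentage change; the following sketch (illustrative only, using the figures as reported) reproduces the roughly 28 percent increase:

```python
# Shipyard equipment repair requests, as reported in our analysis
fy2008_requests = 13_400
fy2016_requests = 17_100

increase_pct = (fy2016_requests - fy2008_requests) / fy2008_requests * 100
print(f"Increase in repair requests: about {increase_pct:.0f} percent")  # about 28 percent
```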
In addition, the Navy estimates that its future needs will be increasingly affected by the capacity and capability limitations of the drydocks, even without factoring in the increase in fleet size—18 additional attack submarines and 1 additional aircraft carrier—called for in the Navy’s 2016 Force Structure Assessment. We found that the naval shipyards are not fully meeting the Navy’s current operational needs, in part due to the condition of their facilities and equipment. The shipyards’ ability to meet operational needs is measured by their ability to complete maintenance on time and adhere to the maintenance schedule laid out in the Navy’s Optimized Fleet Response Plan. As we previously reported, completing ship and submarine maintenance on time is essential to the Navy’s readiness. Maintenance availabilities that last longer than planned reduce the number of days ships are available for training or operations. When ships stay in the shipyards longer than anticipated, it can lead to a negative cyclic effect that affects other vessels in the fleet (see figure 6). Our analysis shows that facilities and equipment in poor condition can contribute to maintenance delays. Navy shipyard officials noted that there are numerous reasons why the maintenance on ships may be delayed—factors such as parts shortages, labor difficulties, changes in the planned maintenance work, and weather—but agreed that the condition of facilities and equipment is one of those reasons. Our analysis of Navy data shows that, in fiscal years 2000 through 2016, the shipyards completed maintenance periods on schedule only 47 percent of the time for aircraft carriers and 24 percent of the time for submarines (see figure 7). These overruns in maintenance periods resulted in at least 1,300 lost operational days—days that a ship is not available for operations—for aircraft carriers and about 12,500 days for submarines during fiscal years 2000 through 2016 (see figure 7).
Our analysis of Navy maintenance data shows that delays in maintenance periods that began in fiscal year 2015 caused more than a year’s worth of lost operational days for aircraft carriers—the equivalent of losing the use of an aircraft carrier for more than a year. In addition to hindering efficient ship repair, inadequate facilities cost the Navy in other ways. The Navy reports that it has purchased or rented a large number of temporary facilities at every shipyard to provide enough space to complete their mission, and the need for such temporary facilities is growing, according to shipyard officials. In its 2013 facilities improvement plan, the Navy identified 650 temporary shipyard structures across the four shipyards, comprising 561,466 square feet. As recently as February 2017, Puget Sound Naval Shipyard alone reported that it had over 224,000 square feet of relocatable facilities onsite, including approximately 300 temporary trailers. Some “temporary” facilities at the shipyards have been used for decades, and others are double stacked because of a lack of space (see figure 8 for an example). We observed at Pearl Harbor and Puget Sound that the piers used for repair work can also be crowded, and personnel told us that they leave equipment outside because there is no covered storage space available. For example, shipyard officials at Puget Sound told us they currently have a storage space deficit of approximately 400,000 square feet. They reported that, as a result, they store millions of dollars’ worth of equipment and material outside around the shipyard, sometimes uncovered and exposed to the elements. Shipyard officials said this can reduce the lifespan of the equipment, particularly when it is exposed to the saltwater air, which increases the rate of corrosion. According to a 2017 draft Navy study, the current capacity and capability of the shipyards’ drydocks will not support future operational needs.
The Navy projects that the shipyards will be unable to support 73—or about one-third—of 218 maintenance periods planned for the shipyards over the next 23 years, including 5 aircraft carrier and 50 submarine maintenance periods. However, this estimate identifies only maintenance periods missed as a result of drydock capacity and capability issues for the planned fleet of 11 carriers and 70 submarines through fiscal year 2040. NAVSEA officials said that other factors that contribute to missed maintenance periods, such as shipyard workload, workforce, or requirements growth, were not accounted for in this estimate. In its 2017 draft drydock study, the Navy reports that it currently has very little drydock capacity to surge depot-level work or deal with national security contingencies or unanticipated accidents, such as the USS Greeneville’s (SSN-772) collision with a Japanese fishing ship. This is because of the high demand for drydock space, which leaves the Navy with little time between scheduled maintenance periods to do other work. In its 2016 Force Structure Assessment, the Navy released a new force structure goal that called for achieving and maintaining a fleet of 355 ships—up from the previous goal of 308 ships in its 2015 assessment and the current inventory of 276. This assessment calls for increasing the number of planned aircraft carriers from 11 to 12 and the number of attack submarines from 48 to 66 (a 38 percent increase). This proposed increase in fleet size will aggravate shortfalls in drydock capacity, since an increase in the number of ships will lead to an increase in the volume of maintenance the shipyards must perform. In its 2017 draft drydock study, the Navy identified several key drydock shortfalls that hinder the shipyards’ ability to support future operational needs, as previously discussed. For example, none of the existing drydocks can support repairs for the new Ford-class aircraft carrier as the drydocks are currently configured. 
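The proportions in these projections can be checked against the reported counts. The sketch below is an illustrative verification using the figures as reported:

```python
# Maintenance periods the Navy projects the shipyards cannot support,
# per the 2017 draft drydock study
missed, planned = 73, 218
print(f"Unsupported maintenance periods: {missed / planned:.0%}")  # about one-third

# Attack submarine growth called for in the 2016 Force Structure Assessment
current_attack_subs, planned_attack_subs = 48, 66
growth = (planned_attack_subs - current_attack_subs) / current_attack_subs * 100
print(f"Attack submarine increase: {growth:.0f} percent")          # 38 percent
```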
Specifically, Drydock 8 at Norfolk Naval Shipyard and Drydock 6 at Puget Sound Naval Shipyard—the two drydocks currently capable of supporting existing Nimitz-class aircraft carriers—require upgrades in salt water cooling utilities to support maintenance on the new Ford-class aircraft carriers, because the Ford-class carriers are larger and have different equipment. The Navy has plans to begin addressing this issue at Norfolk in fiscal year 2022 and at Puget Sound in fiscal year 2023. Navy officials told us that the Navy has not yet defined its needs for drydocks capable of supporting Ford-class aircraft carriers, but it will need at least one on each coast. Newer versions of the Virginia-class submarines will limit the number of drydocks able to perform maintenance in the future, thereby reducing the capacity available to the fleet. According to the 2017 draft Navy analysis, 17 of the shipyards’ 18 existing drydocks can support maintenance on the current Los Angeles-class attack submarine, and 14 of the 18 can accommodate the current versions of the Virginia-class attack submarine. However, only 11 of the 18 drydocks, in their current state, will be able to accommodate future versions of the Virginia-class submarine with the Virginia Payload Module because of its increased length and loading size. This shortfall is exacerbated at Pearl Harbor Naval Shipyard, which has a drydock that can be divided in two to support simultaneous maintenance of either two Los Angeles-class or two current Virginia-class submarines. Future Virginia-class submarines with the module are so long that they require the full length of the drydock, thereby reducing the space available for maintenance at Pearl Harbor. Shipyard officials noted that this dividing capability is regularly used to respond to immediate, short-notice events, such as ships in need of emergency repairs.
Though the Navy has developed detailed plans for capital investment in facilities and equipment at the shipyards that attempt to prioritize their investment strategies, this approach does not fully address the shipyards’ challenges, in part because the plans are missing key elements. Specifically, the Navy’s plans are missing capital investment goals that would help guide long-term planning, an accounting of all relevant costs, metrics that would allow an assessment of the effectiveness of capital investment spending, and regular management reviews to assess progress. Our previous work has shown that a comprehensive, results-oriented management approach that includes these elements can help organizations remain operationally effective, efficient, and capable of meeting future requirements. DOD has previously used approaches of this kind to address complex, long-standing management challenges, and the Navy’s plans have already incorporated some elements of this approach, such as a well-defined mission statement and a detailed discussion of the issues the plans are intended to address. However, without adopting a management approach for its capital investment needs that includes key results-oriented elements, the Navy risks continued deterioration at its shipyards, hindering its ability to efficiently and effectively support Navy readiness over the long term. Over the last several years, the Navy has developed three capital investment plans intended to help improve the state of the facilities and equipment at the shipyards. In 2013, the Navy released a plan to guide its capital investment for shipyard facilities. The plan discusses eliminating the restoration and modernization backlog of facilities projects and centralizing maintenance operations, among other things. Similarly, the Navy issued a plan to guide its investment in capital equipment in 2015. This plan was intended to identify and address all capital equipment needs.
Finally, the Navy is currently working on a draft drydock plan that is intended to help prioritize the drydock projects the Navy believes are necessary to meet upcoming scheduling challenges. That plan is still in draft form, though officials have told us that they expect it to be released this year. Though the Navy’s capital investment approach has resulted in the development of three capital investment plans, those plans are missing key elements, including the development of analytically-based goals that would help guide long-term planning, a full identification of the shipyards’ resource needs, metrics that would allow the Navy to assess the effectiveness of its capital investment spending in supporting the ability of the shipyards to meet operational needs, regular management reviews of progress, and reporting on progress to key Navy decision makers and Congress. Without incorporating these key results-oriented elements into its approach, the Navy may not be able to address the shipyards’ challenges, namely their poor condition, aging equipment, and mounting facility maintenance backlogs. The Navy’s capital investment plans for shipyard facilities and equipment do not include analytically-based results-oriented goals sufficient to support long-term planning. For example, the 2013 facilities improvement plan stated that it was designed to bring the condition of shipyard facilities up to an average condition rating of 75, to match the average Navy condition rating for facilities. However, the Navy chose this goal based on budget expectations rather than an engineering or operational analysis to determine the condition and configuration the shipyards needed to efficiently and effectively address current and future operational needs.
Navy officials also told us that there are no Navy or DOD criteria for determining what constitutes effective and efficient shipyard facilities, although such criteria are available for more typical installation facilities, such as barracks or dining facilities. The Navy’s 2015 capital investment plan for equipment identified a desired outcome—to address “all” shipyard equipment requirements—but this desired outcome was not based on an analysis of what the shipyards needed to support the Navy’s operational goals. Navy officials stated that the goal of the plan was to reduce the average age of capital equipment by replacing older equipment with newer, modern versions. Over time, this would reduce the average age of capital equipment to better reflect the average expected service life of about 15 years. However, this goal was not provided in the plan, and there was no mention of alternate methods of assessing equipment condition to determine when it would require replacement. Similar to the 2013 facilities plan, the 2015 equipment plan focuses on financial inputs necessary to achieve improvements, instead of relying on an analytically-based objective, and does not specify when the objective of the plan will be fulfilled. We found no analytical basis to suggest that attaining the goals in the 2013 facilities plan and the 2015 equipment plan would allow the shipyards to efficiently and effectively support current or future operational needs. A results-oriented management approach calls for goals in order for the organization and any additional stakeholders to know what end-state they are trying to reach. These goals also inform other elements, such as the development of metrics to assess progress and the identification of necessary resources. Shipyard officials told us that the plans in place could be characterized as lists of projects desired, rather than effective end-state goals. 
In a results-oriented management approach, identifying a specific analytically-based goal or end state is essential for accurately determining the costs of achieving that goal, because different end states could require different shipyard configurations—which in turn would require different facilities and equipment. These differing end states would also likely require different funding levels and timelines. According to the Navy, completing the projects identified to date would allow it to maintain current shipyard capabilities in a steady state. However, completion of these projects would not add to existing shipyard capacity or capability, aside from improvements identified as needed to accommodate new hull types in drydocks. Absent analytically-based goals defining the desired end state, there is no assurance that the current plan goals will allow the Navy to efficiently and effectively meet its operational needs. The Navy has not fully identified the resources necessary to achieve even the desired results expressed in its 2013 facilities plan and 2015 equipment plan. The Navy estimated funding needs for shipyard facilities, equipment, and drydocks in its 2013 facility plan, its 2015 equipment plan, and its 2017 draft drydock study. Altogether, these plans estimate that the Navy will need a total of at least $9.0 billion over the next 12 years—fiscal years 2018 through 2029—to improve the average condition of its shipyard facilities, address drydock needs, and begin to recapitalize its equipment (see figure 9). However, as we have discussed, these estimates are not derived from an analysis of what the shipyards require to efficiently and effectively meet current and future operational needs. We also found that the Navy’s estimates of its shipyard capital investment needs do not account for several potentially costly items, including planning costs and utilities modernization.
In addition, the limited resources devoted to planning for shipyard improvements, combined with the generally poor condition and historic status of the shipyards, mean that even the existing estimates may be understated. Identifying the necessary resources is essential in order to acquire and prioritize the use of those resources. Without identifying the full resources required to address the shipyards’ relevant needs and reach analytically-based goals, decision makers will lack the information needed to support deliberations and determine an appropriate level of resources to allocate to the naval shipyards. The Navy has estimated $3.6 billion in funding needs, or an average of $304 million per year, to address shipyard facility needs and bring shipyard facilities up to an average condition rating of 75, which is still considered “poor.” This amount exceeds the shipyards’ average yearly allotment for facilities of about $260 million by $44 million (a 17 percent increase). The Navy also estimates that it will need $2.0 billion over the next 12 years (or an average of $167 million per year) for capital equipment. This exceeds the average yearly capital equipment funding of about $50 million by about $117 million (a 234 percent increase). Finally, the Navy has estimated $3.4 billion in needs over the next 12 years ($4.1 billion total over 15 fiscal years) to begin mitigating its drydock shortfalls, but the amounts needed per year vary because of the need to accommodate ship maintenance schedules and complete large amounts of work in specific fiscal years. However, these estimates may understate some potentially costly elements.

Planning costs have not been fully identified: The Navy has accounted for planning costs for some of its largest projects—those involving its drydocks—but has not calculated similar costs for the remainder of its future capital investment needs.
According to Navy officials, in-depth planning and engineering are required to repair and modernize industrial facilities while allowing ongoing shipyard operations to proceed, ensuring that adequate preparations are made to support facility improvements and that the necessary utilities are in place, addressing potential historic or regulatory considerations, and ensuring that the location can support the project. The Navy projects that the planning costs associated with its drydock improvements will be at least $284 million over the 12 years—roughly 8 percent of the total project costs—although shipyard officials have noted that planning costs can easily exceed 10 percent of a project’s total cost. This suggests that the planning costs for the $3.6 billion in facilities projects identified by the Navy could increase the total cost of these projects by several hundred million dollars.

Utilities modernization costs have not been fully identified: NAVFAC has identified about $190 million in additional utilities projects through fiscal year 2023 that are not already in one of the other Navy plans, but it has not identified the improvements needed beyond fiscal year 2023. The Navy’s 2013 plan did not include the cost of modernizing utilities, though it noted that efforts were under way to develop cost estimates for their recapitalization. The Navy previously reported in its 2013 facilities plan that shipyards experienced unscheduled utility outages that can disrupt maintenance schedules and lead to increased fuel and labor costs. Navy officials acknowledge the ongoing need to modernize utilities and other wired systems at the shipyards. For example, the fire alarm systems at the shipyards continue to rely on the same bare-wire telegraph technology that was used in the 1800s and early 1900s, which is easily damaged and regularly elicits false alarms.
However, according to officials, they have not determined the full amount of investment that would be required to modernize utilities at the shipyards to provide a stable electrical supply at the proper voltages with fewer unplanned outages.

Regulatory compliance costs may be understated: Shipyard facilities are subject to a variety of regulatory requirements, stemming from both DOD and statutory sources. Like other military installations, shipyards must comply with anti-terrorism, force protection, seismic, and building code requirements. However, given the limited resources devoted to planning, current plans to improve shipyard facilities, equipment, and drydocks do not address the effect of some of these statutory and regulatory requirements. Anti-terror and force protection, seismic, building codes, and other requirements to improve the health and safety of shipyard personnel can increase the amount of funding required to complete capital investment projects, particularly when compliance efforts overlap. For example, DOD regulations require that when the cost of a project reaches 30 percent of the replacement value of the facility, seismic assessments must be conducted, and if a project’s cost exceeds 50 percent of the facility’s replacement value, anti-terror and force protection measures must be included in the project. According to shipyard officials, the DOD requirements that must be met after a project exceeds the 30 percent threshold are sometimes costly enough to make it exceed the 50 percent threshold, and the results of overlapping requirements can be difficult to predict. This can result in the cost of relatively simple projects increasing significantly, as indicated by the example in figure 10, illustrating an actual project at Norfolk Naval Shipyard developed between fiscal years 2010 and 2015. Building 30 at Norfolk Naval Shipyard is used for engineering and is over 120 years old. The costs to bring it up to modern building code were significant.
This example may not be representative of the potential growth of project costs, but the age of shipyard facilities, the extent of historic designations, and the Navy’s acknowledged history of underinvestment at the shipyards highlight the potential for other shipyard facilities to encounter similar issues. This suggests that the Navy’s estimated restoration and modernization backlog of $4.86 billion may actually be understated.

Historic preservation costs may be understated: Our analysis and Navy documents show that dealing with historic facilities also adds cost and complexity to planning for their restoration. The preservation, restoration, or demolition of historic buildings requires additional time and cost to plan, gain necessary approvals, and execute. In its 2013 facility plan, the Navy reported that approximately 70 percent of the shipyard infrastructure was designated as historic; all four shipyards have historic facilities, some because of the age of the facilities and some because of events that took place there. For example, the attack on Pearl Harbor during World War II has resulted in the designation of approximately 3 million square feet of its facilities as historic; this means that the footprint of the historic part of the shipyard exceeds that of its non-historic facilities. Shipyard officials told us that there are several facilities that might be used to more effectively support shipyard operations but that either they cannot be altered because they have been designated as historic or that alterations would require lengthy negotiations over the facilities’ historic designation. For example, officials at Pearl Harbor Naval Shipyard discussed a number of modernization efforts that might be undertaken to improve Navy capabilities in the Pacific but that cannot be completed because the facilities have been designated as historic.
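Returning to the annualized funding figures discussed above, the gaps between estimated needs and historical funding follow from simple arithmetic. The sketch below (illustrative only, using the reported per-year averages) reproduces the 17 percent and 234 percent increases:

```python
# Annualized capital investment needs versus historical funding
# (millions of dollars per year, as reported)
facilities_need, facilities_hist = 304, 260
equipment_need, equipment_hist = 167, 50

fac_gap = facilities_need - facilities_hist   # $44 million per year shortfall
fac_pct = fac_gap / facilities_hist * 100     # about 17 percent above historical funding
eq_gap = equipment_need - equipment_hist      # $117 million per year shortfall
eq_pct = eq_gap / equipment_hist * 100        # 234 percent above historical funding

print(f"Facilities: ${fac_gap}M gap, {fac_pct:.0f} percent increase")
print(f"Equipment: ${eq_gap}M gap, {eq_pct:.0f} percent increase")
```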
The Navy lacks metrics that would help it determine the effectiveness of its capital investment plans and guide long-term planning. With a comprehensive, results-oriented management approach, relevant metrics allow an organization to assess whether it is making progress toward meeting its goals. Our previous work has shown that a suite of metrics can help organizations to assess complicated issues, where one metric may be insufficient or may not capture all relevant information. Our work shows that the Navy tracks the performance of its shipyards in completing ship maintenance on budget and on time but not how facilities and equipment are supporting the shipyards’ performance. Specifically, we found that the shipyards’ primary performance metrics are tied to ship maintenance—for example, how quickly a ship is repaired, how much overtime is used, and cost and schedule overruns—and do not provide a sufficient basis for measuring the effect of capital investments on shipyard performance. The Navy also collects data on its facilities, including the aforementioned configuration and condition assessments, along with data on repairs, utilities outages, and maintenance response times. However, this information is not used to assess the efficacy of the capital investment program. For example, according to NAVSEA officials, the Navy does not monitor when facility or equipment issues contribute to schedule delays or increase maintenance costs (such as when equipment failures prevent work) or the costs associated with deferring investment (e.g., foregone efficiencies, costs to repair obsolete equipment, costs of workarounds, or temporary facility costs). Yet we found that deferring investment can lead to decreased efficiency in other areas. For example, our analysis of equipment repair data found that equipment repair requests have been increasing and that this increase could reflect the effect of deferred maintenance.
Alternatively, investments in modern equipment or facilities can increase efficiency, reduce costs, and improve morale (see sidebar). Until the Navy establishes appropriate metrics and other measures of progress for the shipyards, it will not know if it is reaching its previously developed goals. We found that the Navy does not conduct regular management reviews of activities and metrics that would measure progress toward meeting the goals of its various capital investment plans and encourage accountability and coordination among the stakeholders involved in planning for these capital investments. The Navy conducts annual assessments of capital investment projects as part of its normal budgeting and prioritization processes. However, officials state that they do not regularly review the implementation status of the 2013 facilities plan or the 2015 equipment plan. NAVSEA is responsible for identifying and prioritizing capital investment projects and overseeing the implementation of the 2013 facilities plan. However, according to NAVSEA officials, there is no formal requirement or system to actively manage the implementation of the 2013 facilities plan or to coordinate with shipyard stakeholders such as CNIC or NAVFAC. Officials state that they coordinate with stakeholders as necessary when projects associated with the capital investment plans experience problems but do not report to higher-level Navy decision makers and Congress on the progress in achieving specific objectives in capital investment plans, such as reducing the facilities restoration and modernization backlog, improving the condition and configuration of the shipyards, recapitalizing capital equipment, and reducing the effect that unimproved facilities and equipment have on maintenance delays. Progress is assessed annually during the programming and budgeting process, with emphasis on ensuring that the Navy meets its minimum capital investment requirements (the “6 Percent Rule”) under 10 U.S.C. 
§ 2476. According to officials, a lack of coordination between the shipyards and local NAVFAC personnel also can delay equipment upgrades if the utilities or facilities infrastructure fails to support the equipment (e.g., if equipment requires reinforced flooring or increased electrical supply). Some of the equipment being installed at shipyards can require extensive modifications to the facilities. For example, we observed a foundation being prepared for a new piece of equipment at Norfolk Naval Shipyard, shown in figure 11. We have found that, with a results-oriented management approach, regular management review allows an organization to assess progress by reviewing metrics, ensure that all stakeholders are working together effectively, and respond to implementation challenges. Given the need to coordinate stakeholders such as NAVSEA, CNIC, NAVFAC, Navy Regional Installation Commanders, utility providers, State Historic Preservation Offices, regulatory entities, and Navy leadership, regular management reviews of activities and metrics could help the Navy to measure progress toward attaining its capital investment goals for the shipyards. In addition, the Navy does not regularly provide information to its key DOD decision makers or to Congress on the progress it is making to reduce its facilities restoration and modernization backlog, improve the condition and configuration of the shipyards, recapitalize its capital equipment, or reduce the effect that the condition of facilities and equipment has on maintenance delays. It is also not providing information on the challenges that prevent the shipyards from making such progress. Standards for Internal Control in the Federal Government notes that management should communicate quality information to external parties to help the agency achieve its goals and address risk. 
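The statutory floor discussed above (the “6 Percent Rule,” 10 U.S.C. § 2476) can be illustrated with a short arithmetic sketch. The revenue and investment figures below are hypothetical, and the three-fiscal-year-average formulation reflects our reading of the statute:

```python
# Illustrative sketch of the "6 Percent Rule" (10 U.S.C. § 2476), which sets a
# floor on depot capital investment as a percentage of workload revenue.
# All dollar figures below are hypothetical, chosen only for illustration.

def minimum_capital_investment(revenues_last_3_years, rate=0.06):
    """Return the statutory minimum: 6 percent of the average depot
    workload revenue over the preceding three fiscal years."""
    average_revenue = sum(revenues_last_3_years) / len(revenues_last_3_years)
    return rate * average_revenue

revenues = [6.0e9, 6.4e9, 6.8e9]   # hypothetical annual workload revenue
planned_investment = 0.40e9        # hypothetical planned capital investment

floor = minimum_capital_investment(revenues)
print(f"Minimum required: ${floor / 1e9:.2f}B")   # 6% of the $6.4B average
print("Meets 6 Percent Rule:", planned_investment >= floor)
```

As the report notes, meeting this floor during the annual budget process does not by itself demonstrate that individual investments are effective, which is why metrics tied to shipyard performance matter.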
As a result, key decision makers and Congress lack the information they would need to assess the effectiveness of the Navy’s capital investment program at the shipyards. Regular management review and reporting on progress to decision makers is critical to ensure that all stakeholders are represented and held accountable for results, and that opportunities for adjustment are identified and used. Over the last few months, Navy officials and Congress have both taken steps to help address some of the problems outlined in this report. For example, Navy officials have said that they are beginning a more comprehensive review of the shipyards that will involve, among other things, improving cooperation among various stakeholders, developing capital investment metrics that are tied to shipyard performance, and changing the expectations around capital investment. However, this process is still at a very early stage, and its time frames have not yet been developed. Until the Navy develops an approach that addresses these missing elements, the result will be a continuation of the same processes that have led the shipyards to their current state; these processes have already proved inadequate. To help address the naval shipyards’ capital investment challenges, House Report 115-200 accompanying a bill for the Fiscal Year 2018 National Defense Authorization Act directed the Secretary of the Navy to provide a report to the congressional defense committees, by March 1, 2018, on a comprehensive plan to address shortfalls in the public shipyard enterprise. Specifically, the House Report directs the Navy to, among other things, identify current infrastructure deficiencies at U.S. naval shipyards and prepare a detailed master plan for each shipyard that includes a list of specific infrastructure projects, scope of work, cost estimates, and schedule associated with the current and 30-year force structure projections. 
The Secretary of the Navy is also directed to identify the additional funding and any legislative authority needed to achieve an end state, as quickly as practicable, of the elimination of all ship maintenance backlogs and a return to predictable, sustainable, and affordable ship maintenance availabilities, including for the anticipated growth in Navy force structure. The shipyards are critical to maintaining the Navy’s readiness, but they are struggling to meet the Navy’s current needs with inadequate facilities, aging equipment, poorly configured drydocks, a growing restoration and modernization backlog, and an incomplete management approach for addressing these issues. The Navy recognizes these challenges, but to date the plans it has developed to address them have failed to gain ground against the poor condition of the facilities or the backlog of restoration and modernization needs. Continuing the current approach to capital investment seems unlikely to address the Navy’s struggles with lost operational days and drydock availability. Without the key characteristics of a results-oriented management approach for guiding, measuring, and tracking the progress of its capital investment program, the Navy cannot be certain that its capital investment efforts are providing the facilities and equipment needed to support the nuclear depot repair mission or that it is providing Congress with adequate information on which to base decisions about appropriations. The lack of a results-oriented management approach could lead to ineffective investment, resulting in missed opportunities for improvement that could affect shipyard cost and schedule performance. Further, if the shipyards are unable to maintain their facilities and equipment, they risk not being able to support Navy readiness over the long term. Because the shipyards are essential to maintaining readiness for the fleet of U.S. 
aircraft carriers and submarines and providing emergent repairs on an as-needed basis, ineffective management of capital investment in the shipyards can put Navy readiness at risk. On July 6, 2017, the House Armed Services Committee released report 115-200 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018. The committee’s report directs the Secretary of the Navy to report to congressional defense committees on a comprehensive plan to address shortfalls in the public shipyards. The elements to be included in the plan encompass aspects of a results-oriented management approach—namely, a comprehensive plan to address shortfalls in the public shipyard enterprise, end-state goals for the shipyards, and the funding needed to achieve this end state—that we have identified as missing from the shipyard development plans we analyzed. We believe, however, that the Navy’s implementation of the House direction could be further strengthened. The Navy’s prior planning efforts have not fully established metrics for assessing progress, held regular management reviews with all relevant stakeholders to oversee the plans’ implementation and coordinate efforts, or reported on progress to key decision makers and Congress to inform resource decisions and provide accountability. Without fully incorporating these key elements, the Navy will not be positioned to guide the continued improvement of the condition and ability of the shipyards to meet the operational needs of the Navy. We are making the following three recommendations to the Navy. 
The Secretary of the Navy should do the following:

Develop a comprehensive plan for shipyard capital investment that establishes the desired goal for the shipyards’ condition and capabilities; an estimate of the full costs to implement the plan, addressing all relevant requirements, external risk factors, and associated planning costs; and metrics for assessing progress toward meeting the goal that include measuring the effectiveness of capital investments. (Recommendation 1)

Conduct regular management reviews that include all relevant stakeholders to oversee implementation of the plan, review metrics, assess the progress made toward the goal, and make adjustments, as necessary, to ensure that the goal is attained. (Recommendation 2)

Provide regular reporting to key decision makers and Congress on the progress the shipyards are making to meet the goal of the comprehensive plan, along with any challenges that hinder that progress, such as cost. This may include reporting on progress to reduce their facilities restoration and modernization backlogs, improve the condition and configuration of the shipyards, and recapitalize capital equipment. (Recommendation 3)

We provided a draft of this report to DOD for review and comment. In written comments on behalf of DOD provided by the Navy (reproduced in appendix III), DOD concurred with our recommendations and noted planned actions to address each recommendation. The Navy also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at [email protected] or (202) 512-5257. GAO staff who made key contributions to this report are listed in appendix IV. 
Prior to 1998, the Navy used two different funding mechanisms to fund its maintenance activities, depending on the level of maintenance a ship was receiving. The shipyards, which provided depot-level maintenance, were managed through a working capital fund. The working capital fund relied on payments from Navy forces—such as a ship—for services at a shipyard. This funding approach was intended to (1) generate sufficient resources to cover the full costs of the shipyards’ operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. The Department of Defense (DOD) directed Navy funds to the Navy forces seeking the repairs, and those forces acted as customers and paid for the maintenance they received from the shipyards (see figure 12). The Navy’s intermediate maintenance facilities, which provided intermediate-level maintenance, were funded via direct funding. Under the direct funding mechanism, the Navy allotted a portion of the money appropriated to it by Congress directly to these facilities (see figure 12). In 1997, the Navy began to integrate its intermediate maintenance facilities with its shipyards in an attempt to improve workforce flexibility and reduce maintenance infrastructure. To achieve this, the Navy decided to standardize the funding mechanism. The Navy moved the shipyards to a direct funding mechanism, in part because the shipyards’ largest customer—the Pacific Fleet—was already funded in that manner and Navy officials believed that adopting a direct funding approach for shipyard maintenance would be simpler than changing the Fleet’s funding mechanism. Pearl Harbor Naval Shipyard shifted from working capital funding to direct funding on October 1, 1998, as a pilot program. The Navy conducted this pilot for two years and concluded that shipyard metrics generally either improved or stayed the same over that time. As a result, Puget Sound Naval Shipyard followed and changed funding mechanisms on October 1, 2003. 
Congress briefly paused the funding transition process for the East Coast shipyards in order to have the Navy provide a report on the effectiveness of the transition at Puget Sound. After the Navy submitted its report, the transition continued, and Portsmouth Naval Shipyard and Norfolk Naval Shipyard transitioned to the direct funding mechanism on October 1, 2006, the earliest transition date that Congress had allowed. In previous reports, both we and the Congressional Budget Office noted that there were potential advantages and disadvantages to both funding mechanisms. However, we did not suggest that the Navy should prefer one method over the other; rather, we recommended that the Navy take steps to ensure financial transparency at the shipyards after the transition to direct funding. Since the two remaining shipyards transitioned to a direct funding mechanism at the beginning of fiscal year 2007, capital investment at the shipyards has been higher than the 6 percent minimum mandated by Congress and has increased at about the same pace as overall shipyard funding. Shipyard officials have not identified any persistent problems as a result of the funding transition and were generally unable to identify any potential benefits of returning to a working capital fund mechanism. Between fiscal year 2007—the first year that all shipyards were completely supported by direct funding—and fiscal year 2017, total shipyard spending has increased by about 34 percent, in fiscal year 2016 constant dollars (see figure 13). Over that same time period, capital investment at the shipyards, in fiscal year 2016 constant dollars, has increased at about the same rate—35 percent—and has remained over the 6 percent minimum mandated by Congress (see figure 14). Navy officials at all four shipyards stated that they had not identified any significant concerns about the change from working capital to direct funding. 
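The constant-dollar comparisons above can be sketched with simple deflator arithmetic. The outlay amounts and price deflators below are purely illustrative, not the Navy’s actual figures:

```python
# Sketch of a constant-dollar comparison of the kind behind the growth
# figures above: nominal outlays are deflated to fiscal year 2016 dollars
# before computing percent change. All values below are hypothetical.

def to_constant_dollars(nominal, deflator, base_deflator):
    """Convert a nominal amount to base-year dollars using a price deflator."""
    return nominal * base_deflator / deflator

# Hypothetical totals (billions of nominal dollars) and price deflators
spend_fy2007, deflator_fy2007 = 5.2, 0.85
spend_fy2017, deflator_fy2017 = 7.8, 1.02
base_deflator_fy2016 = 1.00

real_2007 = to_constant_dollars(spend_fy2007, deflator_fy2007, base_deflator_fy2016)
real_2017 = to_constant_dollars(spend_fy2017, deflator_fy2017, base_deflator_fy2016)
growth = (real_2017 - real_2007) / real_2007
print(f"Real growth, FY2007-FY2017: {growth:.0%}")
```

With these illustrative inputs the real growth works out to 25 percent; the same calculation applied to actual Navy outlays and deflators yields the roughly 34 and 35 percent figures the report cites.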
For example, although the CBO report suggested that financial accountability might decrease under a direct funding mechanism, officials stated that working within the annual allotment of appropriated funds—as opposed to a constantly replenishing working capital fund—has forced shipyard officials to pay greater attention to costs than they had previously. In addition, officials at all four shipyards stated that after the initial transition was complete, the change in funding mechanisms had little effect on either the quality or the cost of the work being performed. Officials also noted that, under each funding mechanism, there are reasons that capital investment may remain relatively low. Shipyard capital investment (not including military construction) under the working capital mechanism was previously re-captured by the shipyards in the form of increased labor costs. They stated that, as a result, there was an incentive to keep capital investment low, so that the fleet would not defer maintenance and repair work in response to higher daily rates for labor. Under the direct funding mechanism, shipyard capital investment must compete with other projects Navy wide, which may also result in restraining investment. There are four Navy-operated shipyards: Norfolk Naval Shipyard, Portsmouth Naval Shipyard, Puget Sound Naval Shipyard and Intermediate Maintenance Facility, and Pearl Harbor Naval Shipyard and Intermediate Maintenance Facility. This appendix provides detailed information about each of these shipyards’ missions, issues, maintenance timeliness, facilities condition, capital investment, and facilities restoration and modernization backlog. In addition to the individual named above, key contributors to this report were Suzanne Wren (Assistant Director); Pat Donahue; Steve Donahue; Jaci Evans; James Lackey; Joanne Landesman; Amie Lesser; Felicia Lopez; Marc Molino; Leah Nash; Carol Petersen; Cody Raysinger; and John E. “Jet” Trubey. 
Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers, GAO/GGD/AIMD-99-69 (Washington, D.C.: Feb. 26, 1999). Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making, GAO-05-927 (Washington, D.C.: Sept. 9, 2005). Depot Maintenance: Improvements Needed to Achieve Benefits from Consolidations and Funding Changes at Naval Shipyards, GAO-06-989 (Washington, D.C.: Sept. 14, 2006). Results-Oriented Management: Strengthening Key Practices at FEMA and Interior Could Promote Greater Use of Performance Information, GAO-09-676 (Washington, D.C.: Aug. 17, 2009). Defense Infrastructure: Actions Needed to Improve the Navy’s Processes for Managing Public Shipyards’ Restoration and Modernization Needs, GAO-11-7 (Washington, D.C.: Nov. 16, 2010). DOD’s 2010 Comprehensive Inventory Management Improvement Plan Addressed Statutory Requirements, But Faces Implementation Challenges, GAO-11-240R (Washington, D.C.: Jan. 7, 2011). Managing for Results: Data-Driven Performance Reviews Show Promise But Agencies Should Explore How to Involve Other Relevant Agencies, GAO-13-228 (Washington, D.C.: Feb. 27, 2013). Standards for Internal Control in the Federal Government, GAO-14-704G (Washington, D.C.: Sept. 2014). Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan, GAO-16-466R (Washington, D.C.: May 2, 2016). Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan, GAO-16-841 (Washington, D.C.: Sept. 7, 2016). Department of Defense: Actions to Address Five Key Mission Challenges, GAO-17-369 (Washington, D.C.: June 15, 2017).
The Navy's four public shipyards—Norfolk Naval Shipyard, Portsmouth Naval Shipyard, Puget Sound Naval Shipyard and Intermediate Maintenance Facility, and Pearl Harbor Naval Shipyard and Intermediate Maintenance Facility—are critical to maintaining fleet readiness and supporting ongoing operations involving the Navy's nuclear-powered aircraft carriers and submarines. The condition of these facilities affects the readiness of the aircraft carrier and submarine fleets. Senate Report 114-255, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to examine the capital investment in and performance of the Navy's shipyards. GAO evaluated (1) the state of the naval shipyards' capital facilities and equipment, (2) the extent to which shipyard capital facilities and equipment support the Navy's operational needs, and (3) the extent to which the Navy's capital investment plans for facilities and equipment are addressing shipyard challenges. GAO reviewed data from fiscal years 2000 through 2016 on shipyard capital investment and performance and the age and condition of facilities and equipment; reviewed Navy guidance; visited the shipyards; and interviewed Navy and shipyard officials. Although the Navy committed to increased capital investment and developed an improvement plan in 2013, the shipyards' facilities and equipment remain in poor condition. GAO's analysis of Navy shipyard facilities data found that their overall physical condition remains poor. Navy data show that the cost of backlogged restoration and maintenance projects at the shipyards has grown by 41 percent over five years, to a Navy-estimated $4.86 billion, and will take at least 19 years (through fiscal year 2036) to clear. Similarly, a Navy analysis shows that the average age of shipyard capital equipment now exceeds its expected useful life. 
Partly as a result of their poor condition, the shipyards have not been fully meeting the Navy's operational needs. In fiscal years 2000 through 2016, inadequate facilities and equipment led to maintenance delays that contributed in part to more than 1,300 lost operational days—days when ships were unavailable for operations—for aircraft carriers and 12,500 lost operational days for submarines (see figure). The Navy estimates that it will be unable to conduct 73 of 218 maintenance periods over the next 23 fiscal years due to insufficient capacity and other deficiencies. Though the Navy has developed detailed plans for capital investment in facilities and equipment at the shipyards that attempt to prioritize their investment strategies, this approach does not fully address the shipyards' challenges, in part because the plans are missing key elements. Missing elements include analytically based goals and metrics, a full identification of the shipyards' resource needs, regular management reviews of progress, and reporting on progress to key decision makers and Congress. For example, the Navy estimates that it will need at least $9.0 billion in capital investment over the next 12 fiscal years, but this estimate does not account for all expected costs, such as those for planning and modernizing the shipyards' utility infrastructure. Unless it adopts a comprehensive, results-oriented approach to addressing its capital investment needs, the Navy risks continued deterioration of its shipyards, hindering its ability to efficiently and effectively support Navy readiness over the long term. GAO recommends that the Navy develop a comprehensive plan to guide shipyard capital investment, conduct regular management reviews, and report to Congress on progress in addressing the shipyards' needs. DOD concurred with all three recommendations.
A number of DOD organizations have issued orders outlining a phased drawdown from Iraq that meet the time frames set forth in the Security Agreement and presidential guidance, while being responsive to security conditions on the ground. Additionally, much has been accomplished to prepare for the retrograde of materiel from theater, including establishing processes to monitor, coordinate, and facilitate the flow of equipment out of Iraq. Furthermore, several organizations have been created to facilitate the retrograde of equipment and support unity of effort. To date, these efforts have contributed to MNF-I meeting or exceeding its targets for drawing down forces, retrograding equipment, and closing bases. While DOD has made significant progress executing the drawdown, large numbers of personnel, pieces of equipment, and bases must still be drawn down within the established timelines. Headquarters, Department of the Army, MNF-I, and its subordinate command responsible for executing the drawdown in Iraq—Multi-National Corps-Iraq (MNC-I)—have issued plans outlining how the drawdown should be managed over time. These plans also endeavor to provide flexibility to commanders on the ground to conduct ongoing combat operations while simultaneously executing the drawdown. For example, in order to balance operational needs with the requirement to meet drawdown goals, commanders have the discretion to determine which of their equipment is no longer essential for ongoing operations and can therefore be retrograded. Subsequent phases will see an increase in the flow of equipment retrograded from Iraq as the pace of the drawdown quickens. In support of these plans, processes have been established to monitor, coordinate, and facilitate the retrograde of equipment out of Iraq. As we reported in September 2008, MNF-I had processes in place to manage the retrograde of various types of equipment from Iraq. 
Since that time these processes have been refined and new elements have been established to improve them. For example, partly in response to our previous work, representatives from the Office of the Secretary of Defense’s Lean Six Sigma office conducted six reviews to optimize theater logistics, one of which focused on the process for retrograding equipment from Iraq. This work informed the development of a new data system, referred to as the Theater Provided Equipment Planner, which is intended to streamline the retrograde process by facilitating the issuance of disposition instructions for theater provided equipment while it is still in Iraq. In addition, a second new data system, Materiel Enterprise Non-Standard Equipment, has also been developed to facilitate the issuance of disposition instructions for non-standard equipment. In addition to refining the retrograde processes, several organizations have been created to oversee, synchronize, and ensure unity of effort for the retrograde of equipment from Iraq. In September 2008, GAO reported that the variety of organizations exercising influence over the retrograde process and the resulting lack of a unified or coordinated command structure was not consistent with joint doctrine, led to increased confusion and inefficiencies in the retrograde process, and inhibited the adoption of identified mitigation initiatives. To bolster unity of effort, MNF-I has created a Drawdown Fusion Center, the mission of which is to provide a strategic picture of drawdown operations, identify potential obstacles, address strategic issues, and assist in the development of policy and guidance related to several aspects of drawdown. 
To accomplish this mission, the Drawdown Fusion Center provides guidance on the disposition of materiel, monitors and advises on transportation options, tracks and monitors the capabilities of ports through which materiel is shipped, tracks logistics actions that impact disposition during drawdown, and acts as a focal point for all external agencies and the Government of Iraq in matters related to the drawdown. Assisting the Drawdown Fusion Center is U.S. Army Central’s Support Element-Iraq, a liaison element established to enhance synchronization and coordination among MNF-I; MNC-I; U.S. Army Central; Headquarters, Department of the Army; and Army Materiel Command. It also generates theater and Department of the Army disposition guidance for all forces and materiel redeploying and retrograding out of Iraq. Finally, the Department of the Army, with Army Materiel Command as the lead agency, created a Responsible Reset Task Force to facilitate the provision of disposition instructions for materiel retrograding out of Iraq and synchronize those instructions to facilitate the reset of Army equipment. DOD organizations reported that their efforts to reduce personnel, retrograde equipment, and close bases in the initial months of the drawdown have exceeded targets. First, according to the MNF-I commanding general, U.S. forces have already begun drawing down in Iraq without compromising security. For example, since May 2009, the number of U.S. servicemembers in Iraq has been reduced by 5,300. Furthermore, the MNF-I commander testified on September 30, 2009, that another 4,000 servicemembers will likely be drawn down in October 2009—earlier than originally planned—due to improvements in Anbar province. Second, as of August 2009, the Army reported that it had exceeded its target figure for the retrograde of rolling stock by 1,800 pieces. Finally, the Army reported that, as of August 2009, it had closed three more bases than originally planned. 
While DOD’s progress since May 2009 has exceeded its targets, large numbers of personnel, pieces of equipment, and bases remain to be drawn down within the established timelines. To meet the presidential target of reducing the number of U.S. forces in Iraq to 50,000 by August 31, 2010, MNF-I must reduce its forces by almost 60 percent by next summer. Furthermore, to meet the other targets established by MNF-I and the Army for August 2010, MNF-I must draw down 32 percent of its contractor personnel workforce, retrograde over 50 percent of its tracked and wheeled vehicles, and close 67 percent of its bases in Iraq. The remaining forces, contractor personnel, and equipment will have to be drawn down during the final 16 months, from September 2010 to December 31, 2011, during which time some of the largest bases in Iraq will also need to be closed or transferred to the Government of Iraq, a task the commanding general of MNF-I stated could take 9 to 10 months to complete. Figure 2 below illustrates the numbers of U.S. forces, contractor personnel, tracked and wheeled vehicles, and bases that have been drawn down since the initiation of the drawdown; that must be drawn down by the August 31, 2010, change of mission date; and that must be drawn down before December 31, 2011. Efficient execution of the drawdown from Iraq may be complicated by several unresolved issues that, if left unattended, may hinder MNF-I’s ability to meet the time frames set by the President, the Security Agreement, and MNF-I’s phased drawdown plan. These challenges include contract services that have not been fully identified; potential costs and other concerns of transitioning key contracts that may outweigh potential benefits; longstanding shortages of contract oversight personnel; some key decisions about the disposition of equipment that have not yet been made; longstanding information technology system weaknesses; and a lack of precise visibility over some equipment. 
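The percentage reductions cited above follow from simple arithmetic; the short sketch below checks the force-level figure. The starting strength used here is a hypothetical value chosen for illustration, while the 50,000 target is from the presidential guidance described above:

```python
# Quick check of the reduction percentage implied by the drawdown targets.
# The starting force level is hypothetical; the 50,000 figure is the
# August 31, 2010, target from presidential guidance.

def reduction_needed(current, target):
    """Fraction of the current level that must be drawn down to hit the target."""
    return (current - target) / current

current_troops = 124_000   # hypothetical current U.S. force level in Iraq
target_troops = 50_000     # August 31, 2010, target

fraction = reduction_needed(current_troops, target_troops)
print(f"Force reduction required: {fraction:.0%}")
```

A starting strength in this range implies a reduction of just under 60 percent, consistent with the "almost 60 percent" cited in the report; the same function applies to the contractor, vehicle, and base targets.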
Some of these issues are outside MNF-I’s purview and require action by the Office of the Secretary of Defense and the Military Departments. Others require a coordinated effort by MNF-I, U.S. Army Central, and other DOD organizations supporting the drawdown effort. DOD has not fully defined the additional contracted services it will need to successfully execute the drawdown and support the remaining U.S. forces in Iraq. Experience has shown that requirements for contracted services will likely increase during the drawdown and joint guidance states that planners should work closely with contracting officers to determine the best approach for purchasing contract services. In Iraq, such efforts may be hampered because contracting officials in Iraq do not have full visibility over the approximately 52,000 contracts in theater. Officials at Joint Contracting Command-Iraq/Afghanistan, the organization responsible for coordinating contract support during the drawdown, are currently trying to get the full picture of operational contract support in Iraq. However, DOD lacks a centralized repository of the specific services available on the various contracts. For example, there are several contracts for trucking services currently being used to transport materiel in support of the drawdown, but planners may lack the details necessary to allocate these services efficiently as drawdown progresses. Joint guidance also calls for DOD to identify contracted support requirements as early as possible to ensure that the military receives contracted support at the right place, at the right time, and for the right price. In particular, for the drawdown of forces to occur according to the timelines, commanders will need to determine their contract support requirements and communicate these to contracting officers several months in advance. 
Although the MNF-I drawdown order anticipates an increase in its need for contracted services through September 1, 2010, as of July 2009 commanders had not identified the specific types and levels of contracted services they will need during the drawdown. For example, Army officials in Kuwait responsible for the retrograde of theater provided equipment had not defined the specific level of contracted services needed to perform functions such as repairing vehicles and requesting disposition instructions. In planning for the contractor presence needed during the final phase of the drawdown, MNF-I has made assumptions that, in the absence of defined requirements or full visibility over contracted services, may contribute to wasted resources and hinder the timely execution of the drawdown. Even though it anticipates an increase in contracted services needed during the drawdown, MNF-I has set a target for reducing the number of contractor personnel in Iraq to 75,000 by September 1, 2010. According to MNF-I officials, this target was based on the historic ratio of contractor personnel to servicemembers in Iraq, rather than on requirements for contracted support. However, as GAO has previously reported, the drawdown of forces may create additional requirements for contracted support, and officials in Iraq have acknowledged that additional contractor personnel will be needed to provide services currently being provided by U.S. forces. For example, according to DOD, in the third quarter of fiscal year 2009 the number of armed private security contractors in Iraq went from 10,743 to 13,232, a 23 percent increase. This increase was due, in part, to greater demand for private security contractors as the military began drawing down its forces. Without identifying the level and types of contractor support needed to facilitate the drawdown, the actual number of contractor personnel needed remains unknown. 
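The 23 percent figure above can be verified directly from the contractor counts DOD reported:

```python
# Verifying the reported increase in armed private security contractors in
# Iraq during the third quarter of fiscal year 2009, using the DOD-reported
# counts cited in the text.

before, after = 10_743, 13_232
increase = (after - before) / before
print(f"Increase: {increase:.0%}")   # about 23 percent, matching the report
```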
Unless commands in theater define and communicate contract requirements with sufficient lead time, DOD risks not having the right contracted services in place to meet drawdown timelines and may resort to contracting methods that could cost the government more and that may be conducive to waste. Moreover, in determining the best means to meet commanders’ requirements, planners’ limited visibility over the range of contracted services available may contribute to decisions based on incomplete information, the purchase of services that are already on contract, difficulty in enforcing priorities, and the inefficient use of limited contracting resources. These outcomes may impact the timely execution of the drawdown. In 2006, we reported that a lack of visibility over contracted support negatively impacted MNF-I and MNC-I planning for base closure, among other things. The transition of key contracts that are scheduled to expire during the height of the drawdown presents the potential for the interruption of vital services. With the exception of LOGCAP, major contracted services in Iraq and Kuwait, including those for base and life support, convoy support, and equipment maintenance, will soon reach their expiration dates and are scheduled to be re-competed and re-awarded. If contracts are awarded to new contractors, outgoing and incoming contractors would be required to transition within a certain time period to continue vital services. If these contracts are re-awarded as scheduled, major contracted services in Iraq and Kuwait will be transitioning nearly simultaneously during the height of the drawdown, increasing the risk that services will be interrupted. According to a DOD lessons learned document, during the transition from LOGCAP III to LOGCAP IV in Kuwait, which concluded in June 2009, the incoming contractor intended to hire at least 80 percent of the outgoing contractor’s personnel to begin providing services according to schedule. 
However, the outgoing contractor needed to retain its employees in order to continue to provide the services for which it was contracted. Although the incoming and outgoing contractors agreed to a protocol for transferring employees, poor execution at some sites led to staffing shortages and some service interruptions. To prevent similar service interruptions when other key contracts transition, it will be critical that DOD ensure that the outgoing contractor releases personnel to the incoming contractor as anticipated. Furthermore, if contractor personnel choose not to transfer to the new contractor, the transition may result in greater-than-anticipated costs and delays as the incoming contractor hires, screens, and deploys new personnel. A lack of experienced personnel may also lead to service interruption. For example, according to the lessons learned document, a shortage of personnel available to operate large machinery in Kuwait forced officials to shut down operations critical to the drawdown. In addition, offices responsible for issuing credentials to employees were not prepared to handle the large volume of employees needing to obtain new badges, a situation exacerbated by the provision of inaccurate employee lists by the incoming contractor, resulting in a further disruption of services. As of July 2009, officials had not considered possible stresses on these offices that might occur during the upcoming, near-simultaneous contract transitions expected during the drawdown. Finally, the outgoing contractor refused to provide, and in one case erased, data it was required to provide to the government. Government officials confirmed that these data would have facilitated a more efficient transition process. 
Contract management officials stated that challenges experienced during the transition from LOGCAP III to LOGCAP IV in Kuwait will likely be magnified during the upcoming contract transitions in Iraq, given the scope of contract transitions during the height of the drawdown. Even though LOGCAP III in Iraq does not expire during the drawdown time frame, DOD plans to undertake a complex transition to other means of contracted services despite concerns that the potential benefits of doing so may not be fully realized. According to DOD officials, MNF-I plans to transition base and life support and logistics functions currently provided by LOGCAP III to other contracts, including LOGCAP IV, the Air Force Contract Augmentation Program (AFCAP), and individual sustainment contracts with Iraqi contractors. However, unlike convoy support and maintenance contracts in Iraq and Kuwait, LOGCAP III does not expire until January 2012. A senior DOD official has stated that the rationale for making the transitions includes reducing the cost of base and life support services and mitigating the risks associated with relying on a single contractor to provide essential services. However, this official and others have raised concerns, indicating that these potential benefits may not be fully realized. For example, while cost savings may result from transitioning from LOGCAP III to other contracts, the senior DOD official with whom we spoke has conceded that costs may actually increase during the transition when both the incoming and outgoing contractors have duplicative personnel, including large transition teams. These costs may offset potential savings, in part because the new contracts would have, at most, about a year to realize their potential benefits, given the time needed to conduct the transition and the date that the Security Agreement states U.S. forces must be out of Iraq. 
Moreover, according to Army officials, there has been no formal cost-benefit analysis to weigh potential benefits against risks such as cost increases. In the absence of a robust cost-benefit analysis, the benefits of making the transition remain uncertain. The upcoming LOGCAP transition in Iraq will potentially increase the contract management and oversight responsibilities of the combat forces and impact the quality of service provided to the warfighter. Unit commanders, as customers of LOGCAP, play a significant role in the management and oversight of the LOGCAP contractor. For example, customers are required by the Army to periodically evaluate the contractor’s performance. Currently, units provide feedback to the contractor during monthly performance evaluation boards. Because the Army intends to award several task orders for base and logistics services, possibly to multiple contractors, it is possible that the number of monthly evaluations would increase for some commanders. Furthermore, while service disruptions like those experienced in Kuwait during the transition to LOGCAP IV between February and June of 2009 may have amounted to temporary inconveniences, in a continuously evolving environment like Iraq they have a greater potential to negatively impact ongoing operations. For example, according to a senior Defense Contract Management Agency official responsible for contract management and oversight in Iraq, there is concern about DOD’s plan to begin transitioning the theater transportation mission at the beginning of 2010, since it could require a new contractor to assume the mission just as the department undertakes a significant troop-level reduction planned for March-April 2010. Executing the rapid movement of troops and equipment out of Iraq will require significant truck assets. Transitioning the mission to a new contractor and requiring the new contractor to provide 23,000 trucks and crews could be daunting. 
Additionally, this official expressed concerns about the ability of a new LOGCAP IV contractor to quickly obtain the necessary staff to execute the mission if the transitions from LOGCAP III are done as currently planned. As we noted above, if an incoming contractor needs to hire a significant number of new personnel, service interruptions could result. For commanders in the field already tasked with conducting complex counterinsurgency operations and the drawdown of forces, among other responsibilities, it is important to know who is responsible for providing particular services. However, increasing the number of contracts in Iraq, as is planned to occur during the upcoming transition, may complicate commanders’ abilities to obtain essential contracted support. For example, under the current LOGCAP III contract in Iraq, commanders generally need to speak with one program manager to obtain the full range of contracted services. Under LOGCAP IV, however, services may be divided among multiple contractors for any particular location. As a result, the tasks of determining how to obtain essential services and correcting service problems may divert commanders’ limited resources from other responsibilities, which potentially increases risk to the mission. In addition, complex transitions to local contractors may impact the quality of services provided to the warfighter. For example, commanders in Iraq noted that some base and life support services being provided to U.S. forces through a newly transitioned contract managed by local sustainment contractors were not meeting the level of quality that U.S. forces had come to expect. We also found that a similar strategy in Kuwait resulted in service interruptions, including inefficiencies at key storage areas that led to expanses of disorderly materiel such as tires and cylinders. 
Should the upcoming LOGCAP transition in Iraq proceed as planned, the need for commanders to overcome challenges on which we have previously reported, such as inexperience in dealing with contractors, uncertainty regarding oversight responsibilities, and inability to dedicate resources for oversight, would be particularly acute. Limited oversight resources, coupled with a projected significant increase in oversight demands during the LOGCAP transition in Iraq, heighten the risk of waste. The successful transition from LOGCAP III to multiple base and life support contractors will require a large number of government oversight personnel, as the transition from LOGCAP III to LOGCAP IV in Kuwait demonstrated. However, overseeing the LOGCAP transition in Iraq would be an added responsibility for the Defense Contract Management Agency, which will continue to be responsible for the day-to-day management and administration of the LOGCAP III contractor, private security contracts, and other large contracts in Iraq. A Defense Contract Management Agency official expressed concern about conducting LOGCAP transitions at multiple locations simultaneously throughout Iraq because this would require a greater number of oversight personnel than a consecutive transition. For example, Defense Contract Management Agency officials cited insufficient numbers of property administrators available to transfer billions of dollars worth of property from LOGCAP III to one of several dozen possible contracts. These personnel shortages may delay the transfer of property, such as materiel handling equipment critical for loading, unloading, and moving containers, which, in turn, may inhibit the timely retrograde of equipment from Iraq. Contract oversight requirements would further increase following the transition. Specifically, the Defense Contract Management Agency may go from overseeing one LOGCAP contractor to overseeing three LOGCAP contractors and the AFCAP contractor. 
In addition, the contracts for specific base services that the Joint Contracting Command-Iraq/Afghanistan plans to award to Iraqi contractors could increase the workload for contracting officers from this command. Furthermore, as the number of contracts increases at an installation, commanders will be required to increase the number of personnel needed to ensure responsible oversight of contractor personnel. As a result, the number of personnel available for other operations will decrease. DOD’s longstanding challenge to provide an adequate number of trained oversight personnel in deployed locations will continue to plague the department as it proceeds through the drawdown. Since 2004 we have reported on DOD’s inability to provide an adequate number of oversight personnel in CENTCOM’s theater. Joint doctrine emphasizes the importance to commanders of ensuring that appropriate administration and oversight personnel are in place when using contractors. While MNF-I guidance recognizes the need to ensure oversight, DOD is likely to find it difficult to meet the oversight requirement as forces are withdrawn and the pool of personnel available for oversight decreases. Historically, as forces decrease, the need for contracted services increases. The oversight challenge in Iraq and Kuwait is exacerbated by the competing need to provide professional contract management and oversight personnel from agencies like the Defense Contract Management Agency to meet the increased oversight requirements in Afghanistan. DOD officials at all levels have expressed concern about the department’s ability to provide the required number of oversight personnel. For example, an Army unit in Kuwait with 32 government personnel that is currently providing oversight for more than 3,000 contractor personnel anticipates doubling its contractor workforce, but is not anticipating a concomitant increase in oversight personnel. 
The unit has identified the lack of oversight personnel as a significant concern for successfully moving equipment out of Kuwait. As we noted in several of our previous reports, having the right people with the right skills to oversee contractor performance is crucial to ensuring that DOD receives the best value for the billions of dollars spent each year on contractor-provided services supporting forces deployed to Iraq. For example, we reported in 2004 that the Defense Contract Management Agency could not account for $2 million worth of tools purchased using the AFCAP contract, in part because of a lack of contract management and oversight personnel in CENTCOM’s theater. In January 2008, we reported that the Army did not have adequate staff to conduct oversight of an equipment maintenance contract in Kuwait. We have found in the past that, as a result of the vacant oversight positions, the Army was unable to fully meet the oversight mission, including fully monitoring contractor performance. In that same report we noted that poor contractor performance resulted in the Army spending $4.2 million to rework items that were presented to the Army as meeting contract standards but failed Army inspection. We have also noted that an inadequate number of oversight personnel results in some contracts receiving insufficient oversight. For example, in 2008 we reported that the Army assigned seven contracting officer’s technical representatives to provide oversight for about 8,300 linguists in 120 locations across Iraq and Afghanistan. In one case, a single oversight person was responsible for linguists stationed at more than 40 different locations spread throughout the theater of operations. Officials responsible for the contract agreed that there were not enough contracting officer’s technical representatives to effectively oversee the contract. 
Having too few contract oversight personnel precludes DOD from being able to obtain reasonable assurance that contractors are meeting their contract requirements at every location where the work is being performed. Without adequate contract oversight personnel in Iraq and Kuwait during the drawdown, DOD risks not receiving the level and quality of service it needs to effectively and efficiently meet the goals of the drawdown. MNF-I’s execution of the drawdown from Iraq in accordance with established timelines depends on obtaining clear guidance as to what equipment can and will be provided to the Government of Iraq and what will be retained by the U.S. military; identifying the mechanisms that are to be used to transfer equipment to the Government of Iraq; determining what will be done with certain types of non-standard equipment, such as Mine Resistant Ambush Protected vehicles (MRAP); and making other decisions related to the Army’s modernization and reset plans. DOD plans to transfer military equipment to the Government of Iraq in order to achieve U.S. objectives in Iraq, but decisions still need to be made by DOD on what can and will be transferred to the Government of Iraq, contributing to planning uncertainty. Multi-National Security Transition Command-Iraq, an MNF-I subordinate command responsible for training and equipping the Iraqi security forces, has prepared a list of equipment it believes will enable the Government of Iraq to provide for its own security after U.S. forces have left Iraq. This list comprises about 1.5 percent of the estimated 3.3 million pieces of equipment in Iraq, with a projected value of about $600 million. This list is currently undergoing progressively higher levels of review within DOD, for potential approval by the Military Department Secretaries and the Secretary of Defense. 
Until this list is approved, and an appropriate transfer mechanism determined, the equipment that will be transferred to the Government of Iraq remains uncertain. Currently, no decision has been made as to what authorities will be used to transfer these items to the Government of Iraq. While certain authorities exist that may permit the transfer of excess defense articles, DOD has also requested additional authority to transfer non-excess defense articles. Section 1234 of the National Defense Authorization Act for Fiscal Year 2010 provides an additional authority, requested by the Department of Defense, under which the Secretary of Defense, with the concurrence of the Secretary of State, may transfer certain equipment to the Government of Iraq without the Military Departments declaring it excess to their needs. Because this provision does not specify a mechanism for reimbursing the Military Departments for the transfer of non-excess equipment, the loss of which may affect unit readiness, senior Army officials expressed concern about it prior to its passage, and the conference report accompanying the Act urged the Secretary of Defense to develop a plan to reimburse the Military Departments for such items. In addition, other DOD officials expressed strong reservations about section 1234 prior to its passage, arguing that existing authorities, such as those which underpin Foreign Military Sales, are sufficient to transfer U.S. military equipment to the Government of Iraq, but are not fully understood within the department. Clarification of authorities to be used for transferring equipment to the Government of Iraq will help facilitate decisions on which equipment will be transferred, and will assist in ensuring that DOD will meet its stated timelines. The complexity of issues surrounding transfer authorities has already presented obstacles to transferring equipment to the Government of Iraq. 
For example, beginning in May 2009, MNC-I undertook an initiative to turn over the Ibn Sina hospital, located in the International Zone, to the Government of Iraq as a fully equipped, fully operational hospital. However, 100 of the approximately 9,800 pieces of equipment in the hospital, such as intensive care unit beds, trauma centers, and patient vital signs monitoring equipment, were ineligible for transfer because, according to Army officials, the Army could not declare them excess to its needs. As a result, officials had to seek alternate means to transfer or sell the remaining pieces of equipment necessary to outfit the hospital. Ultimately, the hospital was transferred to the Government of Iraq on schedule. However, Army officials stated that after exhausting all legal options for transferring or donating the remaining equipment, the hospital was transferred without these 100 pieces of important equipment. According to the Army, disposition for nearly all currently identified non-standard equipment in Iraq has been determined, but not all items needing disposition have yet been identified. Non-standard equipment is mainly theater provided equipment that has been issued to units but is not listed on their modified table of organization and equipment. Non-standard equipment includes a wide range of items such as construction equipment, materiel handling equipment, flat screen televisions, certain types of radios, and MRAPs. To facilitate the retrograde of non-standard equipment, the Army is implementing a new process in which the Life Cycle Management Commands are cataloguing all types of non-standard equipment in Iraq for entry into a new database. The Army then determines the location to which each type of item will be shipped upon retrograde from Iraq. Army officials state that they have determined disposition for the majority of the types of non-standard equipment already identified in Iraq. 
However, these officials also state that additional types of non-standard equipment are still being entered into the database as efforts to gain accountability over non-standard equipment continue. Until this effort is complete, the disposition of some types of non-standard equipment in Iraq may be delayed. Decisions on the disposition of MRAPs also have not been finalized, and DOD faces challenges in retrograding the large number of these vehicles that remain in Iraq. MRAPs are a unique type of non-standard equipment that were initially procured specifically for use in Iraq to better protect servicemembers from improvised explosive devices. As the drawdown progresses, DOD officials acknowledge that most of the MRAPs retrograded from Iraq will return to the United States, and that only some of these vehicles are suitable for use in Afghanistan. According to Army officials, the Army, which manages most of the MRAP fleet, has issued preliminary disposition instructions for MRAPs to be retrograded from Iraq, but service-wide requirements for MRAPs have not yet been finalized. Moreover, although in January 2008, DOD designated the Red River Army Depot and Marine Corps Logistics Command bases in Albany and Barstow as the depots that would repair MRAPs in the United States, Headquarters, Department of the Army only recently issued a message directing the shipment of 200 MRAPs from Kuwait to Red River Army Depot as part of an MRAP Reset Repair Pilot Program. To date, all MRAPs retrograded from Iraq have passed through the MRAP Sustainment Facility in Kuwait for repair. However, at the time of our July 2009 visit to the CENTCOM area of operations, this facility could process only 20 MRAPs per week, contributing to a build-up of nearly 900 MRAPs in a retrograde lot in Kuwait. The officials who manage this lot stated that it was nearing full capacity for holding MRAPs. However, data provided by U.S. 
Army Central indicate that DOD’s capacity to process and ship MRAPs out of Kuwait exceeded the relatively few numbers of additional vehicles that left Iraq since our visit, decreasing the total number of MRAPs that are sitting in the retrograde lot to under 800 as of October 2009. Nevertheless, according to U.S. Army Central, over 8,000 MRAPs remain in Iraq. To remove MRAPs from Iraq according to the timeline set by the Security Agreement, the pace of their retrograde will need to significantly increase as the drawdown progresses, which heightens the potential for bottlenecks. The disposition of equipment in theater may also be affected by other decisions that have not been made related to the Army’s future composition and equipment reset needs. For example, the Army has not decided what equipment and how much of each type of equipment will be transferred to Army Prepositioned Stocks and Theater Sustainment Stocks. Also, the Army is currently drafting an “Equipping White Paper” that describes how the Army plans to allocate equipment in accordance with future force structure designs. For example, Army officials stated that they are considering changing one or more heavy brigade combat teams into Stryker brigade combat teams. Other factors also add uncertainty to the disposition of equipment. For example, while the Army has taken steps to streamline the reset induction process for equipment in Iraq, disposition for reset depends on when the equipment is retrograded from Iraq and the condition of the equipment. In addition, the extent to which equipment may be stored in Kuwait is unclear. Specifically, some officials from the Office of the Secretary of Defense told us that some equipment may be stored at depots or in Kuwait while decisions about disposition are made, while Army officials told us that the Army has no plans to store equipment in Kuwait. Finally, decisions have not been finalized on what additional equipment will be transferred from Iraq to Afghanistan. 
Weaknesses in data systems used to retrograde equipment from Iraq that we cited in our September 2008 report remain uncorrected, and a new problem has surfaced. In our September 2008 report, we noted that when theater provided equipment reached Kuwait, the 401st Army Field Support Brigade, which received the equipment, had to undertake two concurrent manual data entry processes in separate logistics information systems to establish accountability and visibility for the equipment. We also reported that the process for requesting disposition instructions was lengthy and involved sending spreadsheets populated with equipment data from Kuwait to the appropriate Life Cycle Management Command in the United States and then back to Kuwait. According to DOD officials we interviewed in Iraq and Kuwait in July 2009, the manual manipulation of data and the extensive reliance on spreadsheets still occur, and other DOD officials stated that any delay in retrograding equipment is problematic given the rapid pace of the drawdown. In addition, during our recent field visits we identified another data system problem that prevented the timely issuance of disposition instructions for equipment identified for retrograde from Iraq. Specifically, due to a data corruption error that occurs during data transfer between two legacy Army systems, Army officials in Kuwait were unable to issue orders to move the equipment to its designated destination. Officials stated that this problem had a negative effect on their ability to retrograde equipment, and officials in the United States and Kuwait worked together during regularly scheduled meetings to discuss issues delaying the transmission of these instructions. To work around the problem, programmers had to implement manual fixes for each individual set of disposition instructions. According to Army officials, a solution to correct the data corruption error has been implemented since our visit. 
However, we have not been able to validate this claim and, according to Army officials, similar problems with legacy systems occur regularly. Higher projected flows of theater provided equipment during later phases of the drawdown may also put the timely issuance of disposition instructions at risk. We reported in 2008 that receipt of disposition instructions for some rolling stock took anywhere from three to nine months, resulting in equipment being held in Kuwait awaiting disposition instructions. Although officials told us during our July 2009 visit to Kuwait that this situation had improved, the data used to support that claim may be unreliable. With increased flows of equipment, the inefficiency resulting from the reliance on the entry of data by hand will be magnified. The higher volume of equipment requiring disposition instructions may stress the manual processes currently being used, thereby increasing the risk that more time will be needed to request and receive disposition instructions, which may again cause equipment to sit idle in Kuwait. While the Theater Provided Equipment Planner may improve the efficiency of the retrograde process by automating the issuance of disposition instructions that would otherwise need to be issued through the existing manual process, the extent to which items will be retrograded using the new system, especially as the volume of equipment being retrograded increases during later phases of the drawdown, is unclear. In addition, the increased volume of equipment projected for the later phases of the drawdown will require additional contractor personnel to make the manual entries necessitated by the system incompatibility issues. The execution of the drawdown may also be affected by the lack of a complete and accurate inventory of three broad types of equipment: contractor acquired property, non-standard equipment, and shipping containers. 
According to Army data, these three types of equipment comprise 28 percent of the total DOD property in Iraq. To facilitate a more complete and accurate record of equipment in Iraq, MNF-I required its subordinate units to complete a 100 percent inventory of their equipment, identify excess equipment that can be immediately retrograded, and account for previously undocumented equipment by June 27, 2009. Undocumented equipment, however, continues to be identified and added to the inventory. According to MNF-I guidance, the command’s ability to meet drawdown requirements and timelines depends upon establishing an accurate and complete inventory of the amount and types of equipment that will have to be retrograded from Iraq. In that vein, MNF-I ordered a 100 percent inventory of all U.S. government owned equipment in Iraq. Overall, DOD officials stated that property accountability has improved in Iraq since 2006, especially with regard to theater provided equipment. The guidance calling for completion of an inventory by June 27, 2009, was intended to account for undocumented items. When these previously undocumented items are entered onto property books, commanders become accountable for them. The intent is to facilitate drawdown planning and execution by providing an incentive for commanders to take action on previously undocumented items that otherwise might not be factored into retrograde plans. However, although MNC-I states that the inventory is complete, previously undocumented equipment continues to be found every month. Until all undocumented equipment is included in the inventory, DOD’s information on the number of items requiring retrograde remains incomplete, which adds risk to meeting the drawdown timelines. During our visit to the CENTCOM area of operations in July 2009, officials in Iraq and Kuwait stated that, of all categories of equipment, they had the least visibility over contractor acquired property. 
Army officials stated, however, that as of October 2009, this situation had improved. While contractors are typically required under the terms of their contract to maintain property accountability over this equipment, there is no standardized process for doing so, limiting MNF-I’s and U.S. Army Central’s accountability and visibility over this equipment. During the drawdown, accountability of contractor acquired property is important to ensure the efficient allocation of the transportation assets used to retrograde this equipment. U.S. Army Central officials also noted that they lack full accountability and visibility over non-standard equipment in Iraq, adding another potential risk to their ability to efficiently retrograde this equipment out of Iraq. Army officials have estimated that there could be as many as 360,000 pieces of non-standard equipment in Iraq, but concede that they have low confidence in property accountability for non-standard equipment. Moreover, Army and U.S. Army Central officials note that obtaining an accurate inventory of non-standard equipment is complicated by the fact that many of these items have multiple identification numbers and that commanders have significant flexibility in accounting for this equipment. For example, a piece of non-standard equipment that is valued at greater than $5,000 must be recorded on a military unit’s property book, but after the value of that item depreciates below the $5,000 threshold, it is left to the individual commander’s discretion whether to continue recording the property. Not knowing the precise amount of non-standard equipment in Iraq that will need to be retrograded contributes to planning uncertainty for the organizations tasked with executing the drawdown, and may put at risk the ability to position transportation assets and personnel to manage the many aspects of the retrograde process in time to facilitate a steady flow of equipment from Iraq. 
Another factor compounding planning uncertainty is the lack of an accurate accounting of the quantity and serviceability of shipping containers in Iraq. Containers are unique in that they are not only items that have to be retrograded from Iraq but also a primary vehicle for shipping other types of equipment out of Iraq. According to U.S. Army Central officials, the data system in place to track containers is inaccurate and incomplete because, among other factors, it must be manually updated every time a container arrives at or leaves a specific location. Reports based on data from this system indicate that the system is at best 25 percent accurate. Furthermore, updates to the location and status of containers may not occur routinely because of personnel shortages. For example, according to officials in charge of container management, 200 containers listed as located in Iraq were, in fact, in Afghanistan. Moreover, in addition to inaccurate data on the number of containers and their locations, officials also lack data on the serviceability of containers. In an effort to rectify this problem, MNC-I issued an order directing a 100 percent inventory of containers, including instructions for reporting the serviceability of the containers. Subsequent reports indicate that approximately 54,000 containers had been physically inventoried as of August 2009, almost 25,000 fewer than the number of containers in the data system. Of the containers entered in the data system, the location of over 7,000 could not be verified and the serviceability of 39 percent remained unknown. Moreover, many containers in Iraq are being used for storage, office space, and living quarters, among other purposes, yet are not documented as such, and may not immediately be available for retrograde. Due to limited container accountability, MNF-I and U.S. 
Army Central’s ability to plan for the steady flow of equipment out of Iraq necessary to meet the drawdown timelines may be at risk. As I have stated today, much has been done in Iraq and Kuwait to facilitate the drawdown effort. However, the effective execution of the drawdown may be compromised by several complex challenges, notably the identification of contractor requirements needed for the drawdown and the development of plans to address the challenges created by key contract transitions and to mitigate the risk of waste caused by an inadequate number of trained personnel to oversee contracted services. Additionally, the effective execution of the drawdown is dependent upon decisions about what equipment can and will be transferred to the Government of Iraq, the clear establishment of transfer mechanisms, and final decisions on the disposition of non-standard equipment. Moreover, longstanding information system incompatibility issues and a less-than-comprehensive inventory of some types of equipment in Iraq may hamper the drawdown. For further information about this statement, please contact William M. Solis at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Vincent Balloon, Carolynn Cavanaugh, Carole Coffey, Timothy DiNapoli, Laurier Fish, Walker Fullerton, Guy LoFaro, Greg Marchand, Jim Melton, Emily Norman, Jason Pogacnik, David Schmitt, Cheryl Weissman, Gerald Winterlin, and Gwyneth Woolwine. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The United States and the Government of Iraq have signed a Security Agreement calling for the drawdown of U.S. forces from Iraq. Predicated on that agreement and U.S. Presidential guidance, Multi-National Force-Iraq (MNF-I) has issued a plan for the reduction of forces to 50,000 U.S. troops by August 31, 2010, and a complete withdrawal of forces by the end of 2011. The drawdown from Iraq includes the withdrawal of approximately 128,700 U.S. troops and over 115,000 contractor personnel, the closure or transfer of 295 bases, and the retrograde of over 3.3 million pieces of equipment. Today's statement will focus on (1) the extent to which the Department of Defense (DOD) has planned for the drawdown in accordance with timelines set by the Security Agreement and presidential directive; and (2) factors that may impact the efficient execution of the drawdown in accordance with established timelines. This statement is based on GAO's review and analysis of DOD and MNF-I plans, and on interviews GAO staff members conducted with DOD officials in the United States, Kuwait, and Iraq. It also draws from GAO's extensive body of issued work on Iraq and drawdown-related issues. While DOD's primary focus remains on executing combat missions and supporting the warfighters in Iraq, several DOD organizations have issued coordinated plans for the execution of the drawdown within designated time frames. In support of these plans, processes have been established to monitor, coordinate, and facilitate the retrograde of equipment from Iraq. DOD's organizations have reported that their efforts to reduce personnel, retrograde equipment, and close bases have thus far exceeded targets; since May 2009, for example, DOD reports that the number of U.S. servicemembers in Iraq has been reduced by 5,300, and another 4,000 are expected to be drawn down in October. However, many more personnel, equipment items, and bases remain to be drawn down. For U.S. 
forces, contractor personnel, selected vehicles, and bases, the graphic below depicts drawdown progress since May 2009, as well as what remains to be drawn down by August 31, 2010, and December 31, 2011, respectively. Efficient execution of the drawdown from Iraq, however, may be complicated by crucial challenges that, if left unattended, may hinder MNF-I's ability to meet the time frames set by the President, the Security Agreement, and MNF-I's phased drawdown plan. First, DOD has yet to fully determine its future needs for contracted services. Second, the potential costs and other concerns of transitioning key contracts may outweigh potential benefits. Third, DOD lacks sufficient numbers of contract oversight personnel. Fourth, key decisions about the disposition of some equipment have yet to be made. Fifth, there are longstanding incompatibility issues among the information technology systems that may undermine the equipment retrograde process. Sixth, DOD lacks precise visibility over its inventory of some equipment and shipping containers. While much has been done to facilitate the drawdown effort, the efficient execution of the drawdown will depend on DOD's ability to mitigate these challenges. We will continue to assess DOD's progress in executing the drawdown from Iraq and plan to issue a report.
|
To emphasize fair and responsible use of the False Claims Act, DOJ issued “Guidance on the Use of the False Claims Act in Civil Health Care Matters” on June 3, 1998. The guidance instructs DOJ attorneys and U.S. Attorneys to determine, before they allege violations of the act, that the facts and the law sufficiently establish that the claimant knowingly submitted false claims. The guidance covers all civil health care matters and has specific provisions to address national initiatives. DOJ defines these initiatives as nationwide investigations stemming from an analysis of national claims data, indicating that numerous, similarly situated health care providers have engaged in similar conduct to improperly bill government health care programs. Prior to alleging a violation of the act in connection with a national initiative, attorneys shall, in general, use contact letters to notify a provider of a potential liability and give the provider an opportunity to respond before a demand for payment may be made. The guidance contains other safeguards to ensure the fair treatment of hospitals. For example, U.S. Attorneys’ Offices must consider alternative remedies to the use of the False Claims Act, including administrative remedies such as recoupment of overpayments, program exclusions, and other civil monetary penalties. In addition, they must also consider a provider’s ability to pay; the effect on the community served by the provider, particularly for rural and community hospitals; and the extent of provider cooperation in the matter. The guidance also requires the formation of a working group to coordinate each national initiative. The working groups, composed of DOJ attorneys and Assistant U.S. Attorneys with expertise in health care fraud control, must develop “initiative-specific guidance” to provide direction and support to the U.S. Attorneys’ Offices that are participating in the initiative. 
For example, working groups may prepare a legal analysis of pertinent issues, provide a summary of Medicare claims data indicating potentially significant billing errors, and develop an investigative plan. The working groups track the participating offices’ progress and respond to their questions as each initiative proceeds. Ongoing contacts can help assure the working group that the offices are following the guidance. The two national initiatives that currently have the most active investigations are the PPS Transfer and Pneumonia Upcoding projects. The PPS Transfer initiative was developed from a series of audits and joint recovery projects by the Department of Health and Human Services Office of Inspector General (HHS-OIG), the Health Care Financing Administration (HCFA)—the agency within HHS that administers the Medicare program—DOJ, and the claims processing contractors to identify improperly coded transfers and recover overpayments from hospitals. The Pneumonia Upcoding initiative targets inappropriate coding of inpatient hospital claims for a relatively rare bacterial form of the disease that is more costly to treat—approximately $2,500 more per claim—than the more common forms of pneumonia. The initiative assesses whether hospitals submitted claims for a more complex form of the disease than was supported by the patient’s medical records. This is the fourth report we have issued regarding DOJ’s implementation of its False Claims Act guidance and its efforts to oversee compliance. In February 1999, we issued an early status report on DOJ’s initial efforts to implement the guidance. In August 1999, we reported that DOJ’s process for reviewing implementation of the guidance appeared superficial and that U.S. Attorneys were not consistent in their application of the guidance. However, in March 2000, we reported that DOJ had taken steps to improve compliance with its False Claims Act guidance. We noted that DOJ had strengthened its oversight of U.S. 
Attorneys’ Offices and that the offices that we had previously found to be slow in implementing the guidance appeared to have addressed their shortcomings. We also found that the working groups were providing legal and factual material on each national initiative for U.S. Attorneys’ Offices to consult prior to contacting hospitals about potential False Claims Act liability. DOJ has demonstrated its continued commitment to promoting the importance of compliance with the False Claims Act guidance at its U.S. Attorneys’ Offices. In response to our prior recommendations, DOJ revamped its process for periodically evaluating the compliance of these offices and instituted an annual compliance certification requirement for all U.S. Attorneys’ Offices participating in national initiatives. These steps have helped to encourage compliance. We found that DOJ’s periodic evaluations of the U.S. Attorneys’ Offices now incorporate a more substantive examination of compliance with the guidance. The review process, which was instituted in February 1999, initially contained only one interview question relating to the guidance, but DOJ has since expanded its evaluation procedure as it relates to the guidance. By April 2000, the review included a number of questions devoted to the guidance in both the previsit questionnaires and the interviews conducted during on-site visits. Respondents must now describe in detail the activities and procedures each office has in place to ensure that the attorneys are informed of the guidance and that the office is in compliance. Of the 16 full evaluations that took place between April 2000, when the evaluation process was expanded, and the end of the calendar year, none resulted in a determination that an office was out of compliance with the guidance. 
Through our discussions with DOJ officials and our review of relevant materials, we were able to verify that the evaluations provide an effective mechanism for identifying and documenting areas of concern and potential vulnerability, such as the need for additional information on the guidance for attorneys. No such findings were made during reviews of U.S. Attorneys’ Offices currently participating in a national initiative. U.S. Attorneys’ Offices must respond to weaknesses identified in the review, and the Executive Office for U.S. Attorneys subsequently verifies that, if needed, corrective action is taken. Our review showed that, when weaknesses were identified, this process was followed and implementation of corrective actions was monitored. DOJ’s annual requirement that all U.S. Attorneys’ Offices involved in national civil health care fraud initiatives certify their compliance with the guidance appears to have promoted compliance at the offices we visited. DOJ officials told us that all U.S. Attorneys’ Offices participating in civil health care matters had attested to their compliance for the period ending December 31, 2000. Although DOJ has not required offices to document their compliance with the guidance as part of the certification process, the offices we visited had either documented their compliance in individual case files or instituted a review process under the direction of their office’s Civil Chief. For example, every closed case file we reviewed in one office contained a certification that the case had been conducted in accordance with the guidance. Based on our review of the supporting documentation in these case files, we found no basis to dispute the office’s compliance certifications. Another office directed an attorney not involved in the national initiatives to review case files for evidence of compliance. 
The attorney then prepared a report for the review and approval of the Civil Chief prior to completing the annual compliance certification. We found this report provided detailed support for the attorney’s conclusion that the cases were handled in a manner consistent with the guidance. Based on our analysis of working group materials and review of case files at four offices, we believe that DOJ is following its guidance as it pursues the PPS Transfer and Pneumonia initiatives. The working groups have prepared material for the U.S. Attorneys’ Offices on the legal and factual bases for contacting hospitals about potential False Claims Act liability for each initiative. In addition, the working groups have prepared model contact letters and other documents to ensure that hospitals are contacted in a manner consistent with the guidance. The U.S. Attorneys’ Offices we visited consulted the working group materials and conducted independent investigations so that their settlement terms could be adjusted to reflect each hospital’s situation. Although the AHA and some state hospital association representatives remain concerned that the False Claims Act is inappropriately being applied to inadvertent billing errors, they did not identify specific instances where a particular U.S. Attorney’s Office has acted inconsistently with the guidance in either national initiative. The working groups prepared extensive initiative-specific guidance and memoranda outlining the relevant legal and regulatory requirements underlying the initiatives. After consulting with the HHS-OIG and HCFA, the working groups analyzed national and hospital-specific claims data. The U.S. Attorneys’ Offices were then able to use these data as a starting point to begin investigating whether specific hospitals had knowingly submitted false claims. The PPS Transfer working group conferred with the HHS-OIG regarding its prior audits of PPS hospitals. 
Similarly, the Pneumonia Upcoding working group obtained extracts of national inpatient claims data from HCFA and reviewed these data with HCFA specialists, the HHS-OIG, and an independent consultant to ensure their validity. We found that in addition to providing resources and coordinating the initiatives, the working groups play an active role in monitoring the progress of the offices participating in the initiatives. We were able to verify that participating districts consult with working group members on an ongoing basis throughout the development and settlement of their cases. This exchange of information allows the working groups to assess compliance with the guidance. Our review of case files at the four offices we visited suggests the interactions between these offices and the hospitals they investigated were consistent with the guidance. In reviewing records relating to initial contacts from the U.S. Attorneys’ Offices and hospitals, the investigations, and settlements, we observed that the offices were attentive to hospitals’ individual circumstances and that they varied their actions accordingly, as required by the guidance. For example, our review of correspondence showed that the contact letters used by these four offices were based on the model letters distributed by the working groups. Consistent with the guidance, the letters we reviewed informed hospitals of potential False Claims Act liabilities but did not make demands for payment and gave hospitals the opportunity to meet to discuss the matters further. We found that U.S. Attorneys’ Offices we visited did not pursue hospitals identified by the working group data as a matter of course. Instead, the offices conducted their own reviews of each hospital’s billing patterns and circumstances, as the guidance requires. 
These efforts sometimes revealed other explanations for erroneous billing at specific hospitals, and the hospitals repaid the overpayments with no imposition of damages or administrative sanctions. For example, one Assistant U.S. Attorney reviewed the data supplied by the PPS Transfer working group and found that, while the billing patterns for two hospitals indicated incorrectly coded cases, they did not necessarily reflect “knowing” behavior, as defined by the False Claims Act. Without initiating a formal investigation by sending a contact letter, the office held discussions with management at both hospitals to solicit possible explanations that might account for these billing aberrations. These interviews revealed that the hospitals had not been informed that the facility they were transferring patients to had changed its payment status. The hospitals thought they were discharging patients to a rehabilitation facility—in which case they would have been entitled to receive the full inpatient payment amount—when in fact the facility had become a PPS hospital and the partial-payment rule applied. Because the Assistant U.S. Attorney determined that the improper payments were not knowingly submitted, there was no potential violation of the False Claims Act and no contact letter was sent. In another instance, a study conducted for a U.S. Attorney’s Office indicated that the claims data for one hospital reflected improper billing. The office’s investigation determined that the hospital’s inaccurate coding was not the result of deliberate action or recklessness on the part of the hospital, but rather the mistakes of one individual member of the coding staff. This hospital refunded the excess reimbursements to the Medicare program and was not assessed damages. Offices we visited routinely considered unique factors surrounding the case as well as each hospital’s circumstances during the settlement process. 
In one case, an office settled for lower damages because the hospital had voluntarily disclosed that it had a billing problem. The hospital’s cost of performing its own audit was deducted from the settlement amount. In another case, the office reduced its proposed settlement to reflect the hospital’s cooperation in voluntarily conducting a self-audit as well as its unique status as the only provider in an area of the state. While working groups are not authorized to approve or disapprove settlement agreements, we found that the U.S. Attorneys’ Offices we visited kept them informed of the status of cases nearing settlement and shared proposed settlement agreements with them. For example, one proposed settlement was accompanied by a detailed analysis documenting how it was handled in accordance with each element of the guidance. Our review of closed cases also showed that the working groups were given an opportunity to comment on the proposed settlement before the agreements were finalized. During our review, we contacted representatives from several state hospital associations and the AHA. Most continued to voice concerns over the appropriateness of DOJ’s national initiatives. They told us that they generally believe that the vast majority of overpayments made to hospitals reflect the complexity of the Medicare billing system and are not an attempt to defraud the program. Therefore, they suggested that these matters be handled by fiscal intermediaries without the threat of harsh penalties. Hospital association representatives also raised several concerns. They questioned the use of national normative claims data to target hospitals on the basis that this process fails to take into account each hospital’s unique circumstances—such as patient demographics—which may account for discrepancies between a hospital’s billing pattern and broader, national trends. 
This concern is particularly applicable to the Pneumonia Upcoding project, in which hospitals are identified for review following a comparison of hospital and national claims data. While we did not independently analyze the methods used to prepare the claims data for the pneumonia project, information on each hospital’s specific billing pattern for complex pneumonia and the national norm for that diagnosis was presented in each of the contact letters we saw. During our site visits we saw evidence that the claims data were used as the starting point for further investigation. AHA representatives expressed concern that the data used to select hospitals for the investigation of allegedly upcoded pneumonia claims were drawn from a different time period than the period used as the national norm for comparison purposes. DOJ officials stated that this was not the case. Furthermore, the claims data that DOJ relied upon were obtained from the HHS-OIG, and HCFA and an independent claims review consultant were involved with extracting and analyzing the pneumonia claims. In addition, AHA representatives stated that DOJ is engaging in other projects that have national implications but have not been recognized as national initiatives. DOJ officials explained that they may have multidistrict initiatives underway involving subjects under investigation in multiple jurisdictions, but that these projects do not meet DOJ’s definition of a national initiative. DOJ has instituted written guidelines specifically addressing the proper coordination of multidistrict investigations, and, like all civil health care fraud matters, multidistrict initiatives must be conducted in accordance with the guidance. Our work for this report involved no assessment of compliance with the guidance in such cases. 
Another concern raised by hospital association representatives was that DOJ often included burdensome corporate integrity agreements in national initiative settlements at the insistence of the HHS-OIG. The representatives suggested that DOJ’s willingness to accommodate the HHS-OIG violates the part of the guidance that requires that an individual provider’s unique circumstances be taken into account when reaching a settlement. They consider the imposition of corporate integrity agreements to be particularly troublesome in cases where hospitals settled for simple repayment without False Claims Act damages and had not demonstrated serious billing problems. However, at the four U.S. Attorneys’ Offices we visited, we found that 4 of the 11 closed PPS Transfer and Pneumonia Upcoding cases we reviewed were resolved without the imposition of corporate integrity agreements. Although corporate integrity agreements were imposed in the remaining cases, all of these cases required repayment of the original overpayment and additional damages. The HHS-OIG makes an independent decision whether to require a corporate integrity agreement as part of a settlement; it also has its own guidance addressing participation in national initiatives. Representatives from the state hospital associations we contacted did not have specific complaints regarding the way U.S. Attorneys’ Offices were conducting either the PPS Transfer or the Pneumonia Upcoding initiatives. These associations also did not identify instances of U.S. Attorneys’ Offices failing to comply with the guidance. Some associations acknowledged the willingness of the offices to develop an acceptable investigative process. In addition, they noted that some Assistant U.S. Attorneys have developed extensive knowledge about Medicare billing requirements and provide reasonable opportunities to present their positions. 
We will continue to solicit the concerns of the hospital community regarding DOJ’s implementation of the False Claims Act guidance when we prepare our 2002 mandated report. DOJ seems to have made substantive progress in ensuring compliance with the False Claims Act guidance. It has strengthened its oversight of U.S. Attorneys’ Offices. The review of each district’s compliance now appears to be an integral component of the periodic evaluation conducted at all U.S. Attorneys’ Offices. These evaluations seem to be effective in identifying areas of vulnerability leading to corrective action taken by the local district. Further, each U.S. Attorney’s Office participating in a national initiative is required to certify that it has complied with the guidance on an annual basis. DOJ’s implementation of the two most recent initiatives, the PPS Transfer and Pneumonia Upcoding projects, appears to be consistent with the guidance, based on our visits to a limited number of offices. Each working group has taken the lead in developing the legal and factual basis for its initiative. Their development of detailed claims data and other relevant materials, such as model contact letters, has helped to promote consistency among the districts in their implementation of the initiatives. In our visits to several U.S. Attorneys’ Offices, we found that attorneys were conducting their investigations in accordance with the guidance. They coordinated their activities with the working group to ensure consistency, but took into account the unique factors surrounding each hospital’s circumstances. This flexibility is in keeping with the principles outlined in the guidance. We provided a draft of our report to DOJ for comment. Officials from DOJ’s Executive Office for U.S. Attorneys and its Civil Division provided oral comments, in which they generally concurred with our findings and conclusions. They also provided several technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the Honorable John Ashcroft, Attorney General of the United States, the Honorable Tommy Thompson, Secretary of HHS, and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please call me at (312) 220-7600, or Geraldine Redican-Bigott at (312) 220-7678. Other major contributors were Suzanne Rubins and Frank Putallaz. Medicare Fraud and Abuse: DOJ Has Made Progress in Implementing False Claims Act Guidance (GAO/HEHS-00-73, Mar. 31, 2000). Medicare Fraud and Abuse: DOJ’s Implementation of False Claims Act Guidance in National Initiatives Varies (GAO/HEHS-99-170, Aug. 6, 1999). Medicare Fraud and Abuse: Early Status of DOJ’s Compliance With False Claims Act Guidance (GAO/HEHS-99-42R, Feb. 1, 1999). Medicare: Concerns With Physicians at Teaching Hospitals (PATH) Audits (GAO/HEHS-98-174, July 23, 1998). Letter to the Committee on Ways and Means, B-278893, July 22, 1998. Medicare: Application of the False Claims Act to Hospital Billing Practices (GAO/HEHS-98-195, July 10, 1998).
|
In June 1998, the Department of Justice (DOJ) issued guidance on the fair and responsible use of the False Claims Act in civil health care matters. This report evaluates DOJ's efforts to ensure compliance with the guidance and focuses on the application of the guidance in two recent DOJ initiatives: the Prospective Payment System (PPS) Transfer and Pneumonia Upcoding projects. GAO found that DOJ has taken steps to further strengthen its oversight of compliance with its False Claims Act guidance. These steps include (1) reviewing each U.S. Attorney's Office's compliance with the guidance as part of the periodic evaluation of all U.S. Attorneys' Offices, (2) requiring all U.S. Attorneys' Offices involved in civil health care fraud control to certify their compliance with the guidance, (3) forming working groups to coordinate national initiatives, and (4) maintaining ongoing contacts with participating U.S. Attorneys' Offices to help ensure that they are complying with the guidance. GAO also found that DOJ is implementing the PPS Transfer and Pneumonia Upcoding projects in a manner consistent with the guidance.
|
Services purchased from foreigners are considered U.S. imports: a U.S. import occurs when a U.S.-based company pays for a service produced abroad and supplied to the United States (either to the company or directly to its customers, as in the case of the call center). Although the service (e.g., a computer program, a database, or a telephone call) may be supplied digitally through telecommunication lines, rather than physically crossing the border like a good (e.g., an automobile import), it still is supplied by a foreign-based producer and paid for by a U.S.-based importer. Most U.S. domestic output consists of services. In 2002, services-producing industries accounted for about 78 percent of the U.S. private sector economy (when measured in terms of gross domestic product) compared to 22 percent for goods-producing industries. (See fig. 1.) Similarly, U.S. private sector employment is concentrated in services-producing industries (79 percent) compared to goods-producing industries (21 percent). However, it is important to note that goods-producing industries may also employ workers in “services” occupations (e.g., computer programmers or accountants). Services are a relatively small share of U.S. imports, compared with their share of the U.S. economy. Services make up about 16 percent of total U.S. imports, compared with 84 percent for goods. (See fig. 2.) Services make up a greater share of U.S. exports but still account for only 30 percent of the total. Services trade may be small relative to the size of services output in the U.S. economy partly because some services (e.g., haircuts, housing, and hospitals) are difficult or impossible to trade internationally. Overall, U.S. imports of services accounted for only about 3 percent of U.S. consumption of services in 2002. According to the World Trade Organization, the United States is the world’s largest importer of commercial services, with 13.3 percent of the world’s share. (See fig. 
3.) The United States is the world’s largest exporter of commercial services, as well. Overall, the United States exports more services than it imports and therefore maintains a surplus in services trade. The term “offshoring” generally refers to an organization replacing services produced domestically with imported services. However, no commonly accepted definition for offshoring exists, and the term has been used in public debate to include several other types of business activities. Services offshoring has been facilitated by improvements in information technology, decreasing data transmission costs, and expanded infrastructure in developing countries. Organizations may choose to move some business functions, such as accounting and payroll operations, offshore to gain certain benefits, such as lower labor costs and access to skilled workers. Nevertheless, organizations also face risks, which influence their decisions whether or not to offshore certain business functions. Business functions that are offshored tend to share some common characteristics related to job content and customer focus. Based on BLS data and other sources, some analysts have also identified occupations that appear to be vulnerable to being offshored. U.S. organizations, such as private firms or governments, may decide to import certain services from offshore that they had previously obtained domestically (whether through their own production or from another domestic firm). This is commonly called offshoring. However, no standard definition of offshoring exists, and the term has been used broadly to discuss a range of business activities related to international trade and foreign investment. In addition, offshoring is frequently defined as imports or investment that result in the displacement of U.S. production and employment. In table 1 we present several types of business and government activities associated with offshoring.
We also indicate the potential data sources for each type of activity that we discuss later in this report. The first two activities in the table are widely associated with offshoring. The third, fourth, and fifth examples show more complex business activity, which may involve aspects of offshoring. The sixth example involves government offshoring activities. Definitions of offshoring and related business activities are discussed in more detail in appendix II. All the activities listed in table 1 also have the potential to impact a variety of economic measures. These impacts are typically identified through economic modeling and not through direct data reporting. This is either because the impacts are difficult to capture directly or because they are one of many impacts on broad, aggregate measures of economic activity. These measures can include, but are not limited to, consumer and producer prices, productivity, profits, job creation, and economic growth. Offshoring of services has been encouraged by information technology (IT) improvements and expected business benefits. In particular, recent developments in the telecommunications industry, such as technology improvements, infrastructure growth in developing countries, and decreasing data transmission costs, have facilitated the use of offshoring. First, according to several studies, improvements in telecommunications capabilities, such as advances in routing and switching technologies that enable the distribution of voice and data services, have increased the reliability and service quality of global voice, data, and Internet communications. Second, the growth of the global telecommunications infrastructure has provided developing countries with cost-effective infrastructure options, such as wired landline and satellite communication services for communicating across national borders.
Third, global data traffic has substantially increased since the early 1990s, while the cost of transporting data has declined, thereby making the offshoring of services that rely on the transmission of data more cost effective. Other IT advances, such as greater standardization of business applications and network protocols, have increased system interoperability and thus further facilitated offshore sourcing. For example, universal computing standards and protocols, such as the Transmission Control Protocol/Internet Protocol, have enabled businesses to communicate worldwide through the use of e-mail and collaborative tools, such as video conferencing, instant messaging, and shared whiteboard technologies. Additionally, the worldwide use of the personal computer in conjunction with the global availability of the Internet has enabled organizations to digitally share and transmit documents over private networks using encryption applications for added security. According to a technology research firm’s forecast, the use of private networks will continue to increase due to widely available network-based solutions that support increased access options, security, and new applications. In addition to technological factors that allow services to be conducted offshore, an organization may choose this option because it expects to realize various benefits. According to several business studies, the primary reason organizations engage in offshore sourcing is to reduce costs. Specifically, due to competitive pressures and increasing customer demand for innovative products, businesses are using offshoring as a way to reduce their internal cost structures, such as sales, general, and administrative costs. The labor cost differential between the United States and developing nations can be significant.
According to a technology research firm, organizations that offshore accounting and customer service to China can potentially save 30 to 50 percent in labor costs compared to keeping those processes in Tokyo, London, or Chicago. Moreover, the hourly wage rate for programmers in the U.S. can be up to three times that of programmers in India. For example, a leading e-business software company reportedly was able to achieve 40-45 percent lower costs per overseas employee compared to hiring equivalent senior developers in the United States. Other expected benefits of offshoring include access to skilled workers and providers that use disciplined processes and the facilitation of a round-the-clock work schedule. For example, according to the National Association of Software and Service Companies, India’s chamber of commerce for the IT services and software industry, approximately 140,000 students graduated in an IT-related engineering field from degree and diploma colleges and universities in India during the 2003-2004 academic year. According to one study, a media and publishing company incorporated highly skilled overseas senior developers, architects, and project managers into its Web site development project, which reportedly led to an accelerated delivery schedule, reduced costs, and increased customer service. In addition, as of July 14, 2004, of the 74 worldwide organizations that have been certified at the highest rating in the Capability Maturity Model Integration model created by the Software Engineering Institute at Carnegie Mellon University, 48 are headquartered outside the United States. This is important because our work and other best practices research have shown that the application of rigorous practices to the acquisition or development of IT systems or the acquisition of IT services improves the likelihood of success.
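The labor-cost arithmetic behind such claims is straightforward. In the sketch below, the hourly rates and overhead figure are hypothetical assumptions (only the three-to-one wage ratio comes from the text above); the overhead term anticipates the unforeseen-expense risk discussed later.

```python
# Hypothetical rates for illustration only; the 3x ratio is the figure cited above.
us_rate = 75.0                  # assumed U.S. programmer cost, $/hour
offshore_rate = us_rate / 3.0   # U.S. rate assumed "up to three times" the offshore rate

# Savings on wages alone, before any offshoring overhead.
gross_saving = 1 - offshore_rate / us_rate

# Overhead (telecom links, software licenses, travel) erodes the saving;
# $10/hour is an assumed, not reported, figure.
overhead = 10.0
net_saving = 1 - (offshore_rate + overhead) / us_rate

print(round(gross_saving, 3), round(net_saving, 3))  # 0.667 0.533
```

Under these assumed numbers, a two-thirds wage saving shrinks to roughly half once overhead is counted, consistent with the report's caution that expected cost reductions are not always met.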
Moreover, offshoring can facilitate operating on a 24-hour, 7-day schedule across numerous time zones, thereby allowing companies to meet worldwide customer needs. For example, according to one study, a financial services unit of a Fortune 50 company has operations in overseas countries that provide around-the- clock in-bound and out-bound call centers, accounting services, IT help desks, document storage, and software implementation. Although offshoring can be beneficial, organizations also face risks that are relevant to decisions about whether or not to offshore services. Commonly cited offshore sourcing risks include unrealized cost savings due to unforeseen expenses, geopolitical concerns, cultural differences, and infrastructure instability. For example, organizations that engage in offshoring can incur additional costs in conducting overseas business operations in order to, for instance, establish high-speed telecommunications links, acquire new software licenses, and pay for travel expenses. According to one study, expectations in cost reduction are not always met because outsourcing contracts can be developed with a poor understanding of current costs and little insight into how costs will change as the environment changes. In addition, it is important to consider the destination country’s stability, legal system, and contract enforcement in making offshoring decisions. For example, one factor in assessing the legal system is whether adequate intellectual property protections, such as laws and regulations, are in place to ensure that sensitive company data are protected from unauthorized disclosure or use. Cultural differences can also pose a potential risk because business attitudes, including timeliness and punctuality, country accents, and holiday schedules, may be different than those in the United States. 
For example, overseas call center and customer service employees have reportedly sometimes found it difficult to establish a rapport with consumers due to a lack of understanding of language accents. A leading financial services company reportedly requires its application managers to go through a cultural exchange program designed to foster a better understanding of domestic and overseas business norms. Lastly, despite public utility infrastructure improvements, some countries’ businesses still face infrastructure risks, such as reliance on energy, telephone, and network infrastructure that may be susceptible to intermittent disruptions and outages. Our prior work indicates the importance of organizations considering both the benefits and risks associated with sourcing decisions before adopting any particular approach, such as offshoring, into their business strategies and plans. Business functions and service occupations associated with offshoring, combined with other distinguishing process features, provide additional detail on offshoring of services. Business functions associated with offshoring tend to be those that are digitized, capable of being performed at a distance, and whose product delivery can be managed using relatively new forms of advanced telecommunications. Examples of these business functions include software programming and design, call center operations, accounting and payroll operations, medical records transcription, paralegal services, and software research and testing. According to some studies, the criteria for successful offshoring of services include business functions that involve (1) a high information content that can be standardized and digitized, (2) job processes that can be separated and documented step-by-step, and (3) no face-to-face customer service requirements.
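The three criteria can be read as a simple screening rule. The toy sketch below (field names and example entries are hypothetical, not drawn from the studies cited) illustrates how a business function might be checked against them:

```python
# Toy screen based on the three criteria cited above; all names and data are illustrative.
def offshorable(f):
    return (f["digitizable"]            # (1) standardizable, digitizable information content
            and f["documented_steps"]   # (2) processes separable and documented step-by-step
            and not f["face_to_face"])  # (3) no face-to-face customer service requirement

functions = [
    {"name": "payroll processing", "digitizable": True,  "documented_steps": True,  "face_to_face": False},
    {"name": "retail sales floor", "digitizable": False, "documented_steps": True,  "face_to_face": True},
]
print([f["name"] for f in functions if offshorable(f)])  # ['payroll processing']
```

The rule is deliberately crude: real sourcing decisions also weigh the cost, geopolitical, cultural, and infrastructure risks discussed above.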
Although occupations associated with services offshoring were predominantly in the IT sector, IT-enabled jobs are also vulnerable to offshoring and span several occupational classifications. These categories include business and financial operations, office and administrative support, medical transcriptionists, paralegals and legal assistants, and architecture and engineering. In comparing services offshoring to the parallel offshoring dynamic in the manufacturing sector, one recent study states that services offshoring is structurally simpler in terms of resource, space, and equipment requirements. The authors conclude that offshoring of services may therefore proceed more quickly. U.S. government data provide some insight into the trends in offshoring of services by the private sector, but they do not provide a complete picture of the business transactions that the term offshoring can encompass. In particular, they do not identify U.S. imports of services previously produced by U.S. employees. Similarly, federal procurement data on purchases of IT and other services provide some insights, but it can be difficult to determine where such work is performed. The available data indicate that the trend in offshoring shows little change over the past 5 years. U.S. government data provide some insight into the trends in offshore sourcing of services by the private sector, but they do not provide a complete picture of the business transactions that the term offshoring can encompass. The Department of Commerce’s Bureau of Economic Analysis (BEA) collects data on trade (imports and exports) in private services between the U.S. and foreign entities. BEA includes in “Total Private Services” trade five subcategories: travel, passenger fares, other transportation (e.g., freight and shipping), royalties and license fees, and “Other Private Services.” The category “Other Private Services” includes many of the services that are generally associated with offshoring.
Imports in this category have grown from $23.9 billion in 1992 to $69.4 billion in 2002. These imports represent about a third of 2002 services imports. The category “Other Private Services” is further divided into six subcategories: education; financial services; insurance services; telecommunications; business, professional and technical services; and other services. Services captured in the subcategory of “Business, Professional, and Technical” (BPT) services are those that are generally associated with offshoring, such as accounting and bookkeeping and computer programming services. BEA publishes detailed data annually for more than 20 types of BPT services. In 2002, total BPT services accounted for $37.5 billion, or 54 percent of “Other Private Services.” (See fig. 4.) The Department of Commerce’s trade data show that imports of services associated with offshoring are growing. For example, U.S. imports of BPT services grew from $21.2 billion in 1997 to about $37.5 billion in 2002, an increase of 76.9 percent. U.S. exports of BPT services increased 48.6 percent during this same period. It is important to note that these import data show that U.S. entities have been purchasing these services offshore, but they do not indicate whether these entities had previously been purchasing these services from domestic U.S. sources. In addition, BEA data differentiate between affiliated and unaffiliated trade, where affiliated trade occurs between foreign affiliates and their parent companies. In 2002, affiliated trade accounted for $26.8 billion, or 71 percent of all BPT services imports. Data for affiliated trade in BPT services are not broken down by country or by the particular subcategories of BPT services discussed below. Data for unaffiliated trade do provide this detail and show that U.S. imports of BPT unaffiliated services grew from $6.4 billion in 1997 to $10.7 billion in 2002, an increase of 67.2 percent. 
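The growth and share figures above follow directly from the cited dollar amounts. A quick sketch (using the rounded figures as reported, so small rounding differences are possible) reproduces them:

```python
def pct_change(start, end):
    """Percentage change between two period values, in percent."""
    return (end - start) / start * 100

# Figures as reported above, in billions of dollars.
bpt_imports = pct_change(21.2, 37.5)   # all BPT imports, 1997 -> 2002
unaffiliated = pct_change(6.4, 10.7)   # unaffiliated BPT imports, 1997 -> 2002
affiliated_share = 26.8 / 37.5 * 100   # affiliated share of 2002 BPT imports

print(round(bpt_imports, 1), round(unaffiliated, 1), round(affiliated_share))  # 76.9 67.2 71
```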
This partial list of subcategories under BPT services includes the following services associated with offshoring: accounting, auditing, and bookkeeping; architectural, engineering, and other technical; computer and data processing; database and other information; management, consulting, and public relations; and research, development, and testing. Certain unaffiliated BPT services imports—most notably accounting and auditing services; computer and data processing services; and research, development, and testing services—have grown rapidly in recent years. For example, imports of computer and data processing services grew from $636 million in 1997 to $1.5 billion in 2000 before declining to $1.1 billion in 2002, for an overall increase of 66.2 percent between 1997 and 2002. The increase in 2000 may be due in part to the Year 2000 date change crisis. U.S. firms, in response to a tight supply of computer programmers in the late 1990s, turned to companies principally located in India to make the code fixes needed to avert problems with computer systems when the year 2000 arrived. (See fig. 5.) Although much attention is currently focused on developing countries that are increasingly exporting services to the United States, Canada and the United Kingdom nevertheless remain the leading exporters of services to the United States, both for Total Private Services and the subcategory of unaffiliated BPT services. In 2002, Canada and the United Kingdom accounted for 43.6 percent of all imports of unaffiliated BPT services to the United States, and they were also major destinations for U.S. exports of these services. (See fig. 6.) As figure 6 also shows, India is ranked eighth among countries from which the United States imported unaffiliated BPT services in 2002. Data for some BPT services subcategories (e.g., computer and data processing services) are available by country. In some BPT services subcategories, imports from India have increased.
In particular, imports of India’s computer and data processing services rose from $8.0 million in 1997 to $133.0 million in 2000, but then declined to $76.0 million in 2002, for an overall increase of 850 percent from 1997 to 2002. (See fig. 7.) Besides importing services provided offshore, the United States is also a supplier of services to the rest of the world. These U.S. services exports include some services that can be characterized as “inshoring.” While we did not examine U.S. services exports in detail, some of these exports would contribute to domestic U.S. employment. In addition, the United States maintains a trade surplus in private services and most subcategories of services trade. BEA estimates that in 2002, the United States exported $279.5 billion and imported $205.2 billion in Total Private Services, for a surplus of $74.3 billion (down from a high of $87.9 billion in 1997). The average annual growth rate for U.S. Total Private Services from 1992 to 2002 was 5.6 percent for exports and 7.3 percent for imports. (See fig. 8.) See appendix III for a table on U.S. imports and exports by country of trade in business, professional, and technical services and for further details on the limitations of that data for analysis of offshoring. U.S. government data on direct investment abroad by U.S. multinational companies producing services abroad provide information on aspects of offshoring, such as supplier countries and the distribution of labor between parent companies and affiliates. U.S. direct investment in developing countries that are frequently cited as suppliers of offshore services (e.g., India, the Philippines, and Malaysia) is relatively small—about 4 percent or less of total U.S. direct investments in each case. U.S. 
direct investment in these countries tends to be concentrated in the manufacturing sector and, to a more limited extent, in certain services industries associated with offshoring, such as the professional, scientific, and technical industry, and the information industry. However, the majority of U.S. direct investment is concentrated in other developed countries. For example, 60 percent of U.S. direct investment abroad in 2002 was accounted for by the European Union, Canada, and Japan. Table 2 lists selected developed and developing countries and their share of total U.S. direct investment abroad in 2002 (the most recent year available), as well as these countries’ share of investment in different industries. See appendix IV for a table on U.S. foreign direct investment and further details of the limitations of that data for analysis of offshoring. Data on U.S. multinational companies’ operations also provide information on the distribution of labor and assets between the U.S.-based parent companies and their foreign-based affiliates. These data show that the share of these companies’ employment in the United States has declined somewhat over the past decade, although about 71 percent of their employment is still based in the United States and only 10 percent of their overseas employment is located in developing countries. (See table 3.) However, according to BEA, the labor force in low-wage countries grew at a faster rate (7 percent per year) than the labor force in high-wage countries (3 percent) from 1991 to 2001. Similarly, the great majority of U.S. companies’ assets are located in the United States (70 percent) or in other developed countries (26 percent), rather than in developing countries (4 percent). Data on operations of majority-owned foreign affiliates of U.S.
multinational companies indicate that they are primarily investing in overseas markets to produce services for those markets, rather than supplying services back to the United States. As figure 9 shows, except for a few countries (e.g., Israel, Bermuda, and Barbados), less than 15 percent of the sales of U.S. companies’ majority-owned foreign affiliates’ services are exported to the United States. Rather, most of the services sales take place in the foreign market in which the affiliate operates or in another foreign market. According to BEA, the available data on U.S. multinational companies’ operations do not show whether multinational companies’ new investments are replacing their U.S.-based operations or substituting for exports to foreign markets that would have been supplied by their U.S.-based operations. However, the data currently available do not show any significant shifts or sizable investment in developing countries that may be used as a platform for offshoring. As more recent data become available, they will provide additional insight into the importance of these trends. The total dollar value of the federal government’s services contracts with offshore performance or manufacture locations has increased over the past 5 years; however, relative to all federal contracts for services, the 5-year trend in offshoring is relatively stable. In the federal government, the General Services Administration’s Federal Procurement Data System (FPDS) is the central database of information on procurement actions. FPDS contains detailed information on contracting actions for amounts over $25,000, including the amount obligated, the types of goods and services purchased, and information on principal place of performance and country of manufacture. However, FPDS has limitations and may understate the total amount of IT and other services that are offshored by the federal government.
For example, some agencies are not required to report their procurement activities to FPDS, and the system excludes detailed information on contract actions of $25,000 or less and purchase card data. Moreover, as we have previously reported, because FPDS relies on federal agencies for procurement information, these data are only as reliable, accurate, and complete as the information provided by the agencies, and not all agency data are reliable. In particular, the principal place of performance for the service can be difficult to determine, especially when work is performed at multiple contractor and/or subcontractor locations. According to a GSA official responsible for this system, agencies may report company billing or home office addresses if the place of performance cannot be determined. Although a reliable total for the federal government’s offshoring activities is not available from FPDS, the FPDS data over the last 5 years are sufficiently complete and consistent to be used to illustrate trends. As shown in figure 10, from fiscal years 1999 through 2003, the total dollar value of all services contracting actions increased about 40 percent. Moreover, during the same period, the total dollar value of all services contracts with performance or manufacture locations in foreign countries increased by about 64 percent, from $6.4 billion in fiscal year 1999 to $10.6 billion in fiscal year 2003. However, the percentage of total dollars associated with foreign performance or manufacture locations relative to the total dollar value of all services contracts performed in all locations (U.S. and foreign) remained relatively stable, with a range of 5 percent to 7 percent over the 5-year period. Similarly, in the case of IT services alone, the percentage of total dollars associated with foreign performance or manufacture locations was relatively stable throughout the period, ranging from 1 to 3 percent of the total value of IT services contracts.
In addition, there were large dollar value fluctuations (both increases and decreases) from year to year. With respect to state governments’ procurement of services from offshore sources, comprehensive data depicting the extent to which offshoring is used do not exist. However, there are anecdotal accounts of the use of offshoring by state governments. For example, in response to a legislative request, one state asked all its cabinet agencies, statewide elected officials, and institutions of higher education whether they had knowledge of any contracts awarded by their respective organizations in which all or part of the work was being performed overseas. Responses showed that 29 of 42 organizations reported knowledge of some contract awards that involved overseas work, such as contracts for software development performed by an Indian subsidiary of a U.S. firm. Nevertheless, organizations representing state executive and legislative officials, chief information officers, and procurement officials told us that they had no comprehensive data, studies, or research that indicated how much state governments were using offshore sourcing in procuring IT and other services. Offshoring has direct, short-term effects on U.S. employment that available data can partially capture. One federal employment data series identifies some job layoffs that are attributable to offshoring. In contrast, other federal employment data series provide contextual information about changes in employment levels for various industries and occupations, including those that have been associated with offshoring. Private sector studies have sought to analyze not only the employment effects of offshoring but also the indirect, longer-term effects on the broader economy. The Department of Labor collects a range of labor market data that provide information on trends in employment, but generally its data series were not designed to identify causes for employment changes. 
As a result, these data do not lend themselves to providing information on the employment effects of offshoring. However, the Mass Layoff Survey provides some limited information on offshoring, and several other labor data series show general employment trends that provide a context for understanding offshoring’s effects. In addition to offshoring, other factors affecting employment trends in the last few years include the economic recession and the collapse of the dot.com bubble. The Labor data series include the following: Mass Layoff Survey (MLS). The MLS is a national survey that collects information on reasons for long-term job losses with reports published by the Bureau of Labor Statistics (BLS) on a quarterly basis. The survey is a federal-state program that tracks major job cutbacks based on state unemployment insurance databases. Establishments with over 50 employees that have at least 50 initial unemployment insurance claims during a 5-week period are contacted by the state agency. If the separations are for at least 31 days, data are collected from the employers on the total number of separations as well as the reasons for separation. The employers are asked to provide the reason for the layoff, and the state official then picks from a list of more than 25 possible reasons for the layoff action. Prior to 2004, one of these reasons was “overseas relocation,” allowing the MLS to capture limited data on offshoring activity. In January 2004, to enhance its collection of offshoring-related data, the BLS began to ask specific questions about job losses involving domestic and overseas work relocation. While this change will result in better information on offshoring in the future, the 2004 data on overseas relocation are not comparable to pre-2004 data. Current Employment Statistics (CES). The CES survey is an employer-based survey of payroll records that provides monthly data on the number of payroll jobs in nonfarm industries.
CES data, which cover more than 300,000 businesses on a monthly basis and provide employment statistics by industry, are often used as indicators of current economic trends. CES provides information on employment trends in industries, including those that have been associated with offshoring. Occupational Employment Statistics (OES). The OES program provides information on employment and wages by occupation. The OES survey gathers data from 400,000 establishments each year on employment and wages. The survey covers 400 industries, 23 major occupational groups, and more than 770 detailed occupations. Until 2001, the OES survey sampled about 400,000 establishments during the fourth quarter of each year. In November 2002, the OES survey began sampling about 200,000 establishments in November and May of each year. OES provides information on employment trends in occupations, including those that have been associated with offshoring. Employment Projections. BLS uses projections of the labor force and economic growth, as well as expert judgments about future trends in different occupations, to develop an occupational projection model. Because it is based on interviews with employers, MLS provides a vehicle for collecting direct, timely data on offshoring. Due to the MLS’s coverage limitations, however, its data should be viewed as an imperfect indicator of offshoring-caused job losses. MLS identifies only a portion of total layoffs because it does not include small establishments or layoffs involving fewer than 50 employees. For example, in 2003, the survey covered 4.6 percent of all U.S. establishments and 56.7 percent of all U.S. workers. In addition, some employers may be unwilling to provide information when interviewed about reasons for layoffs. For the first quarter of 2004, 7.2 percent of firms with mass layoff events refused to participate in the survey. Pre-2004 MLS data had additional limitations regarding reasons for layoffs. 
According to BLS officials, in surveys prior to 2004, offshoring may have been involved in some instances when reasons such as “financial difficulty,” “business ownership change,” or “reorganization within the company” were provided by MLS respondents. Even with these limitations, MLS data provide some information that is useful for understanding services offshoring. For example, the data show that “overseas relocation” was given as a reason for mass-layoff job loss for a small fraction of workers laid off during the 1996-2003 period—of 1.5 million layoffs reported in the 2003 MLS, 13,000 (0.9 percent) were reportedly due to overseas relocation. Almost all of these overseas-relocation layoffs (about 96 percent) occurred in the manufacturing sector. The data also indicate that layoffs associated with “overseas relocation” reported by MLS peaked in 2002 (after rising sharply in 2001) but declined in 2003. Preliminary data for the first quarter of 2004 show that of a total of 239,361 separations, 4,633 (or 1.9 percent) were attributable to offshoring. Domestic work relocation accounted for 9,985 separations (4.2 percent). Although general employment data such as CES are not designed to isolate job losses attributable to any specific causes, they can provide some contextual information relevant to understanding job losses. CES data indicate that overall employment, including industries associated with offshoring, began to decline after peaking in 2001. Figure 11 shows percentage changes in employment between March 2001 (the beginning of the recession) and June 2004 for selected industries associated with offshoring. Job declines after March 2001 varied widely among industries associated with offshoring and generally were more severe than declines in the overall private-sector economy.
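The MLS percentages cited above can be reproduced from the reported separation counts; the sketch below uses the rounded counts as given:

```python
def share(part, total):
    """Part as a percentage of total."""
    return part / total * 100

# Separation counts as reported above.
overseas_2003 = share(13_000, 1_500_000)   # 2003 layoffs due to overseas relocation
overseas_q1_2004 = share(4_633, 239_361)   # Q1 2004, overseas relocation
domestic_q1_2004 = share(9_985, 239_361)   # Q1 2004, domestic relocation

print(round(overseas_2003, 1), round(overseas_q1_2004, 1), round(domestic_q1_2004, 1))  # 0.9 1.9 4.2
```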
For example, the average annual rate of decline over this period was 5.7 percent in computer systems design and related services industries and 7.9 percent for accounting and bookkeeping, while the decline in business support services was about 1.2 percent. During this period, total nonfarm employment increased by 0.2 percent. CES data show recent signs of improvement in employment. After falling in each of the first three quarters of 2003, total nonfarm employment edged up in the fourth quarter. (See table 4.) From the last quarter of 2003 until the second quarter of 2004, the overall economy gained about 1.1 million jobs (a 0.9 percent increase). By comparison, selected industries associated with offshoring saw deeper job losses and a slower, more volatile recovery. Job loss for these industries began to ease gradually in the second quarter of 2003. Overall, employment in the selected industries increased by about 21,000 jobs between the second quarter of 2003 and the first quarter of 2004 (a 0.3 percent increase). In a few of these industries, job losses appear to have reversed. The employment level in the architectural and engineering services industry began to rise in the second half of 2003. Other industries, such as legal services, computer systems design and related services, business support services, and Internet service providers, search engines, and data processing, experienced job gains in the second quarter of 2004. The changes in the national employment level over time reflect the net result of jobs added and jobs eliminated—for all causes. Services offshoring has frequently been associated with the jobless recovery of 2003, but studies suggest that much of the job loss is due to the 2001 recession, increases in productivity, and corrections in the wake of the dot.com bubble. However, general employment data do not make it possible to isolate job losses attributable to offshoring.
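The “average annual rate of decline” figures in this discussion are compound annual rates over the March 2001 to June 2004 span (about 3.25 years). A short sketch shows how such a rate is derived; the employment levels below are invented for illustration, not actual CES values:

```python
# Compound average annual growth rate between two employment levels,
# expressed in percent (negative values indicate decline).

def avg_annual_rate(start_level: float, end_level: float, years: float) -> float:
    return round(100 * ((end_level / start_level) ** (1 / years) - 1), 1)

# Hypothetical industry: 1,000,000 jobs in March 2001 falling to 826,000
# by June 2004 (3.25 years) implies roughly a 5.7 percent annual decline.
print(avg_annual_rate(1_000_000, 826_000, 3.25))   # -5.7
```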
It is also important to note that even if there were no net job losses during a particular time period—meaning that the number of job losses did not exceed the number of job gains—it is still possible that some jobs could have been lost as a result of offshoring. Most industries associated with services offshoring that saw sharp declines after 2001 had also experienced unusually strong job growth during the previous decade. As figure 12 shows, during this expansion, growth in employment was especially strong in IT-related sectors. For example, employment in the computer systems and design industry grew at an average annual rate of 11.1 percent, compared with 1.7 percent for total nonfarm employment. This supports the view that at least some of the recent job losses are due to the collapse of the dot.com bubble in IT-related sectors. Although some analysts have raised concerns that services offshoring has been affecting higher-skill, higher-paying jobs, the occupational earnings data show a mixed picture. As shown in table 5, OES data indicate that average wages of most occupations associated with offshoring are above the U.S. average wage. However, the average wages for the two largest occupations in terms of numbers of workers (office and administrative support and sales and related occupations) are below the U.S. average wage. Like CES, OES employment data are not designed to isolate employment changes attributable to specific causes. The data, however, offer recent employment trends by occupation relevant to understanding offshoring. The OES data indicate that some occupations associated with offshoring saw declines in employment, while others saw increases in employment between 2001 (the year of the recession) and 2002—the latest year for which comparable occupational data are available. Table 6 shows percentage changes in employment between 2001 and 2002 for selected occupations associated with offshoring.
Employment in management, computer and mathematical science, and architecture and engineering declined by 1.7 percent, 1.9 percent, and 3.1 percent, respectively. Employment in business and financial operations, legal, and life, physical, and social science categories increased by 2.0 percent, 2.8 percent, and 1.0 percent, respectively. On average, employment in all occupations declined by 0.4 percent. BLS’s employment projections for 2002 through 2012 provide some insight into the future trend of employment and, to some extent, of offshoring. Total employment is projected to increase by 21 million jobs to 165 million jobs in 2012. The projections, however, indicate a slower overall growth trajectory than the previous projections (for 2010), in part reflecting the impact of the 2001 recession. While total employment is projected to increase by 14.8 percent to 165.3 million jobs over the 2002 through 2012 period, this figure represents 2.4 million fewer jobs than the level projected for the 2000 through 2010 period. Projections indicate that IT-related occupations are expected to grow faster than most occupations by 2012. Seven of the 30 fastest-growing occupations are computer related, all requiring a bachelor’s degree or higher. The rate of growth for these occupations for the 2002 through 2012 projections is significantly lower than the rate projected for the period 2000 through 2010. Thirteen of the occupations with the largest projected declines are in office and administrative support, and none requires a bachelor’s degree. Generally, the rate of decline for these occupations increased from the 2010 projections to the 2012 projections. According to BLS officials, BLS did not systematically take into account offshoring in its 2012 employment projections, prepared in 2003, but some analysts took offshoring into account when they were considering projected changes in occupational staffing patterns.
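The projection figures above hang together arithmetically; a quick check, using only the projected 2012 level and growth rate cited in the text:

```python
# Back out the implied 2002 baseline from the projected 2012 level and
# growth rate, and confirm it matches the roughly 21-million-job increase.

projected_2012 = 165.3   # million jobs, projected for 2012
growth_pct = 14.8        # projected percent increase over 2002-2012

base_2002 = round(projected_2012 / (1 + growth_pct / 100), 1)
increase = round(projected_2012 - base_2002, 1)

print(base_2002, increase)   # 144.0 million baseline, +21.3 million jobs
```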
Moreover, some of the impact of recent offshoring was likely reflected in the baseline employment level used in the projections. As a result, the 2012 projections, which generally indicate a lower level of employment, a slower rate of growth for many occupations, and a faster rate of decline for some occupations than do the 2010 projections, might partially reflect the impact of offshoring. The difference between the two sets of projections, however, also reflects the impact of other factors, such as the collapse of the dot.com bubble, recession, and increases in productivity. BLS is in the process of implementing changes to better capture the impact of offshoring trends on employment patterns for its 2014 projections. As part of this effort, BLS is developing a list of occupations that face high risk of offshoring; the list is intended to alert BLS analysts to systematically seek out better information on offshoring in determining employment trends in those occupations. BLS does not expect to produce quantitative assessments of offshoring. Some private sector research studies have sought to provide projections of the likely number of jobs that might be affected by offshoring in future years. Other researchers have provided insight on, and to some extent quantified, the broader effects of offshoring on other economic factors such as productivity, prices, and economic growth. Private researchers and consultants have attempted to forecast the effects of offshoring on employment in certain potentially affected occupations. The studies vary by the range of industries or occupations examined, the economic variables measured, and the time frames of their analyses. However, these studies face challenges in estimating the effects of offshoring because they often base their projections on federal statistics, which, as previously described, currently provide limited information on the level and effects of offshoring.
A number of these studies forecast the effect of offshoring on U.S. employment in the industries or occupations that may be affected by offshoring. For example, some studies project that between 100,000 and 500,000 information technology jobs will be displaced within the next few years, and potentially several million jobs across all occupations will shift outside the United States over the next decade. A widely cited study by Forrester Research estimates that about 3.3 million jobs across all occupations will be shifted outside the United States by 2015. Of the 3.3 million, Forrester estimates that about 600,000 will move between 2000 and 2005. The study looks across services occupations from the OES series and subjectively weights the impact of offshoring on current employment in the occupation over time. Table 7 presents a summary of several studies that project the effect of offshoring on U.S. employment. Many of these studies of job losses do not take into account other economic effects of offshoring that may offset the job losses, or they focus on only one industry, such as financial services. For example, Forrester does not try to estimate any other effects from offshoring, such as potential expansion of employment in other sectors. In addition, some studies base their estimates of future employment effects due to offshoring on the employment level at a given point in time, rather than taking into account how the size of the labor market or a particular industry may change over time due to other factors. Also, several of the studies rely on discussions and interviews with industry representatives, rather than statistically valid surveys. Although the importance of these projected job losses to particular firms and industries may be considerable, overall they are relatively small in terms of the U.S. economy. For example, BLS’s Business Employment Dynamics (BED) series shows that the U.S. economy creates and destroys millions of jobs each year. 
In 2002, for example, gross quarterly job gains and job losses averaged 7.9 million and 8 million, respectively. Even during the economic expansion period in the late 1990s, job losses ranged between 7.4 million and 8.4 million per quarter, although job gains were even larger. Some studies have attempted to further identify and, to some extent, quantify the impacts of offshoring beyond the potential number of jobs lost in particular occupations. For example, an Institute for International Economics study argues that productivity is likely to be positively affected in industries that are now able to afford IT services that are relatively less expensive because of offshoring. The study compares services offshoring to the increased use of information technology hardware during the 1990s, when falling prices from cheaper imports led to productivity improvements across a range of industries. Similarly, a study by the economic consulting firm Global Insights uses a macroeconomic model to produce estimates of productivity benefits, as well as other potential effects of offshoring on the economy. Assuming that offshoring leads to lower prices for information technology services, the study predicts that by 2008 offshoring will lead to lower inflationary pressures and, therefore, to lower interest rates and borrowing costs and ultimately to a gross domestic product more than $100 billion higher (an increase of more than 0.1 percent over estimated growth without offshoring). Like other federal statistics discussed above, data on productivity, prices, and growth would capture these effects, but it may be difficult to differentiate the effects of offshoring from other economic phenomena occurring simultaneously. In addition, the magnitude of these effects may be limited. As discussed in the background section, annual U.S. imports of services account for only about 3 percent of total U.S.
consumption of services, and offshored services comprise only a subset of total services imports. Other researchers argue that the effects of offshoring may show up in data on the distribution of earnings among workers. For example, a study by the Brookings Institution argues that offshoring may affect the compensation of different types of workers over the longer term, rather than the overall level of employment in the United States, and may also affect the share of returns that go to profits rather than to workers. Similarly, Dani Rodrik of Harvard University has argued that in a global economy, international trade generally increases the size of the labor pool companies can draw upon to produce their products. Although this increased competition among laborers may not always result in direct job losses, it can place downward pressure on wages as businesses use the threat of relocation to affect the bargaining position of workers. Therefore, workers in occupations that face greater labor market competition from abroad may experience stagnant or declining wages and other compensation, relative to other workers. These studies suggest that data on these types of distributional effects are important to examine, since the direct impact of offshoring on labor and other economic variables may be hard to capture or distinguish from other factors that affect the overall economy. Recent growth in offshoring has created an extensive debate about the extent of this activity, as well as the advantages and disadvantages for U.S. workers, U.S.-based firms, and the U.S. economy as a whole. The reasons for the rapid growth are relatively well understood and have to do with information technology and the adoption of offshoring as a business strategy. On the other hand, less is known about the specific extent of offshoring to date.
Federal statistics provide some clues as to the extent of this activity and show that relative to imports of other services, offshoring is a small but growing trend in the U.S. economy. Private sector researchers have provided additional information in the form of forecasts as a result of the high level of interest in this activity. However, a more complete understanding of the extent of this phenomenon will require further efforts. Discussion of this issue is similar in many ways to prior discussions of other significant changes that inevitably occur in a dynamic economy. In these cases, federal statisticians and other researchers attempt to use and modify existing series and develop new measures to provide insight into the phenomena. As more recent data are collected and additional studies are completed, some questions about the extent of the offshoring phenomenon will be addressed. Finally, the policy consequences of this change are an important component of this debate. Policymakers, analysts, and others inside and outside the government combine those statistics with theory and models of the economy to define the indirect and longer-term implications of the particular changes that are of policy interest. To some extent, the policy decisions are dependent upon the results of the ongoing research on the extent of the activity and a better understanding of the indirect effects of this activity on the U.S. workforce and the economy. This research will also help address questions as to the potential policy measures that might have some effect on this activity and that might enhance the advantages or reduce the disadvantages. This study, which focuses on the data that are available on the phenomenon of offshoring, is just one component in this evolving discussion. We provided a draft of this report to the Departments of Commerce and Labor, the General Services Administration, and the Office of Management and Budget. 
Representatives of Labor, the General Services Administration, and the Office of Management and Budget indicated that they did not have comments. We received written comments from Commerce, which generally agreed with our observations. (See app. V.) Commerce and Labor provided technical comments, which we incorporated in the report as appropriate. In her response to our draft report, the Under Secretary for Economic Affairs at the Department of Commerce stated that Commerce’s statistical agencies are committed to refining their understanding of the issues surrounding offshoring. She noted that “disentangling the causes and effects of changes in production, employment, and incomes involves not simply added data collection but complex analysis….” She characterized this report as a useful reference and suggested that we add a discussion of “inshoring.” We clarified this point by adding the specific characterization of “inshoring” to our discussion of U.S. net exports of services in the report. As agreed with your offices, we are sending copies of this report to interested congressional committees, the Departments of Commerce and Labor, the General Services Administration, and the Office of Management and Budget. Copies will be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Mr. Yager at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VI. We were asked to (1) describe the nature of the offshoring of IT and other services, (2) discuss what the data show about the extent of this practice, and (3) discuss what available data show about the effects of services offshoring on the U.S. economy, including labor and business.
To obtain information about the nature of offshore sourcing of services, we reviewed available research studies, attended several conferences on the subject, and interviewed high-level government representatives at the Departments of Commerce, Labor, and State; the Office of the U.S. Trade Representative (USTR); the General Services Administration (GSA); and the Office of Management and Budget (OMB). We interviewed representatives at several private sector associations representing business and labor interests. We also met with experts who have published on the offshoring phenomenon, and we interviewed representatives of several research organizations that provide industry-wide studies and data. To identify technical factors that encourage offshoring of information technology (IT) and other services and potential business benefits and risks associated with offshoring, we performed a literature search and obtained information from private research firms, such as the Brookings Institution, Gartner, Inc., Meta Group, Inc., McKinsey and Company, Forrester Research, Inc., Yankee Group, and Aberdeen Group. In general, these sources provided consistent information regarding technical advances and potential business benefits and risks associated with offshoring. We determined that the data were sufficiently reliable for the descriptive purposes of the report. We also interviewed organizations representing IT services businesses and workers, including the Information Technology Association of America; India’s IT services and software chamber of commerce, the National Association of Software and Service Companies; the Institute of Electrical and Electronics Engineers, Inc.; American Federation of Labor-Congress of Industrial Organizations; and the Washington Alliance of Technology Workers/Communications Workers of America. To obtain information about the extent of services offshoring, we examined U.S.
government data on international trade and foreign investment from the Bureau of Economic Analysis (BEA). We reviewed technical notes in BEA publications and related documentation to assess limitations and the reliability of various data series and discussed these topics with officials at BEA. We also reviewed available research studies, attended a conference on these data, interviewed persons in the private sector familiar with these data, and surveyed the available literature on the subject. We determined that the data were sufficiently reliable for the purposes of this report. To identify trends in offshore sourcing of IT and other services contract work by the federal government over the past 5 years, we obtained data from the General Services Administration’s Federal Procurement Data System (FPDS) on the federal government’s procurement of IT and other services for fiscal year 1999 through fiscal year 2003. To assess the reliability of the FPDS data fields required for this engagement, we performed electronic tests for obvious errors in completeness and accuracy (e.g., we tested for completeness by checking for missing data in key fields dealing with products and services, place of performance, and country of manufacture and found one percent or less missing in all cases). We also discussed the reliability of FPDS data with GSA officials. We determined that the relevant fields were sufficiently reliable for the comparative purposes of this report. Using FPDS data, we calculated for each fiscal year in the 5-year period (1) the total dollar value of IT and other services contracting actions in which an agency reported a foreign country as the principal place of performance or manufacture and (2) the percentage of total dollars associated with foreign performance or manufacture locations relative to the total dollar value of all services contracts performed in all locations (U.S. and foreign countries). 
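The two FPDS calculations described above, the foreign-performance dollar total and its share of all services contract dollars, can be sketched as follows. The record layout and field names here are assumptions for illustration, not the actual FPDS schema, and the dollar figures are invented:

```python
# Hypothetical FPDS-style contract actions:
# (fiscal_year, place_of_performance, dollars)
contract_actions = [
    (2003, "USA",    900_000),
    (2003, "India",   50_000),
    (2003, "USA",    300_000),
    (2003, "Canada",  25_000),
]

def foreign_share(actions, fiscal_year):
    """Total dollars with a foreign place of performance, and that total
    as a percentage of all services contract dollars for the year."""
    total = sum(d for fy, _, d in actions if fy == fiscal_year)
    foreign = sum(d for fy, place, d in actions
                  if fy == fiscal_year and place != "USA")
    return foreign, round(100 * foreign / total, 1)

print(foreign_share(contract_actions, 2003))   # (75000, 5.9)
```

In the report's actual analysis these sums would be taken over inflation-adjusted (constant fiscal year 2003) dollars rather than nominal amounts.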
All FPDS data cited in the report were adjusted for inflation and represent constant fiscal year 2003 dollars. To identify trends in offshore sourcing of IT and other services by state governments, we contacted the following organizations to request data on states’ use of offshore sources:

Gartner, Inc.
National Association of State Chief Information Officers
National Conference of State Legislatures
National Association of State Procurement Officers
National Governors Association
National Center for Policy Analysis
Washington Alliance of Technology Workers/Communications Workers of America
American Federation of Labor-Congress of Industrial Organizations

Although none of these organizations could provide or were aware of any comprehensive data, we obtained anecdotal accounts and some limited data on contract awards by specific states in which all or part of the work was being performed in foreign countries. We did not independently verify this information. To determine the effects of services offshoring on the U.S. economy, we examined available federal data as well as private sector studies on offshoring. To determine the effects on the U.S. workforce, we analyzed available U.S. government employment data from the Bureau of Labor Statistics (BLS), including some unpublished data. We cross-checked various employment data and reviewed technical notes in BLS publications to assess data limitations and the reliability of various data series. We compared changes in employment from March through the end of 2003 using the comprehensive Quarterly Census of Employment and Wages (QCEW) and the more timely sample-based Current Employment Statistics (CES) programs. These comparisons showed some divergence in magnitude and direction of change for detailed services industries associated with offshoring.
Because the latest QCEW data are available for December 2003, we were unable to determine the extent to which the divergence might affect the March 2001 to June 2004 comparisons discussed in this report. (CES data for March 2003 to March 2004 will be revised to incorporate QCEW data in February 2005.) We also discussed the limitations and reliability of these data with officials at BLS and state employment agencies responsible for collecting them. We determined that these data were sufficiently reliable for the purposes of this report. We also reviewed available research studies, attended several conferences on the subject, and interviewed representatives of private sector associations representing business and labor interests. We also met with experts, interviewed representatives of research organizations that produced industrywide studies and data, and surveyed the available literature on the subject. With regard to private sector studies on the effects of offshoring, we are reporting these studies and their results primarily for descriptive purposes since limited information about offshoring is available. Although we discuss some of the methodological limitations of these studies, we did not assess the studies’ overall validity, accuracy, or reliability. We conducted our review from January to August 2004 in accordance with generally accepted government auditing standards. No commonly accepted definition of offshoring currently exists, and the term has been used in the literature on the subject to include a wide range of business activities. Generally, offshoring is used to describe a business’s (or a government’s) decision to replace domestically supplied service functions with imported services produced offshore. This definition focuses on a business’s sourcing decision—should it produce the services internally, source them domestically, or source them from offshore? 
The imported services can include a wide range of functions, such as computer programming, payroll and accounting, and customer call centers. When a business replaces services it had produced internally (or had sourced from a domestic supplier) with imported services, those services and the domestic jobs associated with them are said to have been “offshored.” Offshoring has also, though less frequently, been used to describe the movement of domestic production (and the related jobs) offshore. In this case, the definition focuses not on imports of services from abroad, but on U.S. companies investing offshore. The services that companies produce offshore may be used to supply imports to the U.S. market or to supply foreign markets. Companies may decide to invest abroad for a variety of reasons, such as accessing foreign markets, reducing their production costs, or utilizing foreign labor and expertise. In either case, whether focusing on the use of imported services or on moving services production offshore through foreign investment, offshoring is frequently defined in terms of the displacement of U.S. production and employment. U.S. production and employment are affected when U.S. producers replace services produced domestically with imported services. Similarly, when U.S. producers move production operations offshore, U.S. domestic production and employees are affected. Figure 13 shows the complex range of business activities that results from the intersection of imports, investment, and displacement of production and employment. The business activities captured by different definitions of offshoring may also be seen as subsets of the broader concept of globalization, which involves increasing interaction and interdependence among national product and factor markets. In the figure, the upper left oval represents imported services, the upper right oval represents U.S.
investment offshore in services production, and the lower center oval represents U.S. production and employment displaced for reasons including offshoring. The darkest shaded regions (marked “A” and “B”) are the business activities most commonly associated with the term “offshoring.” Region A represents those imported services that directly replaced services (and therefore jobs) previously produced domestically. Region B also represents imported services that directly replaced domestically produced services. However, the imports in region B are provided by the U.S. company’s offshore affiliate (either acquired or started through U.S. direct investment abroad). Regions C through F include other business activities that are sometimes included in broader definitions of offshoring or are difficult to distinguish from offshoring in U.S. federal government statistics. For example, region C covers services imports from U.S. companies’ foreign affiliates that did not directly displace U.S. employment. A company that decides to expand its operations by producing some services offshore, but does not reduce its U.S. workforce, would be included in this region. Whether or not this constitutes offshoring depends on whether the displacement of U.S. jobs is a factor in the definition of offshoring. Region D is similar to region C, except that the imported services are supplied by an unaffiliated company offshore (rather than a U.S. affiliate). Regions E and F are captured in broad definitions of offshoring that focus on the movement of services production offshore through investment, but do not focus on this production returning to the United States in the form of imports. Region F involves the case in which the offshore production actually displaces U.S. exports in the foreign market. That is, the product was previously produced in the United States and exported, but now it is produced by a U.S. company offshore and sold offshore.
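One way to read the region taxonomy of figure 13 is as a decision over three yes/no questions: is the service imported into the United States, is it supplied by the company's own foreign affiliate, and did it displace U.S. production (for the non-import regions, displacement of U.S. exports)? The sketch below is a simplified interpretation of the figure, not a formal classification used by any statistical agency:

```python
def classify_region(imported: bool, from_affiliate: bool, displaces_us: bool) -> str:
    """Map the three distinctions described in the text to regions A-F."""
    if imported:
        if displaces_us:
            return "B" if from_affiliate else "A"   # imports replacing U.S. jobs
        return "C" if from_affiliate else "D"       # imports, no displacement
    # services produced offshore by U.S. companies but not imported
    return "F" if displaces_us else "E"             # F: displaces U.S. exports

# An unaffiliated import that replaced domestic production falls in region A.
print(classify_region(imported=True, from_affiliate=False, displaces_us=True))
```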
The term “offshoring” is sometimes used synonymously with the term “outsourcing.” However, outsourcing means acquiring services from an outside (unaffiliated) company, which can be either another domestic company or an offshore supplier. In contrast, a company can source services offshore either from an unaffiliated foreign company (offshore outsourcing) or by investing in a foreign affiliate (offshore in-house sourcing). In the latter case, the services supplied by the company’s foreign affiliate would not be considered outsourcing since the company has an ownership stake in both the U.S. and foreign operations. Figure 14 demonstrates the difference between outsourcing and offshoring (for example, a company using services supplied by its own foreign-based affiliate, or subsidiary, is offshoring but not outsourcing).

Trade in services data are cross-border transactions between U.S. residents and foreign residents and cover affiliated and unaffiliated transactions. Affiliated transactions consist of intrafirm trade within multinational companies—specifically, the trade between U.S. parent companies and their foreign affiliates and between U.S. affiliates and their foreign parent groups. Unaffiliated transactions are with foreigners that neither own, nor are owned by, the U.S. party to the transaction. Cross-border trade in private services comprises five broad categories used in U.S. International Transactions Accounts (ITAs)—travel, passenger fares, “other transportation,” royalties and license fees, and “other private services.” Other private services, the focus of this report, include affiliated and unaffiliated services. The unaffiliated services consist of six major categories: education; financial services; insurance; telecommunications; business, professional, and technical services; and other unaffiliated services. Business, professional, and technical (BPT) services are further subdivided into several categories of particular interest in discussions of offshoring.
Table 8 shows the value of unaffiliated U.S. exports and imports of BPT categories for selected U.S. trade partners. The United States maintained a trade surplus in categories of BPT services in 2002. For example, U.S. exports were more than $3 billion in computer and data processing services, compared with a little over $1 billion in U.S. imports. Table 9 presents the relative shares among these trade partners in exports and imports of BPT services. To prepare the estimates of other private services, the Bureau of Economic Analysis (BEA) conducts benchmark and four annual surveys of cross-border trade with unaffiliated foreigners that cover (1) selected services (mainly miscellaneous business, professional, and technical services), (2) construction, engineering, architectural, and mining services, (3) insurance, and (4) financial services. Beginning in 2004, BEA began the collection of quarterly data that cover services for which data previously were collected annually. These services include detail of business, professional, and technical services, such as computer and data processing and legal and operational leasing services; financial and insurance services; and telecommunication services. Separate surveys are conducted by BEA to collect cross-border trade with affiliated foreigners. Quarterly data are collected on all other private services; annual and benchmark data are collected (usually about every 4 to 5 years) for insurance; financial; computer and information; management and consulting; research, development, and testing; and other services. Quarterly estimates of other private services are released about 75 days after the end of the reference quarter as part of the U.S. International Transactions Accounts. These estimates consist of six types of services for transactions with unaffiliated foreigners and a single estimate for transactions with affiliated foreigners.
These estimates are subject to revision 90 days later and each June, as part of historical ITA revisions. The initial quarterly estimates of services transactions with unaffiliated foreigners are based on past trends, supplemented with data from other sources. The initial estimates of services transactions with affiliated foreigners are based on quarterly BEA surveys. In the first June revision, annual estimates for the past year are revised to reflect preliminary results of (1) an annual survey of transactions with unaffiliated foreigners and (2) annual data on transactions with affiliated foreigners. In the following June revision, more complete survey results are incorporated. However, the detailed types of services for transactions with both unaffiliated and affiliated foreigners, as well as country data, are not released until October of each year. For example, the latest year for which we had annual survey-based detail was 2002.

In addition to the lack of quarterly survey data for unaffiliated transactions and the lack of quarterly product detail for affiliated services, there are reliability issues related to the mandatory filing requirements and survey coverage. Under regulations implementing the International Investment and Trade in Services Survey Act, U.S. persons and intermediaries are required to furnish reports that are necessary to carry out BEA surveys and studies provided for by the Act. Annual reporting of transactions with unaffiliated foreigners is required for transactions of over $1 million in any one kind of service; the same threshold applies to the benchmark survey. Respondents whose transactions fall below this level must report only the total level of transactions in all services. For transactions with affiliated foreigners, the limitations are expressed in terms of the size of the affiliate: quarterly and annual reporting are required only for affiliates whose total assets, sales, or net income exceed $30 million.
Although the services surveys are mandatory, the mailing list BEA uses is constructed from publicly available information rather than from a comprehensive business register such as those used by BLS and the Census Bureau for their surveys. Consequently, it is likely that BEA’s coverage of small or new firms is limited. Finally, for transactions between affiliated firms, there are questions about the reliability of the prices used to value these intrafirm transfers.

A standard method for assessing data reliability is to compare initial estimates with subsequent revised estimates. This approach assumes that estimates based on benchmark surveys are more reliable than estimates based on annual surveys, which, in turn, are more reliable than estimates based on quarterly surveys. Thus, for the ITAs, preliminary quarterly estimates are released about 75 days after the end of the reference quarter, and a “first revision” to these estimates occurs 90 days later. The following June, a historical revision is completed. These historical revisions usually cover the preceding 4 years and reflect the incorporation of more reliable source data, such as more complete or new survey data, as well as changes in definitions, data sources, and estimating procedures. In accordance with the requirements of OMB’s Statistical Policy Directive Number 3, “Statistical Policy Directive on Compilation, Release, and Evaluation of Principal Federal Economic Indicators,” BEA recently prepared a report evaluating the accuracy of the ITAs. This evaluation, which covered the period from the first quarter of 1999 to the fourth quarter of 2001, reported that large changes were made to the preliminary and first revised quarterly estimates with the release of historical revisions. However, the evaluation also noted that these changes primarily reflected major improvements to the accounts that were concentrated in the services and income accounts.
According to BEA, “This study provides support for the observations that only relatively small revisions are made to the accounts in the 90 days following publication of the initial estimates, and that more sizable changes occur at the time of the first June estimate.” For example, the report cited the incorporation of BEA’s benchmark surveys of services as a major source of historical revision.

The U.S. Bureau of Economic Analysis (BEA) collects data on an annual basis from U.S. multinational companies (MNCs). The data provide detail on U.S. foreign direct investment (FDI) abroad and the operations of U.S. multinational companies and their majority- and minority-owned affiliates (e.g., assets, sales and purchases, and employment). Table 10 presents information on U.S. FDI across countries for 2002. It also provides the growth rate of this investment from 1999 to 2002 and the share of investment in the manufacturing; information; and professional, scientific, and technical industries. Data on U.S. MNC parent companies’ operations in the United States, which lag the direct investment data by a year, can help indicate the extent to which these companies are using offshore goods and services in their production. (See table 11.) The data show that U.S.-based operations have tended to increase their outsourcing over time, particularly in parent companies classified in manufacturing industries. However, these data do not indicate whether the outsourcing is for goods or services or whether domestic or offshore companies are supplying it. For example, in the manufacturing sector, the share of intermediate inputs in U.S. multinational companies’ domestic production has risen from under 60 percent in the 1980s to over 70 percent in 2001. Industries such as the information industry and the professional, scientific, and technical industry are outsourcing around 50 percent of their production value.
The Bureau of Economic Analysis has reported that it is evaluating the feasibility of preparing estimates of indirect purchases from offshore suppliers; it already collects data on direct purchases from offshore suppliers. BEA data on MNCs and their affiliates have limitations relating to firm and item coverage, timeliness, and frequency. The reliability of the BEA data on MNCs relates both to the exemption levels of the annual and benchmark surveys and to the collection of additional detail in the benchmark surveys. To a large extent, the limitations and reliability of these BEA data relate to efforts to restrict respondent burden, as required under provisions of the Paperwork Reduction Act of 1995.

With regard to coverage, annual BEA surveys exclude banking activities of both U.S. and foreign MNCs and provide no data on employment by occupation of U.S. MNCs or their foreign affiliates. In addition, because some U.S. MNCs may be foreign owned, there is some duplication between the data on U.S. parent companies and the data on U.S. affiliates. BEA therefore recommends that data on U.S. parents not be added to data on U.S. affiliates to produce U.S. totals. Certain data items relevant to offshoring, such as trade in selected services, are collected only in benchmark years and do not cover all types of services. In addition, in the benchmark survey, data on sales of goods and services by country of destination are not collected for minority-owned affiliates and small majority-owned affiliates.

With regard to timeliness and frequency, BEA data on MNC operations are not available quarterly, and annual data become available with a 2-year lag. For example, when this report was completed, the latest year for which we had annual survey-based detail was 2001. These estimates are subject to revision when the results of benchmark surveys are incorporated. The most recent benchmark data on U.S. MNCs and their foreign affiliates are for 1999. The most recent data on U.S.
affiliates of foreign MNCs are for 1997. Results of the 2002 benchmark survey are scheduled to be released later this year. In 2004, BEA initiated an effort to improve the timeliness of these data. In April 2004, BEA released summary estimates for 2002 of employment, sales, and capital expenditures by U.S. MNCs and their foreign affiliates, and by U.S. affiliates of foreign MNCs. The 2002 estimates to be released later in 2004 will be based on more complete source data and will include country and industry detail.

As noted above, the reliability of the BEA data on MNCs relates primarily to the exemption levels of both the annual and benchmark surveys, which, in turn, relate to efforts to restrict respondent burden. There are also several other reliability issues with the MNC data collected by BEA that could affect the data related to offshoring. The exemption levels for the reporting of affiliates in the annual surveys are based on the affiliates’ total assets, sales, or net income. For majority-owned affiliates, detailed reporting is required if any of these three items is greater than $100 million; less detailed reporting is required for majority-owned affiliates with total assets, sales, or net income between $30 million and $100 million. For minority-owned affiliates, reporting is required if any of the three items is greater than $30 million. In the benchmark survey, the exemption limit for the short form for majority- or minority-owned affiliates is $7 million of total assets, sales, or net income. This also means that smaller affiliates are covered only once every 5 years, so the trends in the annual data would be misstated to the extent that the trends for smaller affiliates differ from those for larger ones. For example, if there were rapid increases in offshoring by smaller affiliates, the annual trends would understate the growth of employment in foreign affiliates of U.S. MNCs. Other reliability issues relate to the universe frame used by BEA to ensure complete reporting.
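The tiered exemption levels described above amount to a simple decision rule. The sketch below is illustrative only: the function name and the treatment of the "between $30 million and $100 million" band as exclusive of the endpoints are assumptions, with only the dollar thresholds taken from the text.

```python
def annual_reporting_tier(ownership, assets, sales, net_income):
    """Classify an affiliate's annual-survey reporting requirement.

    Thresholds (in millions of dollars) follow the text: majority-owned
    affiliates file detailed reports if any of total assets, sales, or net
    income exceeds $100 million, and less detailed reports between $30
    million and $100 million; minority-owned affiliates report if any of
    the three items exceeds $30 million.  Boundary handling is assumed.
    """
    # The rule keys off the largest of the three size measures; using the
    # absolute value of net income (losses count toward size) is assumed.
    largest = max(abs(assets), abs(sales), abs(net_income))
    if ownership == "majority":
        if largest > 100:
            return "detailed"
        if largest > 30:
            return "less detailed"
        return "exempt"
    # Minority-owned affiliates have a single reporting threshold.
    return "report" if largest > 30 else "exempt"
```

Under this sketch, a majority-owned affiliate with $50 million in sales would file a less detailed report, while the same affiliate would still be covered by the benchmark survey's $7 million short-form limit.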
Although the MNC surveys are mandatory regardless of whether a firm receives a form, the mailing list used by BEA is constructed from publicly available information rather than from comprehensive business registers such as those used by BLS and the Census Bureau for their own surveys. Consequently, it is possible that BEA’s coverage of small and new firms is limited. In addition, the data reported to BEA are based largely on financial accounting records, and in recent years many of these earlier records have been restated. BEA has not reported that it has been obtaining revised reports from these firms. Although these restatements would affect the reported profits data, it is not likely that they would affect the employment data.

In addition to the contact named above, Scott Farrow, Robert Parker, Linda Lambert, Yesook Merrill, Andrew Sherrill, Tim Wedding, Frank Maguire, Richard Seldin, Judith Knepper, Patricia Slocum, Carmen Donohue, Mark Fostek, Bradley Hunt, Katrina Ryan, and Yunsian Tai made major contributions to this report.
|
Much attention has focused on the topic of "offshoring" of information technology (IT) and other services to lower-wage locations abroad. "Offshoring" of services generally refers to an organization's purchase from other countries of services that it previously produced or purchased domestically, such as software programming or telephone call centers. GAO was asked to (1) describe the nature of offshoring activities and the factors that encourage offshoring, (2) discuss what U.S. government data show about the extent of this practice by the private sector and federal and state governments, and (3) discuss available data on the potential effects of services offshoring on the U.S. economy. No commonly accepted definition of "offshoring" exists, and the term has been used to include various international trade and foreign investment activities. Services that U.S.-based organizations purchase from abroad are considered imports. They may also be linked to U.S. firms' investments overseas--for example, U.S. firms may invest in overseas affiliates as a replacement for, or as an alternative to, domestic production. In recent years, services offshoring has been facilitated by factors, such as the Internet, infrastructure growth in developing countries, and decreasing data transmission costs. Organizations' decisions to offshore services are influenced by potential benefits such as the availability of cheaper skilled labor and access to foreign markets, and by risks, such as geopolitical issues and infrastructure instability in countries that supply the services. U.S. government data provide some insight into the extent of services offshoring by the private sector, but they do not provide a complete picture of the business transactions that the term offshoring can encompass. Department of Commerce data show that private sector imports of some services are growing. 
For example, imports of business, professional, and technical services increased by 76.8 percent, from $21.2 billion in 1997 to $37.5 billion in 2002. From another perspective, Commerce’s data also show that in 2002 U.S. investments in developing countries that supply offshore services were small compared with those in developed countries and that most services produced abroad are sold primarily to non-U.S. markets. Regarding public sector offshoring, the total dollar value of the federal government’s offshore services contracts increased from 1999 through 2003, but the trend in the dollar value shows little change relative to all federal services contracts. No comprehensive data or studies show the extent of services offshoring by state governments.

Government data provide limited information about the effects of services offshoring on U.S. employment levels and the U.S. economy. The Department of Labor’s Mass Layoff Survey data show that layoffs attributable to overseas relocation represent a small fraction of total mass layoffs. However, the survey identifies only a portion of total layoffs because it does not cover establishments with fewer than 50 employees. Other government data show greater-than-average job declines since 2001 in occupations and industries commonly associated with offshoring, but other factors, such as the recent recession, may contribute to these declines. Some private researchers predict that offshoring may eliminate 100,000 to 500,000 IT jobs within the next few years, while others note that offshoring can also generate benefits, such as lower prices, productivity improvements, and overall economic growth.
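The growth figure cited above can be checked with simple percent-change arithmetic. This is a minimal sketch (the function name is illustrative); note that the rounded dollar figures reproduce the published 76.8 percent only to within a tenth of a point, since the published figure rests on unrounded underlying data.

```python
def percent_change(old, new):
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100

# BPT services imports, in billions of dollars (1997 and 2002, as cited)
growth = percent_change(21.2, 37.5)
print(round(growth, 1))  # prints 76.9 (the published 76.8 reflects unrounded source data)
```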
|
Over the last 40 years, Congress has considered several business models to provide postal services to the nation and has moved USPS toward a more businesslike entity while simultaneously placing constraints on its operations. Until 1970, the federal government provided postal services via the U.S. Post Office Department, a government agency that received annual appropriations from Congress. Congress was involved in many aspects of the department’s operations, including the selection of managers (e.g., postmasters), and in setting postal rates and wages. A presidential commission (the Kappel Commission) reported to the President in 1968 on the crisis facing the department, which included financial losses, management problems, service breakdowns, low productivity, and low employee morale. The Kappel Commission’s basic finding was that “the procedures for administering the ordinary executive departments of Government are inappropriate for the Post Office.” Furthermore, it concluded that “a transfer of the postal system to the private sector is not feasible, largely for reasons of financing…but the possibility remains of private ownership at some future time, if such a transfer were then considered to be feasible and in the public interest…. We recommend, therefore, that Congress charter a Government-owned Corporation to operate the postal service. The corporate form would permit much more successful operation of what has become a major business activity than is possible under present circumstances.” The Postal Reorganization Act (PRA) of 1970 replaced the department with the current USPS model—an independent establishment of the executive branch designed to be self-sustaining by covering its operating costs with revenues generated through the sale of postage and postal-related products and services. USPS receives no appropriations for purposes other than revenue forgone on free and reduced-rate mail.
In 1996, Congress again began considering the merits of postal reform and ultimately enacted PAEA in 2006. A number of factors encouraged reform, including financial challenges such as growing cash-flow problems and debt. A second presidential commission examined USPS’s future and issued a report in 2003 that recommended a number of actions to ensure the viability of postal services. Additionally, the Postal Civil Service Retirement System Funding Reform Act of 2003 was enacted after OPM determined that USPS was overfunding its employees’ pensions. This law required that the savings from the reduced pension contributions be used to pay down USPS’s debt to the U.S. Treasury and that any remaining amounts be set aside in an escrow account. Congress addressed how the escrowed funds should be used—along with many of USPS’s other financial and operational challenges—in PAEA. Key requirements and flexibilities provided in PAEA are detailed in table 1.

PAEA also made changes to USPS’s regulatory and oversight structure. In addition to the responsibilities for reviewing pricing and nonpostal services described in table 1, the newly created PRC gained additional oversight responsibilities, including responsibility for making annual determinations of USPS compliance with applicable laws, developing accounting practices and procedures for USPS, reviewing the universal service obligation, and providing transparency through periodic reports. The USPS Board of Governors, which has responsibilities similar to those of a board of directors of a publicly held corporation, directs the exercise of the powers of USPS, directs and controls its expenditures, reviews its practices, conducts long-range planning, and sets policies on all postal matters. PAEA added new qualifications and lengths of term for new board members.

USPS’s business model is not viable due to its inability to reduce costs sufficiently in response to continuing declines in mail volume and revenue.
Mail volume declined 36 billion pieces over the last 3 fiscal years, 2007 through 2009, due to the economic downturn and changing use of the mail, with mail continuing to shift to electronic communications and payments. USPS lost nearly $12 billion over this period, despite achieving billions in cost savings, reducing capital investments, and raising rates. However, USPS had difficulty in eliminating costly excess capacity, and its revenue initiatives had limited results. To put these results into context, until recently, USPS’s business model benefited from growth in mail volume to help cover costs and enable it to be self-supporting. In each of the last 3 fiscal years, USPS borrowed the maximum $3 billion from the U.S. Treasury and incurred record financial losses (see fig. 1). A looming cash shortfall led to congressional action at the end of fiscal year 2009 that deferred costs by reducing USPS’s mandated retiree health benefit payment. Looking forward, USPS projects continued mail volume decline and financial losses over the next decade. In fiscal year 2009, USPS’s mail volume declined to 17 percent below its peak of 213 billion pieces in fiscal year 2006. USPS projects that total mail volume will decline to 167 billion pieces in fiscal year 2010—the lowest level since fiscal year 1992 and 22 percent less than its fiscal year 2006 peak. USPS and many mailers who provided information for this study do not expect volume to return to its former levels when the economy recovers. By fiscal year 2020, USPS projects further volume declines of 15 percent to about 150 billion pieces, the lowest level since fiscal year 1986 (see fig. 2). First-Class Mail volume has declined 19 percent since it peaked in fiscal year 2001, and USPS projects that it will decline by another 37 percent over the next decade (see fig. 3). This mail is highly profitable and generates over 70 percent of the revenues used to cover USPS overhead costs. 
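The volume projections cited above are internally consistent, as a quick arithmetic check against the fiscal year 2006 peak shows. The figures below (in billions of pieces) are taken from the text; the variable names are illustrative.

```python
peak_2006 = 213.0  # FY2006 peak mail volume, billions of pieces
proj_2010 = 167.0  # FY2010 projected volume, billions of pieces

# Projected FY2010 decline relative to the FY2006 peak
decline_2010 = (peak_2006 - proj_2010) / peak_2006 * 100
print(round(decline_2010))  # prints 22, matching the "22 percent less" figure

# FY2009 volume, reported as 17 percent below the peak
vol_2009 = peak_2006 * (1 - 0.17)
print(round(vol_2009))  # prints 177, i.e., roughly 177 billion pieces
```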
Standard Mail (primarily advertising) volume has declined 20 percent since it peaked in fiscal year 2007, and USPS projects that it will remain roughly flat over the next decade. This class of mail is profitable overall but lower priced; on average, it takes 3.4 pieces of Standard Mail to equal the profit from one piece of First-Class Mail. Standard Mail volume was affected by large rate increases in 2007 for flat-sized mail, such as catalogs, and by the recession, which reduced advertising such as mortgage, home equity, and credit card solicitations. These solicitations appear unlikely to return to former levels. Standard Mail also faces growing competition from electronic alternatives, increasing the possibility that its volume may decline in the long term.

One reason mail volume has declined is that businesses and consumers have moved to electronic payment alternatives over the past decade (see fig. 4). Looking forward, the use of electronic alternatives for communications and payments, including broadband and mobile technology, is expected to continue to grow. Nearly two-thirds of American households had broadband service in fiscal year 2008, up from 4.4 percent less than a decade earlier (see fig. 5). Expanded availability and adoption of broadband technology is being facilitated by federal spending under the American Recovery and Reinvestment Act.

USPS achieved nearly $10 billion in cost savings in fiscal years 2007 through 2009, primarily by cutting nearly 201 million work hours. Work-hour savings were achieved through workforce reductions of over 84,000 full- and part-time employees, primarily through retirements; reduced overtime; and changes to postal operations. For example, USPS reached agreement with the National Association of Letter Carriers to realign delivery routes, and with the American Postal Workers Union and the National Postal Mail Handlers Union on early retirement incentives.
However, USPS’s cost savings and added revenue from rate increases and other revenue-generating actions were insufficient to fully offset the impact of declines in mail volume and rising personnel-related costs. Thus, USPS revenues declined by $4.7 billion during this period, while its costs declined by $7 million. USPS also has large financial liabilities and obligations, which totaled over $88 billion in fiscal year 2009. Over the last 2 fiscal years, total liabilities and obligations have increased by nearly $14 billion (see table 2). Over this same period, USPS debt to the U.S. Treasury increased by $6 billion, and pension obligations changed by over $8 billion—from a $5.3 billion surplus to $2.8 billion in unfunded obligations. To put these liabilities and obligations into context, they increased from 100 percent of USPS revenues in fiscal year 2007 to 130 percent of revenues in fiscal year 2009.

Declines in mail volume and revenue, large financial losses, increasing debt, and financial obligations will continue to challenge USPS. For fiscal year 2010, USPS is projecting a record loss of over $7 billion and additional pressure to generate sufficient cash to meet its obligations. Furthermore, it has halted construction of most new facilities and has budgeted $1.5 billion in capital cash outlays (mostly for prior commitments), down from the average of $2.2 billion in the previous 5 fiscal years. USPS also expects to borrow $3 billion in fiscal year 2010, which would bring its total outstanding debt to $13.2 billion, close to its $15 billion statutory limit. Looking forward, USPS projects that, absent additional action, annual financial losses will escalate over the next decade to $33 billion in fiscal year 2020 (see fig. 6). According to USPS, its projected losses will result from declining mail volume, stagnant revenue (despite rate increases), large costs to provide universal service, and rising workforce costs.
These projections are the most pessimistic in many years. Stakeholder interviews reinforce the conclusion that the recent recession was a “tipping point” that has accelerated the diversion of mail to electronic alternatives, particularly among business mailers who generate the most mail volume and revenues, leading to sobering financial results. Making progress toward USPS’s financial viability would primarily involve taking action on strategies and options to rightsize operations, cut costs, and increase revenues. USPS does not need—and cannot afford to maintain—its costly excess infrastructure capacity. USPS has achieved noteworthy cost reductions, but much more progress is needed. Making the necessary progress would require (1) taking more aggressive actions to reduce costs and increase revenues within its current authority, using the collective bargaining process to address wages, benefits, and workforce flexibility, and (2) congressional action to address legal restrictions and resistance to realigning USPS operations, networks, and workforce. Key strategies and options, some of which would require statutory changes, fall into the following three major categories: reducing compensation and benefits costs, reducing other operations and network costs and improving efficiency, and generating revenues through product and pricing flexibility. Ultimately, Congress may want to examine other options that would alter the ownership structure of USPS. For example, USPS might be moved back to being a federal agency funded in part by taxpayer support, or it might be moved to a corporate model. This report does not address the ownership issue because of an array of functional and operational options—discussed throughout this report—that need to be examined immediately. The resolution of some of these more pressing issues might afford a better understanding of whether the ownership structure should be modified. 
USPS has options to reduce its compensation and benefits costs in the following four key areas: workforce size, to be aligned with reduced workload; wages, which continue to be a key component of costs; benefits, which in some cases are more generous than those provided by other federal agencies; and workforce flexibility, including the mix of full- and part-time employees and the work rules that govern what tasks employees can perform. Changes in these areas would need to be negotiated with employee unions and would involve tradeoffs between reducing costs and addressing union concerns that reducing workforce size, compensation, and benefits would erode the number of well-paying jobs.

About 85 percent of USPS employees are covered by collective bargaining agreements, which correspond with the major crafts (see table 3). USPS and its employee unions will begin negotiations for new agreements in 2010 and 2011. If USPS and its unions are unable to agree, binding arbitration by a third-party panel will ultimately be used to establish agreement. USPS is also required to consult with its management associations, which represent postmasters and supervisors. About 78 percent of USPS employees are full time and receive salary increases and cost-of-living adjustments based on predetermined levels. These employees are generally scheduled in 8-hour shifts and can earn overtime pay, except for rural mail carriers, who are generally paid a salary without overtime. Managers are not covered by collective bargaining agreements and are compensated under a pay-for-performance program. About 90 percent of city carriers are full time, while about 55 percent of rural carriers are full time.

USPS has not achieved significant reductions in compensation and benefits, in part due to the following challenges: USPS is required by law to maintain compensation and benefits comparable to the private sector.
The application of the comparability standard to postal employees—that is, whether a compensation premium exists between postal employees and private-sector employees who do comparable work—has been a source of disagreement between management and the postal unions in negotiations and interest arbitration. Career USPS employees participate in federal pension and benefits programs, including health care and life insurance. USPS collective bargaining agreements include provisions to reduce USPS’s contribution to health care premiums by 1 percent a year, from 85 percent in fiscal year 2007 to 81 percent in 2011 or 80 percent in 2012, depending on the agreement. Nevertheless, USPS covers a higher proportion of employee premiums for health care and life insurance than most other federal agencies. The law requires USPS’s fringe benefits to be at least as favorable as those in effect when the PRA of 1970 was enacted. USPS is also required by law to participate in the federal workers’ compensation program and ensure coverage for injured employees. Some benefits provided under the federal program exceed those provided in the private sector. For example, injured USPS employees with dependents receive 75 percent of their salary, compared with the 66 percent of pay typically provided by private employers covered under state workers’ compensation laws. Furthermore, USPS employees receiving this benefit often do not opt to retire when eligible, staying permanently on the more generous workers’ compensation rolls.

Current collective bargaining agreements include provisions related to compensation, leave, workforce composition, and work rules. They also include some provisions that allow USPS to make changes, such as relocating employees, but other provisions limit USPS’s flexibility to manage work efficiently and rightsize its workforce.
For example, current collective bargaining agreements limit the percentage of part-time and contract workers who help USPS match its workforce to changing workload; prevent managers from assigning work to employees outside of their crafts, such as having a retail clerk deliver mail; limit outsourcing for city delivery routes; and contain “no-layoff” provisions for about 500,000 employees, requiring USPS to release lower-cost part-time and temporary employees before it can lay off any full-time workers without layoff protection. Currently, if the collective bargaining process reaches binding arbitration, there is no statutory requirement for USPS’s financial condition to be considered. In 2009, proposed Senate legislation included language that would require any binding arbitration in the negotiation of postal contracts to take the financial health of the Postal Service into account.

The 2003 President’s Commission reported that “far more than individual benefits, the size of the workforce determines the cost of the workforce.” USPS has worked to reduce the size of its workforce through regular retirements and early retirements in response to recent separation incentives and through a hiring freeze. USPS’s workforce of career and noncareer employees declined by nearly 21 percent—from 901,238 at the end of fiscal year 2000 to 712,082 at the end of fiscal year 2009 (see fig. 7). Career employees continued to comprise most of the total workforce throughout this period. USPS has a window of opportunity to reduce the cost and size of its workforce through the large number of upcoming retirements, minimizing any need for layoffs. In this regard, about 5 percent of USPS employees will be eligible and expected to retire each year through 2020—a total of approximately 300,000 employees.
Key issues include what size workforce is needed to reflect changes in mail volumes, revenues, and operations; how quickly changes can be made in this area; whether separation incentives should be offered and are affordable; and to what extent and under what terms outsourcing should be considered. Options to reduce the size of USPS's workforce include the following:

Retirement and separation incentives: According to USPS officials, incentives could accelerate the rate of attrition, but USPS needs sufficient cash to fund them.

Outsourcing: Determine which functions would be cost-effective to outsource (using companies or individuals). At the end of fiscal year 2009, USPS had about 36,500 retail facilities, 3,000 of which were contract postal units and 800 of which were community post offices staffed by nonpostal employees. USPS also has long outsourced most of its long-distance air and ground transportation. In delivery operations, contractors deliver to less than 2 percent of USPS's delivery points. Postal labor unions and some Members of Congress have previously resisted outsourcing. For example, after USPS attempted to contract out some city delivery routes in 2007, legislation was introduced in both Houses of Congress on this matter. USPS and the National Association of Letter Carriers subsequently agreed to a moratorium on outsourcing city carrier delivery through November 2011. Looking forward, the outsourcing issue could involve weighing the loss of government jobs that pay middle-class wages and benefits against the savings from shifting the work to private-sector jobs that may pay lower wages and not have guaranteed benefits.

Layoffs: USPS could implement layoffs as a last resort if it has too few positions to offer employees affected by restructuring. For example, USPS could implement layoffs as part of shifting from 6-day delivery to 5-day delivery.
However, under current collective bargaining agreements, any layoffs of covered employees not protected by no-layoff clauses must first be applied to noncareer employees, such as temporary employees, whose average wages are less than those of full-time career employees. USPS wages were $39 billion in fiscal year 2009—about one-half of its costs. Increasing wages have been a key driver of additional costs, expected to add $1 billion in fiscal year 2010. Wages have traditionally increased on the basis of cost-of-living allowances keyed to the Consumer Price Index. Rising wages also increase benefit costs, such as pensions. A key issue is how USPS can improve its compensation systems to balance the need for fair compensation with reducing costs and increasing incentives to become more competitive. In this regard, a recent legislative proposal would have required that USPS's financial condition be considered if collective bargaining reaches binding arbitration. One option would be a two-tier pay system that would pay new hires lower wages, while "grandfathering" current employees under the current pay structure. USPS makes payments to fund its liabilities and obligations for retiree health and pension benefits, health and life insurance premiums, and workers' compensation. Benefits cost USPS almost $17 billion in fiscal year 2009, over 23 percent of its total costs. The cost would have been nearly $21 billion if Congress had not reduced USPS payments for retiree health benefits by $4 billion to address a looming cash shortfall. Key issues include how to assign financial responsibility for benefits among USPS, its employees, and current and future ratepayers, and how to balance USPS's poor financial condition and the need to keep rates affordable against meeting legal requirements for employee benefits and minimizing risk to the taxpayer if USPS were unable to meet its responsibilities.
According to OPM estimates, at the end of fiscal year 2009, the actuarially determined obligation for USPS's future retiree health benefits was about $87.5 billion. At that time, the dedicated Postal Service Retiree Health Benefits Fund (the RHB Fund) had a balance of $35.5 billion, and, therefore, unfunded obligations of $52.0 billion remained. These unfunded obligations developed largely because, prior to the enactment of PAEA in 2006, USPS financed its share of the health insurance premiums for its retirees on a pay-as-you-go basis, rather than prefunding the annual accrued cost of future benefits attributable to the service of current employees. PAEA required USPS to begin prefunding its retiree health benefit obligations with annual payments to the RHB Fund, while continuing to pay its share of the retiree health premiums of current retirees to the Federal Employees Health Benefits Fund (the FEHB Fund). Since PAEA was enacted, mail volume has declined, USPS's financial condition has deteriorated, and it has had difficulty in making its required payments to prefund its retiree health benefit obligations. In fiscal year 2009, a looming cash shortfall led to last-minute congressional action that deferred costs by reducing USPS's required prefunding payment from $5.4 billion to $1.4 billion. At the end of fiscal year 2009, USPS had about 463,000 annuitants and survivors participating in the Federal Employees Health Benefits Program. Furthermore, 162,000 USPS career employees are eligible for regular retirement this fiscal year, and this number is projected to increase to about 300,000 career employees over the next decade. For fiscal year 2010, USPS has reported that it is "highly uncertain" whether it will have sufficient cash to cover its required prefunding payment of $5.5 billion that is due by September 30, 2010.
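The unfunded obligation cited above is simply the actuarial obligation less the assets already accumulated in the RHB Fund. A minimal Python sketch, using only the end-of-FY2009 figures from this paragraph:

```python
# Unfunded obligation = actuarial obligation minus fund assets (in $ billions),
# using the end-of-FY2009 figures from the report.
obligation = 87.5    # OPM estimate of future retiree health benefit obligation
fund_balance = 35.5  # RHB Fund balance
unfunded = obligation - fund_balance
print(unfunded)  # prints 52.0
```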
According to USPS's fiscal year 2010 budget, by making the required prefunding payment, it will end the fiscal year with a cash balance of only $200 million. However, USPS officials have said that this cash balance would likely be inadequate to finance operations in October 2010, when it must make three payroll payments of close to $2 billion each, as well as a payment for workers' compensation costs expected to exceed $1 billion. In response to these likely conditions, USPS has requested that Congress revise the required schedule for retiree health benefits payments as part of a package to improve its financial viability. There are multiple options for funding USPS's retiree health benefit obligations. In addition to the current prefunding approach, where the obligations are paid before USPS's share of retiree health premiums is due, there are two broad approaches: (1) a "pay-as-you-go" funding approach, where USPS's share of retiree health premiums is paid as the premiums are billed for current retirees, and (2) an actuarial funding approach, where payments include amounts for "normal costs" to finance the future retiree health benefits attributed to the service of current employees and amortization amounts to liquidate unfunded obligations over a 40-year period. The impact of these various approaches on USPS's payments would depend on whether its share of retiree health premiums would be paid directly by USPS to the FEHB Fund or whether the premiums would be paid from the RHB Fund. Depending on which option is selected, changes could also impact the federal budget deficit. PAEA's approach to funding USPS's retiree health benefit obligations is a combination of the prefunding and pay-as-you-go approaches that we have previously described.
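The 40-year amortization component of the actuarial approach described above can be illustrated with the standard level-payment annuity formula. This is a minimal sketch; the 5 percent interest rate is an assumed placeholder for illustration, not OPM's actual actuarial assumption, and the real OPM calculation involves additional actuarial factors.

```python
def level_amortization_payment(unfunded: float, rate: float, years: int) -> float:
    """Annual level payment that liquidates `unfunded` over `years` at interest `rate`."""
    if rate == 0:
        return unfunded / years
    return unfunded * rate / (1 - (1 + rate) ** -years)

# Illustrative only: $52.0 billion unfunded, 40-year amortization,
# assumed (not OPM's) 5 percent interest rate.
payment = level_amortization_payment(52.0, 0.05, 40)
print(f"${payment:.1f}B per year")  # prints "$3.0B per year"
```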
Specifically, PAEA requires USPS to make two payments annually over fiscal years 2010 through 2016: (1) a payment to the FEHB Fund to cover its share of the premiums for current retirees and (2) a statutorily determined payment to the RHB Fund to prefund obligations for future retirees. Starting in fiscal year 2017—after the last statutorily scheduled prefunding payment—PAEA requires that USPS's share of retiree health premiums be paid from the RHB Fund and requires OPM to determine future payments to the RHB Fund. Each annual payment to the RHB Fund starting in fiscal year 2017 will be the sum of two amounts that finance the following: the annual accrued cost of future benefits attributable to the service of current USPS employees, which OPM refers to as "normal costs," and amortization payments over 40 years to liquidate any unfunded obligations. Table 4 shows USPS payments from fiscal years 2010 through 2020, based on updated estimates that OPM provided to us for this report. Total USPS payments are estimated to increase from $7.8 billion in fiscal year 2010 to $10.3 billion in fiscal year 2016. The payments are estimated to decline to $6.4 billion in fiscal year 2017 and increase to $7.3 billion in fiscal year 2020. Based on GAO analysis, assuming that USPS made these payments through 2020, estimated unfunded obligations of about $33 billion would remain. In 2009, proposed legislation was introduced in both houses of Congress that would have revised the payment schedule for postal retiree health benefits. The House legislation (H.R. 22) would have shifted responsibility for payments for current retiree health premiums from USPS to the RHB Fund for fiscal years 2009 through 2011. Such action would result in USPS needing to pay additional amounts to the RHB Fund in the future due to the use of those RHB funds for current retiree health premiums.
The Congressional Budget Office (CBO) estimated that enacting the House legislation would have a net cost to the federal budget of $2.5 billion over fiscal years 2010 through 2019. The Senate legislation (S. 1507) would have extended and revised prefunding payments to the RHB Fund, with the payment amounts increasing from $1.7 billion in fiscal years 2009 and 2010 to $5.3 billion in fiscal year 2019. CBO estimated that enacting S. 1507 would have a net cost to the federal budget of $2.8 billion over both fiscal years 2010 through 2019 and fiscal years 2009 through 2014. Ultimately, Congress acted at the end of September 2009 to reduce costs by deferring USPS's prefunding payment for retiree health benefits in fiscal year 2009 by $4 billion. We strongly support the principle that USPS should continue to fund its retiree health benefit obligations to the maximum extent that its finances permit. Deferrals of funding such benefits would serve as financial relief. Such deferrals, however, increase the risk that in the future USPS will not be able to pay these obligations as its core business continues to decline and if sufficient actions are not taken to restructure operations and reduce costs. With these considerations, the current statutory approach for funding USPS's retiree health benefit obligations can be revised along the lines of the two broad approaches to funding retiree health obligations: pay-as-you-go and actuarial. The approaches vary in the amount of the annual payments, which, in turn, affects the unfunded obligation: lower annual payments result in higher unfunded obligation balances. For comparison purposes, we present the estimated unfunded balance for USPS's retiree health obligations in fiscal year 2020. These approaches to revising the current statutory approach are presented in the following text to illustrate the wide range of possible options.
Approach #1: Pay-as-you-go approach to funding retiree health benefit obligations

In March 2010, USPS proposed "to shift to a 'pay-as-you-go' system [for its retiree health benefits], paying premiums as they are billed" for current retirees. Estimated annual USPS payments under one possible pay-as-you-go approach are shown in table 5. Under this approach, USPS would make payments to the FEHB Fund for its share of retiree health premiums. The RHB Fund would not make or receive payments, but would continue to earn interest. Based on GAO analysis, USPS's unfunded obligations would be an estimated $99 billion in fiscal year 2020, or about $66 billion more than they would be under current law. This level of unfunded obligations would increase the risk that, absent future events that could reduce USPS's retiree health premiums, USPS's operations in the future may not be able to support the future payments that are expected. However, in such a circumstance, a mechanism could be created to pay a portion of premium payments from the assets that have accumulated in the RHB Fund once a threshold was reached, such as when the pay-as-you-go premium payments reach a particular percentage of postal revenues. Using the RHB Fund to pay a portion of retiree health premiums would reduce USPS's payments to the FEHB Fund and increase USPS's unfunded obligations by a corresponding amount. Such a mechanism could, if implemented carefully, provide some assistance to USPS in meeting its obligation to pay retiree health premiums. Different variations on a "pay-as-you-go" approach are also possible, such as using the RHB Fund to pay USPS's share of retiree health premiums for current retirees until the RHB Fund is exhausted and then reverting to USPS funding future premiums from its operations by paying the FEHB Fund directly. Under this alternative, USPS's payments would be suspended until the RHB Fund is exhausted, which would be approximately fiscal year 2025.
Approach #2: Actuarial approach to funding retiree health benefit obligations

An actuarial funding approach for USPS retiree health benefit obligations could provide a financing mechanism that allows the RHB Fund to remain self-sustaining in the long term. Under one such approach, unfunded retiree health benefit obligations would be reamortized starting in fiscal year 2010, instead of fiscal year 2017, as required under current law. Specifically, starting in fiscal year 2010, USPS would make payments to the RHB Fund that finance the following: the annual accrued cost of future benefits attributable to the service of current USPS employees, which OPM refers to as "normal costs," and amortization payments over 40 years to liquidate any unfunded obligations. Under this actuarial funding approach, USPS would make annual estimated payments that total about $80 billion from fiscal years 2010 through 2020 (see table 6). Based on GAO analysis, in fiscal year 2020, the estimated unfunded obligations under this method would be about $48 billion, or about $15 billion more than they would be under current law. PAEA's funding requirements represent a significant financial commitment for USPS, especially in light of the current economic environment and the major challenges it faces. As we have testified, we continue to be concerned about those options that would greatly reduce payments in the short term, only to defer payments into the future. Specifically, we are concerned that deferring these payments or some portion into the future increases the risk that USPS may have difficulty in making the future payments, particularly if mail volumes continue to decline. Because its retirees are eligible to receive the same health benefits as other federal retirees, if USPS cannot make its required payments, the U.S. Treasury, and hence the taxpayer, would still have to meet the federal government's obligations.
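The estimated fiscal year 2020 unfunded balances under the approaches discussed above can be lined up side by side. The figures below are the ones cited in this report; only the differences are computed.

```python
# FY2020 estimated unfunded retiree health obligations ($ billions),
# as cited in the report for each funding approach.
current_law = 33     # current PAEA payment schedule
pay_as_you_go = 99   # Approach #1
actuarial = 48       # Approach #2
# Deltas relative to current law, matching the "$66 billion more"
# and "$15 billion more" figures in the text.
print(pay_as_you_go - current_law)  # prints 66
print(actuarial - current_law)      # prints 15
```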
USPS employees participate in the federal government’s two civilian pension plans—the Civil Service Retirement System (CSRS) and the Federal Employees’ Retirement System (FERS)—that are administered by OPM. As of the end of fiscal year 2009, approximately 80 percent of USPS’s employees were enrolled in FERS, while 20 percent were enrolled in CSRS or the Dual Civil Service Retirement System and Social Security (Dual CSRS). As an agency employer, USPS is required by law to make certain payments to the Civil Service Retirement and Disability Fund (CSRDF) to fund its share of CSRS and FERS pension costs. In addition to providing an annuity at retirement based on years of service and “high-3” average pay, FERS also consists of Social Security and the government’s Thrift Savings Plan (TSP). As such, USPS contributes the employer’s share of Social Security taxes and the required contributions to its employees’ TSP accounts. Because USPS’s pension, Social Security, and TSP contributions are in part a function of employee wages as defined for these programs, changes in total employee wages will have a corresponding effect on USPS’s costs for these items. USPS’s retirement expenses were $5.9 billion in fiscal year 2009. As we have previously mentioned, most USPS employees are full time, can receive overtime pay, and receive pay increases and cost-of- living adjustments as set forth in collective bargaining agreements with various unions. Other USPS employees, typically managers and postmasters, are compensated under pay-for-performance programs. USPS’s ability to reduce the size of its workforce and the number of workhours, the strategies and options for which are described elsewhere in this report, will affect the pension, Social Security, and TSP benefit costs it incurs for most of its employees. Furthermore, the methods and rates at which USPS funds pension benefit costs are set forth in law. 
In 2002, OPM estimated that, under statutory pension funding requirements applicable to USPS at the time, USPS was on course to overfund its CSRS pension obligations. Congress responded by enacting the Postal Civil Service Retirement System Funding Reform Act of 2003, which changed the prior method of estimating and funding the USPS CSRS pension obligations. The act required USPS to contribute the employer's share of "dynamic normal cost" to the CSRDF, plus an amount to liquidate any underfunding, or "postal supplemental liability," both as determined by OPM. In July 2003, OPM submitted to Congress its plan enumerating the actuarial methods and assumptions by which OPM would make its determinations. In 2004, OPM and the Board of Actuaries for the CSRDF reconsidered OPM's methodology at the request of USPS and concluded that OPM's methodology was in accordance with congressional intent. OPM also rejected an alternative methodology offered by USPS. In January 2010, the USPS OIG issued a report on funding the USPS's CSRS pension responsibility. This report asserted that, despite the changes brought about in the 2003 Act, the current method of allocating the pension costs for post-1971 pay increases results in the inequitable allocation of pension obligations to USPS. The USPS OIG proposed an alternative allocation methodology that its actuaries estimated would, if implemented, change the funded status of USPS's CSRS pension obligations from a current $10 billion underfunding to a $65 billion overfunding. This alternative allocation methodology is the same methodology that OPM rejected in 2004. Application of the USPS OIG's proposed methodology would result in a shift of pension funding costs from USPS to the U.S. Treasury.

Health and life insurance: Health insurance premiums for current employees comprise a growing share of USPS expenses, rising from $2.2 billion (3.5 percent of total expenses) in fiscal year 2000 to $5.3 billion (7.4 percent) in fiscal year 2009.
Collective bargaining agreements require USPS to pay a more generous share of employees' health and life insurance premiums than most other agencies. For example, USPS paid, on average, 81 percent of health benefit premiums in fiscal year 2009 compared with 72 percent by other federal agencies. It also paid 100 percent of employee life insurance premiums, while other federal agencies pay about 33 percent. One option would be to increase employees' share of health and life insurance premiums. USPS's share of the health and life insurance premium payments could be reduced to the levels paid by most federal agencies, which would increase employees' annual premium payments and, according to USPS estimates, would have saved about $615 million in fiscal year 2009.

Workers' compensation: The 2003 President's Commission recommended making USPS's workers' compensation program more comparable to programs in the private sector to control costs, still provide adequate benefits, and address USPS's unfunded liability in this area. The commission recommended that USPS be allowed to (1) transition employees receiving workers' compensation to its pension plan on the basis of when the employee (if not injured) would be retirement eligible and (2) limit benefits from the current 75 percent for employees with dependents to two-thirds of the maximum weekly rate—the rate that applies to employees without dependents.

Limitations on the workforce mix of full-time and part-time postal employees and workforce flexibility rules contained in contracts with USPS's unions are key determinants of how postal work is organized and, thus, of its cost. USPS officials told us that as mail volume declines, it would be more efficient to have a much higher proportion of part-time workers than is currently allowed under the existing agreements. These part-time employees would have flexible schedules and responsibilities and lower pay than full-time career employees.
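The health premium share figures above admit a rough cross-check of the savings estimate. This is a back-of-envelope sketch, not GAO's or USPS's calculation: it assumes USPS's $5.3 billion health premium outlay corresponds to an 81 percent employer share of total premiums, and it covers health insurance only (the $615 million USPS estimate also includes life insurance).

```python
# Rough check: if the $5.3B USPS health premium outlay is an 81% employer
# share, dropping to the 72% share paid by most federal agencies implies
# savings on the order of the $615M USPS estimate (which also counts life
# insurance). Assumption-laden back-of-envelope, not an official figure.
usps_share_paid = 5.3e9   # USPS health premium payments, FY2009
employer_share = 0.81     # average USPS share of health premiums
reduced_share = 0.72      # average share paid by other federal agencies
total_premiums = usps_share_paid / employer_share
savings = usps_share_paid - total_premiums * reduced_share
print(f"${savings / 1e9:.2f}B")  # prints "$0.59B" from health premiums alone
```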
A key issue is how USPS can obtain greater flexibility through the collective bargaining process so that it can adjust its workforce more quickly to adapt to changing volume and revenue. Some options for postal workforce mix and work rules include the following:

Part-time workers: Increase the percentage of part-time employees, who could work more flexible schedules, including less than an 8-hour shift. Such flexibility could help match USPS's workforce to the changing workload, which varies greatly depending on the day of the week and the time of the year.

Job flexibility: Increase the flexibility to use employees in different assignments. Changes in the skill requirements of some jobs and the needs of operations have made it more feasible and necessary for employees to be trained in different tasks and work in different areas, depending on daily needs. Under current collective bargaining agreements, USPS can assign employees to "cross crafts" and perform different duties, but the agreements require managers to consider wage level, knowledge, and experience before asking employees to perform duties outside of their normal purview.

Another area where USPS can reduce operational costs is optimizing its mail processing, retail, and delivery networks; eliminating growing excess capacity and maintenance backlogs; and improving efficiency. Declines in mail volume and continuing automation have increased costly excess capacity that was a problem even when mail volume peaked in fiscal year 2006. USPS no longer needs—and can no longer afford—to maintain all of its retail and mail processing facilities. For example, USPS has reported that it has 50 percent excess plant capacity in its First-Class Mail processing operations. Although USPS has begun efforts to realign and consolidate some mail processing, retail, and delivery operations, additional efforts are urgently needed to overcome obstacles.
USPS has faced formidable resistance to facility closures and consolidations because of concerns about how these actions might affect jobs, service, employees, and communities, particularly in small towns or rural areas. According to some Members of Congress and postmaster organizations, among others, post offices are fundamental to the identity of small towns, providing them with an economic and social anchor. Another issue is that inadequate USPS financial resources could impede efforts to optimize postal mail processing, retail, and delivery networks by limiting available funding for transition costs. Reducing operational and network costs would require navigating statutory requirements, regulations, procedures, and service standards, including the following:

USPS is required by law to provide adequate, prompt, reliable, and efficient services to all communities, including a maximum degree of effective and regular services in rural areas, communities, and small towns where post offices are not self-sustaining.

USPS is specifically prohibited from closing small post offices "solely for operating at a deficit." Statutory requirements also specify the process and criteria for post office closings, including appellate review by PRC. Also, USPS regulations prescribe processes for closing, consolidating, and relocating post offices.

PAEA requires USPS to develop and use procedures for providing public notice and input before closing or consolidating any mail processing or logistics facilities.

Appropriations provisions restrict post office closures and mandate 6-day delivery.

Service standards drive operations at mail processing facilities. In this regard, PAEA requires USPS to establish and maintain modern delivery standards. USPS standards currently call for delivery of most local First-Class Mail overnight and most long-distance First-Class Mail in 2 to 3 days.
A PRC hearing and advisory opinion are required when USPS submits a proposal to make changes that would generally affect service on a nationwide or substantially nationwide basis. In 2006, PAEA encouraged USPS to expeditiously move forward in its streamlining efforts, recognizing that USPS has more processing facilities than it needs. USPS has begun efforts to consolidate some mail processing operations, but much more needs to be done. Since 2005, USPS has closed only 2 of its 270 processing and distribution centers. Over this period, it also has closed some facilities, such as 68 Airport Mail Centers and 12 Remote Encoding Centers. Between fiscal years 2005 and 2009, the Area Mail Processing (AMP) process has been used to implement 13 consolidations, saving a projected $31 million, but 39 under consideration were canceled, according to a recent USPS OIG report. This report also noted that another 16 AMP consolidations have been approved, while 30 remained under consideration. When determining whether to close a particular mail processing facility, key factors include the role of the facility in providing secure and timely delivery in accordance with its service standards as well as the expected cost reductions or productivity gains. Furthermore, we have reported that the process for governing such decisions should be clearly defined and transparent, and include public notice and meaningful engagement with affected communities, mailers, and employees. In 2005, we recommended that USPS enhance transparency and strengthen accountability of realignment efforts to assure stakeholders that such efforts would be implemented fairly and achieve the desired results. We have since testified that USPS took steps to address these recommendations and should be positioned for action. Individual facility decisions are best made in the context of a comprehensive, integrated approach for optimizing the overall mail processing network. 
Key process issues in this area include how to better inform Congress and the public about the purpose and scope of USPS's optimization plans, address possible resistance to consolidating operations and closing facilities, and ensure that employees will be treated fairly. Options in the mail processing area include the following:

Close major mail processing facilities: The Postmaster General and other stakeholders have recently said that USPS could close many major mail processing facilities while maintaining current standards for timely delivery. Some stakeholders have estimated that more than one-half of these facilities are not needed.

Relax delivery standards to facilitate closures and consolidations: USPS officials and experts have also noted that additional major processing facilities could be closed if delivery standards were relaxed. For example, one senior USPS official estimated that about 70 processing facilities could be eliminated if local First-Class Mail were to be delivered in 2 days instead of overnight.

Introduce a discount for destination-entry of First-Class Mail: Some mailers favor having USPS introduce a discount for entering First-Class Mail at facilities that are generally closer to the mail's final destination. For mail sent to distant recipients, such destination entry would be expected to bypass some mail processing facilities and some USPS transportation. However, USPS officials told us that they did not believe that USPS could capture the potential cost savings from creating such a discount, because of existing excess capacity. If such a discount were to be applied to mail that is already locally entered—which comprises much First-Class Mail volume—that could reduce revenues with little corresponding cost savings.

USPS's retail network has remained largely static, despite expanded use of retail alternatives and population shifts.
USPS continues to provide service at about 36,500 post offices, branches, and stations and has not significantly downsized its retail operations in recent years. Furthermore, USPS has a maintenance backlog for its retail facilities. USPS officials stated that maintenance has historically been underfunded, causing it to focus on "emergency" repairs at the expense of routine maintenance. USPS has limited its capital expenditures to help conserve cash, an action that may affect its ability to make progress on its maintenance backlog. USPS recognizes the need to adjust its retail network to provide optimal service at the lowest possible cost and has expanded its use of alternatives to traditional post offices. In 2009, customers could also access postal services at more than 63,000 physical locations, for example, by purchasing stamps at drug stores and supermarkets. By fiscal year 2009, nearly 30 percent of retail transactions were conducted in locations other than USPS retail facilities. In addition, self-service options, such as Automated Postal Centers, are located in postal retail facilities. Opportunities to consolidate retail facilities are particularly evident in urban and suburban areas, where USPS retail locations are close to one another, customers have more options, and facilities are expensive to operate and maintain. Some of the key issues in the retail area include whether USPS should retain its current retail network and find sources of revenue to support it other than through the sale of postal products, or whether it should eliminate unnecessary facilities, modernize its retail services, and partner with the private sector to provide services in other locations, such as shopping malls. Another issue is whether USPS should provide other governmental services in postal facilities and, if so, whether it would receive reimbursement.
Options in the retail area include the following:

Optimize USPS's retail facility network by expanding retail access and closing unneeded facilities: In March 2010, USPS stated that it plans to expand customer access while reducing costs through new partnerships with retailers and other options, such as self-service kiosks. USPS explained that post offices are often less convenient for customers in terms of hours and accessibility, and cost two to three times more than alternatives. USPS also noted that it has more retail locations than McDonald's, Starbucks, Walgreens, and Walmart combined, but the average post office provides service to about 600 customers weekly—about one-tenth as many as a typical Walgreens location. Additional postal retail locations could be located within drug stores, grocery stores, and other retail chain stores, such as those in shopping centers and local malls. These retail stores are often open 7 days a week, for longer hours than postal retail facilities. According to USPS officials, stores that could provide access to postal retail services pay their employees less than postal retail clerks, who currently earn an average of over $40 per hour in compensation and benefits. USPS stated that it would reduce redundant retail facilities as customers continue to shift to alternatives, but noted that proposals to close facilities have led to protests and resistance. USPS called for Congress to eliminate the statutory prohibition on closing small post offices solely for operating at a loss, and stated that changes would be needed to the regulatory review process for closing post offices. USPS also called for reduced constraints on the decision-making process for providing access to postal services. If USPS is not able to streamline its retail operations, it may need to make major reductions in the hours that post offices and retail facilities are open for window service.
Leverage the USPS retail network: USPS could maintain current retail facilities and leverage this network by providing other nonpostal goods or services. Such activities might be performed by USPS or private-sector partners and other government agencies. For example, these partners and agencies could lease unused space in USPS facilities. Stakeholders suggested many options for diversifying into nonpostal retail areas, which could include selling nonpostal products at postal retail facilities and providing services for other federal, state, or local government agencies. While this option may increase the use of USPS’s retail network, it may raise costs if facility modifications are needed, such as measures to maintain mail security at a facility where other business partners are colocated. Also, some competitors may raise concerns about USPS’s legal advantages. For example, according to a 2007 report to Congress by FTC, USPS is exempt from state and local taxes and fees and some other state and local statutes and regulations. USPS has opportunities to reduce the costs of delivery, its most costly operation. More than 320,000 carriers account for close to one-half of USPS salary and benefit expenses. Because USPS delivers 6 days per week to most of its 150 million addresses, regardless of mail volume, it is difficult to reduce delivery costs commensurate with declining mail volume. In fiscal year 2000, carriers delivered an average of about 5 pieces of mail per day to every address, which fell to about 4 pieces in fiscal year 2009—a decline of 22 percent. This trend is continuing as mail volume declines and the delivery network continues to expand. Over 900,000 delivery points were added in fiscal year 2009—increasing costs by over $190 million, according to a USPS estimate. 
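The delivery-network figures above imply a rough annual cost per added address. The back-of-the-envelope calculation below is purely illustrative; it simply divides the two rounded figures cited (over 900,000 new delivery points, over $190 million in added cost) and should not be read as an official USPS unit-cost estimate.

```python
# Back-of-the-envelope check of the delivery-network figures cited above:
# over 900,000 delivery points added in fiscal year 2009, increasing
# annual costs by over $190 million (per a USPS estimate).
new_delivery_points = 900_000
added_annual_cost_dollars = 190_000_000

cost_per_new_point = added_annual_cost_dollars / new_delivery_points
print(f"Implied annual cost per added delivery point: ${cost_per_new_point:.0f}")
# Roughly $211 per added address per year, before any offsetting revenue.
```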
In addition to the number of delivery points, the efficiency and cost of delivery operations depend on a variety of other factors, including the type of carrier route or the location of the receptacle where mail is delivered. For example, most customers (about 87 percent) receive their mail via one of the three different types of carrier routes identified in table 7. These routes are served by carriers under different compensation systems, which largely account for the differences in their costs. Cost differences also exist related to the location of the mail receptacle (see table 8). We have reported on USPS’s ongoing efforts to increase the efficiency of mail delivery. USPS has begun to install 100 machines for its $1.5 billion Flats Sequencing System to sort flat-sized mail into delivery order. USPS expects this system to eliminate costly manual sorting, thereby improving delivery efficiency, accuracy, consistency, and timeliness. USPS is also realigning city carrier routes to remove excess capacity, which is expected to generate more than $1 billion in annual savings. This effort is expected to result in reduced facility space needs, increased employee satisfaction, and more consistent delivery service. Route realignment has been made possible by collaboration between USPS and the National Association of Letter Carriers and is continuing this fiscal year. In addition, USPS may have additional opportunities to further increase delivery route efficiency, such as by promoting the use of more efficient delivery modes for new delivery points. Options in the delivery area include the following: Decrease delivery frequency from 6 days a week to 5 days a week: USPS favors eliminating Saturday delivery to provide substantial financial savings. According to USPS studies, its savings would be primarily achieved by eliminating work performed by city and rural letter carriers. 
Additional savings would be realized from reducing the use of delivery vehicles as well as reducing the scope of mail processing activities that support Saturday delivery. However, concerns have been raised about the impact on customers, who may need to wait longer to receive time-sensitive mail or go to USPS retail facilities to pick up mail; senders, who may have to change when they send mail; and USPS, which may lose the competitive advantage of delivering on Saturdays. According to USPS, eliminating Saturday delivery is estimated to result in annual savings of about $3 billion. PRC reported in 2009 that eliminating Saturday delivery would result in estimated annual savings of about $2.2 billion, on the basis of somewhat different assumptions regarding the likely effects on mail volume and costs. For this option to be implemented, Congress would need to omit from USPS’s annual appropriations the statutory restriction that mandates 6-day delivery. USPS filed a request on March 30, 2010, for a PRC advisory opinion on its proposal to eliminate Saturday delivery, which would lead to a public proceeding that would include input by interested parties. Allow USPS to determine delivery frequency on the basis of local mail volume: A related option would be to change delivery frequency to match demand, which could vary by season as well as by local area. For example, USPS could have less frequent delivery in low-volume summer months than during the high-volume holiday season. Some residents already do not receive 6-day delivery, particularly those located in remote or seasonal vacation areas. A consequence of this option could be more frequent delivery to areas with higher mail volume, such as higher-income areas, which tend to receive much more mail. However, low-income residents and others, such as the elderly and disabled, may rely more on mail delivery. This option may also be criticized as inconsistent with current statutory requirements. 
USPS is required by law to provide prompt, reliable, and efficient services to patrons in all areas. It is also required by law to provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining. Expand the use of more cost-efficient modes of delivery for new addresses, including cluster boxes and curbline delivery: USPS has recently estimated that this option could save around $2.5 billion annually by moving certain door deliveries to centralized deliveries. However, USPS officials told us that they and some mailers are concerned that this option would lead to residents picking up their mail less frequently, which could delay remittances and lower the value of advertising mail. It also would affect access to mail, particularly for customers who currently have mailboxes attached to their homes. Further streamlining of USPS’s field structure could help reduce facility and personnel costs. USPS has the authority to review the need for field administrative offices and streamline its field structure. For example, in fiscal year 2009, it closed 1 of its 9 area offices and 6 of its 80 district offices. USPS has many opportunities to generate additional net revenue, particularly from postal products and services; however, as it has noted, results from actions to generate revenue other than rate increases are likely to be limited compared with its expected losses. Aside from rate increases, USPS projects that it can increase profits by $2 billion by fiscal year 2020 through product and service initiatives. For example, according to USPS, it will work to increase direct mail use among small and medium-sized businesses and increase volumes in both First-Class Mail and advertising mail through targeted promotions. 
USPS also will continue to leverage its “last-mile” network to transport and deliver packages to their final destinations and work to grow other retail services, such as passport services provided by USPS and Post Office box rentals. Key challenges in the area of revenue generation include the following: The short-term results will likely be limited by the economic climate as well as the ongoing diversion to electronic alternatives. The potential for some actions will be limited because they will apply to mail or services that generate only a small fraction of revenues. USPS projects that its revenue will stagnate in the next decade despite further rate increases. Its revenue peaked at $75 billion in fiscal year 2007 but is projected to decline to $66 billion in fiscal year 2010, and to reach $69 billion in fiscal year 2020—growth that is below expected inflation. Rate increases for market-dominant products, such as First-Class Mail and Standard Mail, would address pressing needs for revenue and could be used to better align rates and discounts with the costs, profitability, and price-sensitivity of mail. In the coming decade, rate increases for market-dominant products up to the price cap could raise significant revenues since these products currently generate 88 percent of revenue, while competitive products comprise nearly all other revenue. Some key issues include the following: At what point are rate increases self-defeating, potentially triggering large, permanent declines in mail volume? How does USPS balance increasing rates to generate revenues with the impact on mailers and the long-term effects on volume, revenues, and the broader mailing industry? Would an “exigent” increase in postal rates over the price cap be justified, considering that it is limited by law to extraordinary or exceptional circumstances? 
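The price cap mechanics discussed above can be sketched with a simple, purely illustrative calculation. The CPI-U change and per-piece rates below are assumed values for illustration only, not actual USPS or PRC figures:

```python
# Illustrative sketch of how an inflation-based price cap bounds
# market-dominant rate increases; all numbers here are assumptions,
# not actual USPS or PRC figures.
assumed_cpi_change = 0.02     # assumed 2% annual CPI-U change (the cap)
current_avg_rate = 0.44       # assumed average per-piece rate, in dollars

max_rate_under_cap = current_avg_rate * (1 + assumed_cpi_change)
print(f"Maximum average rate under the cap: ${max_rate_under_cap:.4f}")

# An "exigent" increase above the cap is permitted only in extraordinary
# or exceptional circumstances and is subject to PRC review.
proposed_rate = 0.47          # hypothetical proposed rate
needs_exigent_case = proposed_rate > max_rate_under_cap
print("Exceeds cap (would require exigent justification):", needs_exigent_case)
```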
Some options include the following: “Exigent” rate increases over the price cap: USPS projects that its annual losses will increase greatly, even if rates for market-dominant products increase by the maximum allowed under the price cap. To improve its financial viability, USPS announced in March 2010 that it would seek “a moderate exigent price increase” for its market-dominant products that would be effective in 2011. An exigent rate increase over the price cap may produce a large short-term revenue boost. However, a very large rate increase could be self-defeating by increasing incentives for mailers to accelerate diversion to electronic alternatives, thereby lowering revenues in the long run and adding to USPS excess capacity. In 2009, USPS cited the potential impact on mail volume and the mailing industry when it ruled out an exigent rate increase for 2010—a year when the inflation-based price cap was zero—and announced that rates would not change for market-dominant products. Rate increases for competitive products: USPS annually increased rates in 2008, 2009, and 2010 for competitive products, including Priority Mail and Express Mail. Major USPS competitors, such as United Parcel Service (UPS) and FedEx, also have a history of annual rate increases. USPS plans to pursue more volume-based rate incentives to stimulate additional mail use and take advantage of its excess capacity. For example, USPS reported that volume-based incentives can stimulate more advertising mail sent for sales, customer acquisition, and customer retention purposes, which should lead to greater mail use in the future. The additional mail volume can take advantage of USPS’s large excess operational capacity. However, results to date suggest that such incentives can increase net income, but they appear to have limited potential compared with USPS losses. 
For example, a 2009 “summer sale” for Standard Mail that offered lower rates for volumes over mailer-specific thresholds reportedly had little effect on USPS’s overall financial results for the fiscal year. USPS has estimated that about 38 percent of the volume qualifying for reduced “summer sale” rates would have been sent in the absence of the incentive, which reduced the profitability of this initiative. USPS plans to implement a similar initiative for summer 2010. Some mailers have said that USPS should enter into more negotiated service agreements (NSAs) with individual business mailers of market-dominant products. NSAs generally specify mutual agreements between USPS and mailers involving the preparation, presentation, acceptance, processing, transportation, and delivery of mailings under particular rate, classification, and service conditions, and restrictions that go beyond those required of other mailers. USPS did not generate net income from its seven NSAs in fiscal years 2007 through 2009 combined. These NSAs generally offered mailers lower rates for volumes that exceeded thresholds and had provisions to reduce some USPS costs, such as not returning undeliverable advertising mail and using electronic communications to provide this information to mailers. In comparison, USPS has negotiated about 100 contracts with business mailers of competitive products. Like NSAs for market-dominant products, contracts for competitive products are generally volume-based. These contracts also have provisions intended to lower USPS’s mail-handling costs. PRC has reported that the contracts it approved in fiscal years 2008 and 2009 are expected to improve USPS’s net revenue. In December 2009, USPS officials told us that after PAEA was enacted, USPS preferred to pursue the volume-based incentive programs for market-dominant products that we have previously described, instead of pursuing NSAs. 
In theory, NSAs can increase net income through incentives tailored to each mailer’s business needs, mailing practices, and opportunities to reduce USPS costs. In practice, it may be costly and time-consuming to negotiate NSAs and have them reviewed by PRC. The potential profitability of NSAs has been scrutinized in the past and is listed in PAEA as a factor for PRC to consider, along with (1) issues of fair competition, such as the availability of NSAs to similarly situated mailers, and (2) whether NSAs would cause unreasonable harm to the marketplace. These considerations relate to the broader question of whether USPS should have additional pricing flexibility and less PRC review of rates for its market-dominant products. USPS has suggested that regulatory and legal restrictions in this area need to be removed to provide greater flexibility, explaining that NSAs provide mailers with the opportunity to increase volume at a reasonable price. During 2009, USPS considered options for developing new postal products and product enhancements, such as (1) “hybrid” mail that could be created online and printed and sent close to its final destination, which might involve USPS partnerships with private companies, and (2) new, low-cost ways for handling consumer electronics and other items that are being returned for recycling or disposal. As an example of recent product enhancements, USPS introduced new flat-rate boxes for Priority Mail, which it reports have met customer needs and generated volume growth. Consistent with USPS’s stated strategy of providing greater value to its customers, some stakeholders told us that USPS should better understand and meet the needs and revenue growth opportunities of diverse mailers, in part through greater customer focus and improving the value of mail. Competitive products are a promising growth opportunity for USPS, especially packages mailed by businesses to consumers. 
USPS forecasts that the volume of competitive products will increase 40 percent over the next decade. However, this volume growth is expected to have limited impact on losses, in part because competitive products generate only 12 percent of revenues. USPS is working to increase revenues from competitive products by increasing its market share in the growing package delivery market as well as by delivering more packages for competitors, such as “last-mile” delivery of packages that UPS or FedEx transport close to the destination and provide to USPS for final delivery. A key issue is what the net return would be if USPS pursues a growth strategy requiring costly additional investment to upgrade its automation and tracking capabilities in an area with formidable competitors. USPS may have opportunities to increase volume by reducing mailers’ costs to prepare and enter mail as well as allowing more creative mail use for advertising and communications. However, this option could also risk additional costs to handle mail and provide assurance that discounted mail meets the necessary requirements. Some mailer groups and mailers have criticized USPS requirements that they consider to be impediments to volume and revenue growth. These stakeholders said that these requirements are costly for mailers but only yield marginal benefits for USPS, delay delivery, limit the effectiveness of mail, or are enforced in an overly stringent manner. USPS counters that (1) these requirements are needed to limit its handling costs and ensure that discounted mail meets the necessary requirements and (2) there are limited opportunities for it to increase revenues by simplifying its requirements. Some parties have said that USPS should strike a balance between requirements necessary for its operations and the need to provide mailers with flexible, low-cost methods to prepare and submit mail. USPS and mailers have long engaged in collaborative efforts to help define appropriate requirements. 
Redoubling efforts in this area could produce important benefits for USPS and the mailing industry. In 2009, USPS asked Congress to change the law so that it can diversify into nonpostal areas to find new opportunities for revenue growth, and some stakeholders have also supported diversification. USPS and the stakeholders we collected information from offered many options for USPS to diversify into nonpostal areas, either on its own or in partnership with private firms or government agencies. New nonpostal products and services that were identified include providing banking, financial, and insurance services; selling nonpostal products at its retail facilities; providing services for other federal, state, or local government agencies; carriers delivering nonpostal items or providing contract services (such as meter reading); advertising at USPS facilities; and providing electronic commerce. Diversification could involve entering new areas or earning revenues from business partners who sell nonpostal products at USPS retail facilities. Whether USPS should be allowed to engage in nonpostal activities should be carefully considered in light of its poor past performance in this area, as well as the associated risks and fair competition issues. We have previously reported the following: USPS lost nearly $85 million in fiscal years 1995, 1996, and 1997 on 19 new products, including electronic commerce services, electronic money transfers, and a remittance processing business, among others. In 2001, we reported that none of USPS’s electronic commerce initiatives were profitable, and that USPS’s management of these initiatives—such as an electronic bill payment service that was eventually discontinued—was fragmented, with inconsistent implementation and incomplete financial information. In enacting PAEA, Congress restricted USPS from engaging in new nonpostal activities. 
PAEA also required PRC to review USPS’s existing nonpostal services to determine whether they should be continued or terminated. PRC recently found that the intent of this requirement was to concentrate USPS’s focus on its core responsibilities and away from nonpostal services that are not justified by a public need that cannot be met by the private sector. Allowing USPS to diversify into nonpostal activities would raise a number of issues, including whether it should engage in nonpostal areas where there are private-sector providers and, if so, under what terms. Other issues relate to concerns about unfair competition; whether USPS’s mission and role as a government entity with a monopoly should be changed; how it would finance its nonpostal activities; what transparency and accountability provisions would apply; whether USPS would be subject to the same regulatory entities and regulations as its competitors; and whether any losses might be borne by postal ratepayers or the taxpayer. USPS reported in March 2010 that even if it could enter nonpostal areas, such as banking or selling consumer goods, its opportunities would be limited by its high operating costs and the relatively light customer traffic of post offices compared with commercial retailers. USPS also stated that the possibility of building a sizable presence in logistics, banking, integrated marketing, and document management is currently not viable because of its net losses, high wage and benefit costs, and limited access to cash to support necessary investment. 
USPS concluded in its Action Plan that building a sizable business in any of these areas would require “time, resources, new capabilities (often with the support of acquisitions or partnerships) and profound alterations to the postal business model.” Addressing challenges to USPS’s current business model may require restructuring its statutory and regulatory framework to reflect businesses’ and consumers’ changing use of the mail. While we do not address in this report whether USPS’s ownership structure should be modified, many other statutory and regulatory considerations that could help address the changing use of mail have been discussed; these relate to the following elements of USPS’s business model: Mission: What is an appropriate universal service obligation in light of fundamental changes in the use of mail? Role: Should USPS be solely responsible for providing universal postal service, or should that responsibility be shared with the private sector? Monopoly: Does USPS need a monopoly over delivery of certain types of letter mail and access to mailboxes to finance—in part or wholly—universal postal service? Governance and regulation: What is an appropriate balance between managerial flexibility and the oversight and accountability provided by the current governance and regulatory structure? USPS’s statutory mission is to provide postal services to “bind the nation together through the personal, educational, literary, and business correspondence of the people.” It is required by law to provide prompt, reliable, and efficient services to patrons in all areas and postal services to all communities. These and related requirements are commonly referred to as the universal service obligation. PRC has reported that universal postal service has seven principal attributes (see table 9). Key questions regarding universal postal service include the following: How much postal service does the nation need and how should it be funded? 
Should the costs of providing universal service be borne by postal ratepayers, or should taxpayers subsidize some unprofitable aspects of universal service that benefit the nation? If USPS cannot be financially viable without reducing universal postal service, what changes would be needed? Who should determine whether changes should be made to universal service (e.g., Congress, USPS, or PRC)? In addition, issues have been raised about whether all postal products should be required to cover their costs, even if they provide social benefits, or receive a subsidy through appropriations. Historically, some types of mail were designed to further broad public goals, such as the dissemination of information, the distribution of merchandise, and the advancement of nonprofit organizations. For example, Periodicals (mainly, mailed magazines and newspapers) have historically been given favorable rates, consistent with the view that they help bind the nation together, but this class has not covered its costs for the past 13 fiscal years. Losses from Periodicals increased from $74 million in fiscal year 1997 to $438 million in fiscal year 2008 and to $642 million in fiscal year 2009. These escalating losses have provoked growing concern and controversy. Postal stakeholders are currently debating what corrective actions, if any, are warranted, and their possible impact on Periodicals. Other money-losing types of mail with social benefits include the following: Single-piece Parcel Post was introduced in 1913 to provide affordable parcel delivery; this opened up the mail order merchandise market, especially in rural areas. Media Mail, or “book rate,” as it was formerly known, was initially designed in 1938 to provide lower rates for mailed books and encourage the mailing of educational materials. Library Mail was introduced in 1928 as a preferential rate for books sent by or to libraries and was later expanded to schools, colleges, and universities in 1953. 
According to a Congressional Research Service report, when Congress put USPS on a self-sustaining basis in 1971, it continued to subsidize the mailing costs of such groups as the blind, nonprofit organizations, local newspapers, and publishers of educational material, by providing an appropriation to cover the revenues that were given up, or “forgone,” in charging below-cost rates to these groups. Appropriations for these subsidies mounted as postage rates and the number of nonprofits grew, approaching $1 billion annually in the mid-1980s. Successive administrations sought to cut these costs by reducing eligibility and having other mailers bear more of the burden. Questions continue about how these money-losing types of mail should be funded. All money-losing market-dominant products lost $1.7 billion collectively in fiscal year 2009, up from $1.1 billion in fiscal year 2008 (see table 10). In addition to the $642 million lost from Periodicals in fiscal year 2009, the largest money-losing product was Standard Mail Flats ($616 million). Losses from Standard Mail Flats have nearly tripled over the past fiscal year. In its Annual Compliance Determination report for fiscal year 2009, PRC discussed actions that could be taken to deal with these and other money-losing products. Some of the losses from Standard Mail are due to unprofitable mail sent by nonprofit organizations. By law, rates for nonprofit Standard Mail are 60 percent of the rates for the most closely corresponding type of for-profit Standard Mail. However, nonprofit rates benefit charitable and religious organizations, and Congress has long required preferential rates for nonprofit mail. If Congress were to decide that all market-dominant products should cover their costs, it could also revisit other legal requirements that constrain USPS’s pricing flexibility for these products. 
First, the price cap requirement may need to be revisited to enable some types of mail to be increased over the cap without resorting to the exigent rate increase process. For example, the average rate increase for the Periodicals class is limited to inflation under the price cap. Similarly, single-piece Parcel Post, Media Mail, and Library Mail are a significant part of the Package Services class that is also covered by the price cap. In addition, USPS could continue to gradually implement a rate structure for Periodicals that is based more on costs, which could involve rate increases for mail that is more costly to handle (e.g., mail provided to USPS in sacks, rather than on pallets). However, such a rate structure could disproportionately affect some small-circulation magazines. Issues regarding which entity should consider and decide on changes to universal service—including Congress, PRC, or USPS—have long been debated. Because many aspects of universal service are required by law, Congress would have to make any changes in these areas. For example, Congress would have to change certain aspects of universal postal service that are required under current law, such as 6-day delivery, statutory preferences for nonprofit mail, and restrictions on closing small post offices. For some aspects of universal service, such as related pricing issues, PRC has the authority to act by establishing regulations that govern postal pricing and overseeing USPS compliance with legal requirements. USPS has flexibility to act on some other aspects, such as establishing and maintaining service standards for timely mail delivery. Another issue is whether postal services are an inherently governmental function, and whether USPS should be the only entity responsible for universal postal service. The federal government’s responsibility for postal services is detailed in Title 39 of the United States Code. 
A possible rationale for sharing this responsibility would be to allow private companies to provide postal services, with the idea that competition could give some customers more choices that better meet their needs, through lower cost products and expanded services. A related consideration is that some aspects of postal service, particularly mail delivery, are considered to have economies of scale, meaning that, in theory, one provider might fulfill this function more economically than multiple providers. In practice, multiple providers—including USPS and numerous companies—already deliver mail (e.g., contractors who provide long-distance mail transportation and deliver mail to households located along sparsely populated highway routes). Another question is whether USPS should continue to fulfill other roles, or whether these roles should be discharged by other agencies. For example, whether USPS or some other law enforcement body should enforce postal laws was considered in the postal reform debate—specifically, whether the Postal Inspection Service that enforces mail fraud and other statutes should be transferred to another federal law enforcement agency. Another example is USPS’s involvement in responding to national disasters, including hurricanes and terrorist attacks. In this regard, a recent executive order stated that USPS has the capacity for rapid residential delivery of medical countermeasures across all U.S. communities, and that the federal government will use USPS to implement national medical countermeasures in the event of a large-scale biological attack. USPS has two statutory monopolies: (1) the delivery of certain letter mail and (2) exclusive access to mailboxes. USPS has a monopoly over the delivery of certain letter mail to help ensure that it has sufficient revenues to carry out public service mandates, including universal service. USPS has promulgated regulations to identify exceptions to the postal monopoly. 
Some key exceptions include “extremely urgent” letters (generally, next-day delivery) and outbound international letters. Most mail volume is covered by this monopoly, regulated as market-dominant mail, and subject to the price cap. Over the years, Congress has reevaluated the need for the mail monopoly, broadening and reducing it at various times, including in PAEA. For over 200 years, USPS and its predecessor, the former U.S. Post Office Department, operated with a statutory mail monopoly, which restricted the private delivery of most letters. Congress created the mail monopoly as a revenue protection measure to help enable the former Post Office Department to fulfill its mission. A rationale for the mail monopoly is to prevent private competitors from engaging in an activity known as cream-skimming, that is, offering service on low-cost routes at prices below those of USPS, while leaving USPS with high-cost routes. Furthermore, allowing private companies to compete for mail now covered by the monopoly could lead to additional declines in mail volume and revenue, thereby increasing excess capacity and reducing USPS’s net income. According to PRC, the most frequent argument against the mail monopoly is that, assuming a legal framework continues to exist to protect the public interest and the provision of universal service, competitive markets might produce more efficient, innovative, flexible, and fairer services to buyers and producers. Narrowing or eliminating the monopoly could increase consumer choice and provide incentives for USPS to become more effective and efficient. Critics of the monopoly also cite the experience of foreign countries that have narrowed, eliminated, or are phasing out their monopolies. The second monopoly, the mailbox restriction, prohibits anyone from knowingly and willingly placing mailable matter without postage into any mailbox. 
As we have reported, the purposes of the restriction, which dates back to 1934, were twofold: to stop the loss of postal revenue resulting largely from private messengers delivering customer bills to mailboxes without paying postage and to decrease the quantity of extraneous matter being placed in mailboxes. PAEA did not change the mailbox monopoly. USPS has stated that continuation of the mailbox monopoly would best preserve customer service, safety, security, and the value of mail. According to USPS, the mailbox monopoly helps deter mail theft and identity theft, facilitates enforcement when violations occur, and is needed for efficient mail collection and delivery. We have previously reported that critics of the mailbox monopoly said it impedes competition and infringes on private property. FTC reported in 2007 that the mailbox monopoly reduces competition and raises competitors’ costs of delivering products that otherwise could fit into a mailbox. While FTC recognized mail security and privacy issues, it concluded that Congress and PRC may want to consider whether relaxing the mailbox monopoly to allow consumers to choose to have private carriers deliver competitive products to their mailboxes would create net benefits. In 2008, PRC stated that it “does not recommend any changes to the mailbox rule,” citing issues with mail security and USPS efficiency. PRC also noted that its public proceeding evidenced broad support for continuing the mailbox monopoly.

The effectiveness of USPS’s governance and regulatory structure is critical to its success and to ensuring that quality, affordable postal services are provided to the American people. The 2003 President’s Commission noted that managerial accountability must come from the top, with USPS being governed by a strong corporate-style board that holds its officers accountable.
The commission concluded that giving USPS greater flexibility would require enhanced oversight by an independent regulatory body endowed with broad authority, adequate resources, and clear direction to protect the public interest and ensure that USPS fulfills its duties. A number of regulatory changes were implemented after PAEA was enacted, but a thorough review of these changes has not yet been conducted. PAEA required PRC to submit a report to Congress by December 2011 concerning the operation of the amendments made by PAEA and any recommendations for improvements to the U.S. postal laws. Another PRC report is required by December 2016 to determine whether the system for regulating rates and classes for market-dominant products is achieving its objectives.

The Board of Governors directs the exercise of the powers of USPS, directs and controls its expenditures, reviews its practices, and conducts long-range planning. The board sets policy; participates in establishing postage rates; and takes up various matters, such as mail delivery standards and some capital investments and facilities projects. By law, governors are chosen to represent the public interest and cannot be “representatives of specific interests using the Postal Service.” Despite the changes made by PAEA, the qualifications of USPS governors continue to be an issue. Members of the Board of Governors told us that the board lacks sufficient business and financial expertise. The members also suggested that some governors should not be politically appointed. In this regard, the 2003 President’s Commission recommended that the Board of Governors comprise 12 individuals: 3 presidential appointees, 8 independent members selected by the 3 appointees with the concurrence of the Secretary of the Treasury, and the Postmaster General (who would be selected by the other 11 members).
Should any of the operational or structural options outlined in this report be implemented, Congress, USPS, the Board of Governors, PRC, and other relevant postal stakeholders could consider whether governance and regulatory structures need to be changed to reflect an appropriate balance in the oversight roles of these entities. PAEA gave USPS more pricing and product flexibility, which was balanced by strengthening PRC’s oversight authority. Among other things, PAEA required PRC to develop the regulatory structure for postal rates, consult with USPS on establishing delivery service standards, and annually determine USPS’s compliance with applicable laws. Also under PAEA, PRC was granted the authority to issue subpoenas; direct USPS to adjust rates not in compliance with applicable postal laws; or, in cases of deliberate noncompliance with applicable postal laws, levy fines.

Action by Congress and USPS is urgently needed on a number of difficult issues to facilitate progress toward USPS’s financial viability by reducing costs, increasing efficiency, and generating revenues. The significant deterioration in USPS’s financial condition over the past 2 years, its increasing debt, and the grim forecast for declining volume over the next decade led GAO to add USPS’s financial condition to its high-risk list in July 2009. We suggested that USPS develop and implement a broad restructuring plan, with input from PRC and other stakeholders, to identify specific actions planned, key issues, and steps Congress and other stakeholders need to take. On March 2, 2010, USPS issued its Action Plan, which identified seven key areas in which it would need legislative changes or support. Many of the options discussed in the plan are ones we have also analyzed and included in this report for consideration.
USPS forecasts of mail volume, revenue, and net income over the next decade quantify the magnitude of the challenges that it faces from continued volume decline to about 150 billion pieces in fiscal year 2020—about the same as the volume level in fiscal year 1986—and a projected cumulative $238 billion shortfall if no additional efficiency or revenue initiatives are undertaken. USPS’s Action Plan indicates that actions within its control can close $123 billion of this financial gap, but that actions outside its existing authority— including some involving statutory changes—would be needed to eliminate the remaining financial gap. Action on these issues will likely take several years to fully implement once a decision is made on the scope of needed changes. Therefore, agreement on next steps is urgently needed. If USPS is to continue being self-financing, Congress, USPS, and other stakeholders will need to reach agreement on major issues that impede its ability to implement actions to reduce losses. These issues include funding postal retiree health benefits; reexamining binding arbitration; realigning services, operations, networks, and workforce to reflect declining volume; and changing use of the mail in a dynamic marketplace as well as generating revenue. Funding postal retiree health benefits: USPS has said that it cannot afford its required prefunding payments on the basis of its significant volume and revenue declines, incurring large losses, nearing its debt limit, and limited cost-cutting opportunities under its current authority. Several proposals have been made to defer costs by revising the statutory requirements, and it is important that USPS fund its retiree health benefit obligations— including prefunding these obligations—to the maximum extent that its finances permit. 
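The gap arithmetic behind the Action Plan figures cited above can be sketched as a quick check. This is a minimal illustration, not USPS's own calculation; the variable names and the assumption that the remaining gap is the simple difference between the two published figures are ours.

```python
# Rough check of the Action Plan figures cited above (all in $ billions).
# Assumption: the gap requiring external action is the simple difference
# between the projected cumulative shortfall through fiscal year 2020 and
# the portion USPS says it can close under its existing authority.
projected_shortfall = 238   # cumulative shortfall through FY2020, if no new initiatives
within_authority = 123      # closable by actions within USPS's control
remaining_gap = projected_shortfall - within_authority
print(f"Gap requiring statutory or other external action: ${remaining_gap} billion")
# → Gap requiring statutory or other external action: $115 billion
```

On this reading, roughly $115 billion of the projected shortfall depends on actions outside USPS's existing authority, including statutory changes.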
In addition to considering what is affordable and a fair balance of payments between current and future ratepayers, Congress would also have to address the impact of these proposals on the federal budget. CBO has raised concerns about how aggressive cost-cutting measures would be if prefunding payments for retiree health care were reduced. This concern further indicates the need for broad agreement on specific realignment actions, the time frame for implementation, and the expected financial impact. Binding arbitration: One of the most difficult challenges USPS faces is making changes to its compensation systems, which will be critical to its financial condition since wages and benefits comprise 80 percent of its costs. In this regard, the time has come to reexamine the structure for collective bargaining that was developed 40 years ago. Since that time, the competitive environment has changed dramatically and rising personnel costs are contributing to escalating losses. Thus, it is imperative to ensure that USPS’s financial condition be considered in upcoming collective bargaining if the process reaches binding arbitration. Realigning postal services with changing use of the mail: As mail use by businesses and consumers continues to change, USPS has stated that it cannot afford to provide the same level of services and that changes are needed. USPS has estimated that it could reduce costs by about $3 billion annually if it could reduce delivery frequency from 6 days to 5 days, but congressional agreement would be needed to omit the 6-day delivery requirement from USPS’s annual appropriations. USPS filed a request on March 30, 2010, for a PRC advisory opinion on its proposal to eliminate Saturday delivery. Generating revenue through new or enhanced products and services: On the revenue side, a key issue is whether USPS can make sufficient progress using the pricing and product flexibility provided in PAEA or whether changes may be needed.
The Action Plan stated that USPS needs additional authority to adjust its pricing to better reflect market dynamics and proposed some changes. These proposals have not been fully analyzed, nor have PRC and stakeholders had an opportunity to provide input. Thus, it is unclear what statutory or regulatory changes should be made at this time. Another key issue is whether USPS should be allowed to engage in new nonpostal areas that may compete with private firms. Congress considered many of the public policy issues in this area related to fair competition prior to PAEA’s enactment in 2006 and decided at that time not to let USPS engage in new nonpostal areas. It is not clear what specific actions USPS would like to take, their expected profitability, or how they might affect other businesses. USPS’s current financial condition may limit its expansion into other areas in the short term, but ultimately its plans in this area could affect its operations. Realigning operations, networks, and workforce: Once Congress and USPS have determined what, if any, changes should be made in the products and services that it provides, corresponding changes will be needed in postal operations, networks, and workforce. This area involves some public policy issues that Congress may want to address. USPS will need to address detailed operational issues related to increasing cost-efficiency. Some of the difficult tradeoffs in this area include USPS’s need to significantly reduce its size to remain self-financing and keep prices affordable, versus concerns about whether such reductions could harm the value of its brand, its network of physical assets, and the social benefits that it provides as well as the effects of these actions on its workforce. USPS has made limited progress in optimizing its networks over the last decade, particularly in facilities that include public access to retail operations.
For example, in July 2009, USPS initiated a PRC review of over 3,600 retail stations and branches located primarily in urban and suburban areas for possible consolidation or discontinuance, but fewer than 200 facilities remain under consideration for such actions. In early March, PRC issued its advisory opinion on USPS’s proposed retail consolidations, affirming USPS’s authority to adjust its retail network while recommending several process improvements. Considering the numerous statutory and regulatory requirements in this area, it could be difficult for USPS to make rapid changes to rightsize its network of 36,500 retail facilities. USPS’s Action Plan says that it plans to expand access to retail service and, as customers shift to these new services, that it will reduce redundant retail facilities. However, it is unclear what specific changes would be made, how long it would take to make these changes, and how much annual cost savings could be achieved. USPS’s Action Plan also does not address possible closures of large mail processing facilities to reduce the excess capacity in its mail processing network. A new approach is urgently needed to make the necessary progress in realigning postal operations and networks as USPS’s core business continues to decline. Conducting business as usual is unlikely to produce significant results, particularly in the rapid time frame that would be required to avert massive losses. Thus, it will be important for Congress, USPS, and other stakeholders to reach agreement on the package of actions that should be taken, the desired operational and financial results, and the time frames for implementation. Key questions that need to be addressed include the following: Universal service issues: What, if any, changes are needed—that is, should delivery services be changed (e.g., frequency or standards), and should USPS continue moving retail services out of post offices to alternative locations?
New products and services: What opportunities are there to introduce profitable new postal products and enhancements to existing ones? Should USPS engage in nonpostal areas where there are private-sector providers? If so, under what terms? Realigning operations, networks, and workforce: How should USPS optimize its operations, networks, and workforce to support changes in services; how quickly can this happen; and how can it work with its employees and customers to minimize potential disruption? This is an area where Congress may want to consider an approach similar to that used by the Department of Defense’s Base Realignment and Closure (BRAC) Commission, which was established to realign military installations within the United States. Under the Defense Base Closure and Realignment Act of 1990, the President can either accept or reject BRAC recommendations in their entirety. If the President rejects the recommendations, the BRAC Commission can give the President a revised list. If the President accepts the list of recommendations, it is forwarded to Congress and becomes final, unless Congress enacts a joint resolution of disapproval. Our report on the 2005 BRAC round noted that the Department of Defense viewed this BRAC as a unique opportunity to reshape its installations and realign its forces to meet its needs for the next 20 years. Congress has previously turned to panels of independent experts to assist in restructuring organizations that are facing key financial challenges. These panels have gained consensus and developed proposed legislative or other changes to address difficult public policy issues.
For example, the District of Columbia Financial Responsibility and Management Assistance Authority was established to, among other things, (1) eliminate budget deficits and cash shortages of the District through financial planning, sound budgeting, accurate revenue forecasts, and careful spending; (2) ensure the most efficient and effective delivery of services, including public safety services, by the District during a period of fiscal emergency; and (3) conduct necessary investigations and studies. This organization was suspended in 2001 once relevant legal provisions were met, including achieving a balanced budget for a fourth consecutive year. Establishing a similar commission or control board of independent experts could provide a mechanism to assist Congress in making timely decisions and comprehensive changes to USPS’s business model and operations. A commission of experts may be more appropriate to facilitate the changes needed to achieve financial viability while also considering stakeholder interests. The following questions could assist Congress in developing such a commission: What criteria should be used to select commission members, for example, logistics experience, business restructuring, or labor management expertise? How could the commission best ensure that diverse stakeholder interests are appropriately considered? What would be the time frame of the commission? What goals or objectives should guide the commission—for example, ensuring USPS’s financial viability and recommending policy and management changes? USPS faces daunting financial losses that it projects could total over $238 billion through fiscal year 2020, unless it can substantially reduce its costs, including the size of its operations, networks, and workforce to reflect declining mail volume, and generate new revenues. USPS’s planned actions under its existing authority will not be enough to make it financially viable.
Therefore, Congress, USPS, and other stakeholders need to reach agreement on a package of actions to take so that USPS can become financially viable. This agreement will need to address difficult constraints and legal restrictions that continue to hamper progress. Such an agreement is urgently needed so that Congress and stakeholders have confidence that the actions USPS takes will be fair to all parties. Then USPS could begin to plan and make the necessary changes, some of which may require several years to fully implement and realize potential cost savings. For example, restructuring operations and networks would require coordinated actions involving postal employees, mailers, and the public. To reach agreement on these difficult issues, Congress could engage a panel of independent experts to develop a credible and comprehensive package of specific proposals, including the following: Potential changes related to adapting universal postal services to the declining use of mail, such as removing the statutory requirements for 6-day delivery and restrictions on closing post offices. Changes needed to realign USPS operations, networks, and workforce with its declining workload, and how to address employee and community concerns and resistance to facility closures. Opportunities to improve revenue generation, including whether USPS should be allowed to engage in new nonpostal areas. Due to the urgency of USPS’s deteriorated financial condition and outlook, and the fact that it is rapidly approaching its statutory debt limit, Congress may need to provide financial relief, for example, by revising the funding schedule for retiree health benefits. Another action that Congress could take in the near term, which would have a longer-term impact, would be to modify the collective bargaining process to ensure that any binding arbitration would take USPS’s financial condition into account.
Furthermore, Congress may want assurance through regular reports that any financial relief it provides is met with aggressive actions to reduce costs and increase revenues, and that progress is being made toward addressing its financial problems. Ultimately, Congress may want to consider changing USPS’s ownership structure, but the resolution of these more pressing issues might afford a better understanding of whether the ownership structure should be modified. As communications and the use of the mail evolve, Congress will need to revisit policy issues related to USPS, the services it provides, and how to best position the organization for the future. The current crisis presents the opportunity to act and position this important American institution for the future. If no action is taken, the risk of USPS’s insolvency and the need for a bailout by taxpayers and the U.S. Treasury increases. To address USPS’s financial viability in the short term, Congress should consider providing financial relief to USPS, including modifying its retiree health benefit cost structure in a fiscally responsible manner. Congress should also consider any and all options available to reduce USPS costs, including revising the statutory framework for collective bargaining to ensure that binding arbitration takes its financial condition into account. At the same time, to facilitate making progress in difficult areas, Congress should consider establishing (1) a panel of independent experts, similar to the BRAC Commission, to coordinate with USPS and stakeholders to develop a package of proposed legislative and operational changes needed to reduce costs and address challenges to USPS’s business model and (2) procedures for the review and approval of these proposals by the President and Congress. These proposals could focus on adapting delivery and retail services to declining mail volumes; making postal operations, networks, and workforce more cost-efficient; and generating new revenue.
Congress also should consider requiring USPS to provide regular reports to Congress to ensure that USPS is making progress to improve its financial condition. These reports could include the actions taken to reduce costs and increase revenues, the results of these actions, and progress toward addressing financial problems. USPS provided written comments on a draft of this report by a letter dated April 2, 2010. These comments are summarized below and included in their entirety in appendix II of this report. In separate correspondence, USPS also provided technical comments, which we incorporated as appropriate. USPS stated that it agreed with many key points in our report and with all but one of our matters for congressional consideration. First, regarding revising USPS retiree health benefit funding, USPS said the prefunding requirement urgently needs to be restructured and agreed that it should continue to fund its retiree health benefits obligation to the maximum extent that its finances permit. Second, USPS agreed that Congress should consider revising the statutory framework for USPS collective bargaining to ensure that binding arbitration takes its financial condition into account. Third, USPS agreed that Congress should consider requiring USPS to provide regular reports to ensure that it is making progress to improve its financial condition. However, USPS raised concerns about using a panel of independent experts to develop a package of proposed legislative and other changes, stating that doing so would add a layer of bureaucracy and delay to problems that require immediate attention. We believe that unless Congress and USPS agree on actions to be taken, USPS will not be able to reduce costs enough to close the revenue gap and achieve financial stability. Congress has used such panels to successfully reach agreement regarding other difficult restructuring issues. 
We are sending copies of this report to the appropriate congressional committees, the Postmaster General, the Chairman of the USPS Board of Governors, the Chairman of the Postal Regulatory Commission, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Postal Accountability and Enhancement Act (PAEA) of 2006 required us to report on strategies and options for the long-term structural and operational reform of the United States Postal Service (USPS). Because of USPS’s financial crisis and our assessment that restructuring is urgently needed, our work has been accelerated at the request of Members of Congress and is presented in this report. The objectives of this report are to assess (1) the viability of USPS’s business model, (2) strategies and options to address challenges to USPS’s current business model, and (3) actions Congress and USPS need to take to facilitate progress toward USPS’s financial viability. To assess the viability of USPS’s business model, we relied on our past work, including putting USPS’s financial condition on GAO’s high-risk list in July 2009, and on our testimonies regarding its deteriorating financial condition. We interviewed multiple USPS officials, including the Postmaster General, the Deputy Postmaster General, the former and current Chairman of the Board of Governors, and headquarters and field staff during visits to post offices, mail processing facilities, and other facilities that serve urban and rural areas. 
We reviewed USPS financial and operating information, including its Annual Reports, Integrated Financial Plans, and Comprehensive Statements; other strategic documents, including its transformation plans, Assessment of U.S. Postal Service Future Business Model, action plan released March 2010—entitled Ensuring a Viable Postal Service for America: An Action Plan for the Future (Action Plan)—and the Action Plan’s financial and volume projections; and collective bargaining agreements. We reviewed USPS’s current legal and regulatory framework and relevant congressional testimonies and hearings. We also reviewed the results of retiree health valuations provided to us by the Office of Personnel Management (OPM) in March 2010. OPM’s valuations, which include estimates of future obligations, costs, premium payments, and fund balances, were based on USPS employee population projections. We did not assess the reasonableness of USPS’s population projections or OPM’s actuarial assumptions and methodology. We used OPM’s valuation results to analyze the financial impacts of selected options for funding USPS’s retiree health benefit obligations. We did not assess the validity of USPS’s financial and mail volume projections due to time and resource constraints. Also, we examined reports issued by other postal stakeholders, including the Postal Regulatory Commission (PRC) (particularly its 2008 report on Universal Postal Service and the Postal Monopoly), USPS Office of Inspector General, Congressional Research Service, Congressional Budget Office, the 2003 President’s Commission on the United States Postal Service, and other mailing industry experts.
We also met with PRC commissioners and various staff members; representatives of the four major employee unions and three major management associations (the American Postal Workers Union, National Association of Letter Carriers, National Postal Mail Handlers Union, National Rural Letter Carriers’ Association, National Association of Postmasters of the United States, National League of Postmasters, and National Association of Postal Supervisors); USPS Office of Inspector General; Military Postal Service Agency; members of the mailing industry; other postal stakeholders; and economists. To identify options to address the challenges in the current business model, we reviewed information from many of the sources that we have previously mentioned, including (1) past GAO work, (2) relevant congressional hearings and testimonies, (3) stakeholder studies, and (4) interviews with stakeholders. We then supplemented this information by distributing a list of questions to over 60 organizations to gather their opinions on actions that could be taken to improve USPS’s business model and the potential impacts of these actions. Organizations were selected on the basis of a variety of factors, including those who have testified before Congress on postal issues; submitted comments (1) during the public comment solicitations as part of the work of the 2003 President’s Commission on the United States Postal Service, (2) to PRC on universal service, the postal monopoly, and the new regulatory structure for ratemaking, and (3) to the Federal Trade Commission on differences in the legal status between USPS and its competitors; and have been active participants in various USPS-related activities, including participation in the Mailers’ Technical Advisory Committee (a joint USPS-industry workgroup). 
We also considered the nature of the organization and selected organizations that represented various sections of the postal community, including unions, management associations, private printing and mailing companies, and mailers across various mail segments (e.g., large and smaller mailers, First-Class Mail, Standard Mail, Periodicals, parcels, newspapers, and nonprofit mail). We received responses from 24 mailing associations, 15 private companies, and 4 postal unions and management associations, which is a response rate of about 70 percent. We then gathered and evaluated relevant options on the basis of a variety of criteria, including their potential to reduce USPS costs, realign its operations, and increase revenues, in light of its current and projected financial condition. Some options are consistent with actions we have discussed in our past work—such as optimizing USPS’s retail, delivery, and mail processing networks—while others have been discussed in congressional hearings, regulatory proceedings, and major studies. Other options, some of which would require significant changes to USPS’s legal framework or to current collective bargaining agreements, were selected because they would provide useful context for the key restructuring issues that we have previously described in this report. We did not include every option that we had identified in this report; rather, we present a select listing of options that were based on these criteria. We analyzed each option on the previously mentioned criteria; reviewed available cost and revenue data; and considered potential impacts on various stakeholders, including USPS, employees, mailers, and the public. For reporting purposes, we grouped options according to the following strategies to align costs with revenues: reducing compensation and benefits costs; reducing other operations and network costs and improving efficiency; and generating revenues through product and pricing flexibility.
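The response-rate figure above follows from simple arithmetic. A minimal sketch: because the report says only that "over 60" organizations were surveyed, 60 is used here as a lower bound on the denominator, so the computed rate is an upper bound consistent with "about 70 percent."

```python
# Responses received, by group, as reported above.
responses = {
    "mailing associations": 24,
    "private companies": 15,
    "postal unions and management associations": 4,
}
total_responses = sum(responses.values())  # 43 responses in all

# Assumption: "over 60" organizations were surveyed; 60 serves as a lower
# bound on the denominator, making this an upper bound on the rate.
surveyed_lower_bound = 60
rate = 100 * total_responses / surveyed_lower_bound
print(f"{total_responses} responses from over {surveyed_lower_bound} organizations "
      f"(at most about {rate:.0f} percent)")
```

With a denominator slightly above 60, the rate lands near 70 percent, matching the figure reported.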
Our assessment of certain options related to USPS’s business model, such as in the governance and regulatory areas, was also limited because it is still too soon to see the full impact of the changes from PAEA. Furthermore, we did not address whether USPS’s ownership structure should be altered at this time, but focused instead on the more pressing issues discussed throughout the report. The resolution of these operational issues may afford a clearer understanding of whether USPS’s ownership structure should be modified. We also plan to address the experiences of foreign postal administrations in a separate report. The previously mentioned analysis that we performed was also used as a basis to determine actions that Congress and USPS need to take to facilitate progress toward USPS’s financial viability. We supplemented this analysis with other GAO work on independent commissions and control boards, including the Department of Defense’s Base Realignment and Closure Commission, and the District of Columbia Financial Responsibility and Management Assistance Authority. We conducted this performance audit from August 2009 to April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. In addition to the individual named above, Shirley Abel, Amy Abramowitz, Teresa Anderson, Joseph Applebaum, Gerald Barnes, Joshua Bartzen, William Dougherty, Patrick Dudley, Brandon Haller, Carol Henn, Paul Hobart, Kenneth John, Anar Ladhani, Hannah Laufe, Scott McNulty, Daniel Paepke, Susan Ragland, Amy Rosewarne, Travis Thomson, Jack Wang, and Crystal Wesco made key contributions to this report. U.S. Postal Service: Financial Crisis Demands Aggressive Action.
GAO-10-538T. Washington, D.C.: March 18, 2010.
U.S. Postal Service: The Program for Reassessing Work Provided to Injured Employees Is Under Way, but Actions Are Needed to Improve Program Management. GAO-10-78. Washington, D.C.: December 14, 2009.
U.S. Postal Service: Financial Challenges Continue, with Relatively Limited Results from Recent Revenue-Generation Efforts. GAO-10-191T. Washington, D.C.: November 5, 2009.
U.S. Postal Service: Restructuring Urgently Needed to Achieve Financial Viability. GAO-09-958T. Washington, D.C.: August 6, 2009.
U.S. Postal Service: Broad Restructuring Needed to Address Deteriorating Finances. GAO-09-790T. Washington, D.C.: July 30, 2009.
High-Risk Series: Restructuring the U.S. Postal Service to Achieve Sustainable Financial Viability. GAO-09-937SP. Washington, D.C.: July 28, 2009.
U.S. Postal Service: Mail Delivery Efficiency Has Improved, but Additional Actions Needed to Achieve Further Gains. GAO-09-696. Washington, D.C.: July 15, 2009.
U.S. Postal Service: Network Rightsizing Needed to Help Keep USPS Financially Viable. GAO-09-674T. Washington, D.C.: May 20, 2009.
U.S. Postal Service: Escalating Financial Problems Require Major Cost Reductions to Limit Losses. GAO-09-475T. Washington, D.C.: March 25, 2009.
U.S. Postal Service: Deteriorating Postal Finances Require Aggressive Actions to Reduce Costs. GAO-09-332T. Washington, D.C.: January 28, 2009.
U.S. Postal Service: USPS Has Taken Steps to Strengthen Network Realignment Planning and Accountability and Improve Communication. GAO-08-1022T. Washington, D.C.: July 24, 2008.
U.S. Postal Service: Data Needed to Assess the Effectiveness of Outsourcing. GAO-08-787. Washington, D.C.: July 24, 2008.
U.S. Postal Service Facilities: Improvements in Data Would Strengthen Maintenance and Alignment of Access to Retail Service. GAO-08-41. Washington, D.C.: December 10, 2007.
U.S. Postal Service: Mail Processing Realignment Efforts Under Way Need Better Integration and Explanation. GAO-07-717. Washington, D.C.: June 21, 2007.
U.S. Postal Service: The Service's Strategy for Realigning Its Mail Processing Infrastructure Lacks Clarity, Criteria, and Accountability. GAO-05-261. Washington, D.C.: April 8, 2005.
|
The Postal Accountability and Enhancement Act of 2006 required GAO to evaluate strategies and options for reforms of the United States Postal Service (USPS). USPS's business model is to fulfill its mission through self-supporting, businesslike operations; however, USPS has experienced increasing difficulties. Due to volume declines, losses, a cash shortage, and rising debt, GAO added USPS's financial condition to its high-risk list in July 2009. GAO's objectives were to assess (1) the viability of USPS's business model, (2) strategies and options to address challenges to its business model, and (3) actions Congress and USPS need to take to facilitate progress toward financial viability. GAO primarily drew on its past work; other studies; USPS data; interviews with USPS, unions, management associations, Postal Regulatory Commission, and mailing industry officials; and stakeholder input. USPS's business model is not viable due to USPS's inability to reduce costs sufficiently in response to continuing mail volume and revenue declines. Mail volume declined 36 billion pieces (17 percent) over the last 3 fiscal years (2007 through 2009) with the recession accelerating shifts to electronic communications and payments. USPS lost nearly $12 billion over this period, despite achieving billions in cost savings by reducing its career workforce by over 84,000 employees, reducing capital investments, and raising rates. However, USPS had difficulty in eliminating costly excess capacity, and its revenue initiatives have had limited results. USPS also is nearing its $15 billion borrowing limit with the U.S. Treasury and has unfunded pension and retiree health obligations and other liabilities of about $90 billion. In 2009, Congress reduced USPS's retiree health benefit payment by $4 billion to address a looming cash shortfall, but USPS still recorded a loss of $3.8 billion. Given its financial problems and outlook, USPS cannot support its current level of service and operations. 
USPS projects that volume will decline by about 27 billion pieces over the next decade, while revenues stagnate and costs rise; without major changes, cumulative losses could exceed $238 billion. This report groups strategies and options that can be taken to address challenges in USPS's business model by better aligning costs with revenues (see table on next page). USPS may be able to improve its financial viability if it takes more aggressive action to reduce costs, particularly compensation and benefit costs, which comprise 80 percent of its total costs, and to increase revenues within its current authority. However, it is unlikely that such changes would fully resolve USPS's financial problems unless Congress also takes actions to address constraints and legal restrictions. Action by Congress and USPS is urgently needed to (1) reach agreement on actions to achieve USPS's financial viability, (2) provide financial relief through deferral of costs by revising USPS retiree health benefit funding while continuing to fund these benefits over time to the extent that USPS's finances permit, and (3) require that any binding arbitration resulting from collective bargaining take USPS's financial condition into account. Congress may also want assurance that any financial relief it provides is met with aggressive actions by USPS to reduce its costs and increase revenues, and that USPS is making progress toward addressing its financial problems. USPS's new business plan recognizes that immediate actions are needed, but USPS has made limited progress on some options, such as closing facilities. If no action is taken, the risks of larger USPS losses, rate increases, and taxpayer subsidies will grow. To facilitate progress in these difficult areas, Congress could set up a mechanism, similar to the military Base Realignment and Closure Commission, through which independent experts could recommend a package of actions with time frames.
Key issues also need to be addressed, including what changes, if any, should be made to delivery or retail services; whether to allow USPS to provide new products or services in nonpostal areas; and how to realign USPS operations, networks, and workforce.
|
The District of Columbia Court Reform and Criminal Procedure Act of 1970 established the D.C. courts in their present form. The courts consist of the D.C. Superior Court, the D.C. Court of Appeals, and the D.C. Court System. Judges of the D.C. courts are appointed by the President and are subject to confirmation by the Senate. The D.C. Superior Court has general jurisdiction over virtually all local legal matters. It consists of 59 active full-time judges, several senior judges who work part-time, and 15 hearing commissioners who exercise limited judicial functions. The D.C. Superior Court has several divisions that process and dispose of cases, including divisions for civil, criminal, probate, and family cases. The court also has divisions that do not process cases but provide alternative dispute resolution services and handle the juvenile probation function. There are also divisions that perform support functions for the court, such as the personnel division. The D.C. Court of Appeals is the highest court in the District of Columbia. It has nine active full-time judges and several senior judges serving part-time, who usually sit in three-judge panels. Appeals from the D.C. Court of Appeals are taken to the U.S. Supreme Court. The D.C. Court System does not process cases but provides administrative services to the Superior Court and Court of Appeals, including fiscal services, education and training, data processing, personnel management, and court reporting. A Joint Committee on Judicial Administration governs the D.C. courts. The Chief Judges of the D.C. Superior Court and D.C. Court of Appeals (both designated by the D.C. Judicial Nominations Commission from among the active judges for a 4-year term) serve on this committee, with the Chief Judge of the Court of Appeals serving as committee chair.
In addition, another Court of Appeals judge elected by the Court of Appeals judges, and two Superior Court judges elected by their colleagues, serve on the Joint Committee. The Joint Committee appoints an executive officer who serves at the pleasure of the Joint Committee. The executive officer is responsible for the administration of the courts and can appoint and remove, with the consent of the Joint Committee, all D.C. court personnel (including the Clerks of the Superior Court and Court of Appeals) except for the judges’ law clerks and secretaries and the D.C. Register of Wills. Until fiscal year 1998, the D.C. courts’ budget was submitted by the Joint Committee on Judicial Administration, through the D.C. Mayor and Council, to the President and Congress. The budget was forwarded by the Mayor and Council without revision but subject to recommendations. The D.C. Revitalization Act of 1997 (P.L. 105-33) changed this process so that the Joint Committee now submits its budget directly to the Office of Management and Budget, and the Courts’ estimates are included in the President’s budget submission to Congress, without revision but subject to the President’s recommendations. Our objectives were to provide information on staffing and workload levels for the D.C. courts from 1989 through 1998, assess how the D.C. courts evaluate the sufficiency of their nonjudicial case processing staff levels, and compare the D.C. courts’ methodologies to other available methodologies. For the purpose of this review, we defined staff as personnel who perform case processing and disposition functions for the D.C. Superior Court and the D.C. Court of Appeals, such as clerks, bailiffs, court reporters, administrators, and so on. This definition does not include judges or their law clerks and secretaries. To achieve the first objective, we obtained from the courts copies of their annual reports from 1989 through 1998, which contain workload data. 
We obtained staffing level data for 1989 through 1998 from the Executive Office of the D.C. Courts. We did not independently verify data obtained from the courts. To achieve the second objective, we obtained relevant reports and documents from the D.C. courts and interviewed the clerks of the Superior Court and the Court of Appeals and the acting personnel director for the D.C. courts. We subsequently sent a letter to the Chief Judges of the Superior Court and Court of Appeals, asking for a statement of how staff levels were determined and for a statement of the courts’ position concerning the possibility of a databased study of staffing levels. We received separate replies from the Chief Judges of the Superior Court and Court of Appeals, the contents of which are discussed in this report. We also surveyed a representative sample of D.C. court employees in February 1999 on their perceptions of personnel management in the D.C. courts; several of the questions in the survey referred to staffing. To achieve the third objective, we held discussions with officials of AOUSC and reviewed documents provided by these officials. We also discussed state court staffing with officials of NCSC, which is a clearinghouse for state court information and which provides consulting, conference, and educational services to the state courts. We obtained documents from NCSC concerning its methodology for state court staffing reviews and information on actual reviews done in several states. We did our work from January through June 1999 in accordance with generally accepted government auditing standards. We requested comments from the Joint Committee on Judicial Administration of the D.C. courts. The comments are discussed near the end of this letter and reprinted in appendix I. As shown in table 1, staffing levels in the D.C. 
courts, excluding judges and their law clerks and secretaries, as measured by full-time equivalents (FTE), were 5.7 percent lower in fiscal year 1998 than in fiscal year 1989. There was about a 10 percent increase in FTE levels for 1990 compared with 1989. FTE levels fluctuated between 1,231 and 1,187 during the period from fiscal year 1990 through fiscal year 1997. There was a decrease of about 11 percent in FTE levels in fiscal year 1998. FTEs in the Superior Court increased by approximately 10 percent between fiscal years 1989 and 1990; declined, with some fluctuations, by about 6 percent through fiscal year 1997; then decreased by about 14 percent in fiscal year 1998. According to a court official, most of the increase in FTEs in 1990 was due to staff associated with eight additional judgeships that were filled in that year. Much of the 1998 decrease in FTEs was attributed to the removal from the courts of the responsibility for adult probation by the D.C. Revitalization Act of 1997. The FTEs for the Court of Appeals increased by approximately 21 percent between fiscal years 1989 and 1990 and then remained relatively constant thereafter. The stability in FTEs after 1990 was associated with the institution of a case management system in 1990 that was aimed at enhancing the efficiency of processing appeals. The Court System, although not directly involved in case processing or disposition, was the only part of the courts to show FTE increases during almost all of this period, with the fiscal year 1998 FTE level 50.6 percent above that of 1989. The rise in staffing levels in the Court System, according to the Executive Officer of the D.C. courts, was due to an increase from 51 to 59 Superior Court judges in 1990, the assumption by the courts (from the D.C. Department of Administrative Services) of responsibility for janitorial services in court buildings in 1993, and the assumption by the courts (from the D.C. 
Department of Public Works) of responsibility for all maintenance of court buildings in 1996. Table 2 shows the workload of the Superior Court during the calendar year period from 1989 through 1998. Cases available for disposition in the Superior Court decreased 2.8 percent during this period, while cases pending increased 36.5 percent. The overall workload statistics shown in table 2 are combinations of different types of cases, and the mix of case types in the workload can vary over time. For example, of cases filed in 1989, 13.0 percent were felony cases, and 15.9 percent were misdemeanor or traffic cases. In 1998, 5.8 percent of filings were felony cases, and 19.9 percent were misdemeanor or traffic cases. Court officials pointed out several factors that have the potential of affecting their pending caseload, in addition to changes in the mix of cases over time. For example, the use of therapeutic case processing alternatives, such as those used in drug court or with domestic violence cases, may extend the life of a case on a pending caseload while they also result in the rehabilitation of the offender. The caseload of the D.C. Court of Appeals significantly increased during this same period, as shown in table 3. The total number of Court of Appeals cases available for disposition in 1998 was 18.3 percent greater than in 1989, and the number of pending cases at year-end was 18.0 percent greater. In 1989, 42.3 percent of the court’s new filings were criminal cases, and 32.1 percent were civil cases; of 1998 filings, 39.4 percent were criminal, and 23.2 percent were civil. In both years, the balance of the cases fell into a number of categories, including family or agency proceedings. Overall appeal filings were 28 percent higher in 1998 than in 1989. In three reports issued from 1980 through 1990, we recommended to certain federal agencies increased emphasis on workforce planning and set forth the basic elements of such planning. 
Among these elements was that of identifying the number of employees needed to accomplish agency goals. We specifically identified collecting and analyzing data on staff time required to fulfill agency goals as an important element of workforce planning. We asked D.C. court officials how they evaluate staffing needs. Officials of the D.C. courts said that workload data are used to assess the staffing needs of the courts. The Chief Judge of the Superior Court said that he monitors the court’s case inventory and case processing and relies on actual observation of service delivery and review of customer complaints and compliments. According to the chief judge, if customers are being “courteously, fairly, accurately, and expeditiously serviced,” the court’s workforce is considered appropriate for its needs. The chief judge gave several examples of such assessments. One example referred to the Felony Branch of the Criminal Division, for which filings and dispositions had declined from 1994 through 1998, and cases pending and the backlog had decreased in the same period. According to the chief judge, this implied that case processing is within acceptable limits and that therefore the branch’s workforce is appropriate. The chief judge also noted, however, that other branches’ backlogs have increased, indicating that their staffing was not sufficient. He cited as an example the Misdemeanor and Traffic Branch of the Criminal Division, for which the backlog increased substantially, from 1996 through 1998, although case filings and dispositions went up and down in volume from year to year. The Clerk of the Superior Court told us that decisions on changes in staffing from the previous year are made separately for each division of the court. If a division director indicates more staff is needed, the clerk will try to meet those needs by moving members of the workforce around between working units while maintaining the same overall workforce numbers. 
The clerk said that he will usually approve new hiring only for a new function or program. The clerk is to send staffing recommendations to the executive officer and the Joint Committee on Judicial Administration for inclusion in the budget. Similar to the Chief Judge of the Superior Court, the Chief Judge of the Court of Appeals also said that the court uses indicators such as case filings, number and types of dispositions, cases pending, time involved in various stages of the process, and general mix of cases. With these data, according to the chief judge, the clerk and staff of the court identify staffing needs and possible management efficiencies. The Clerk of the Court of Appeals said that, as with the Superior Court, staffing decisions are made on a functional basis for each division of the court, rather than for the court as a whole. We also obtained the perspective of D.C. court employees concerning court staffing. As part of our overall review of personnel practices in the D.C. courts, we mailed out a questionnaire in February 1999 to a random, representative sample of court employees to get their views on the courts’ personnel practices. More than 70 percent of those employees in the sample who were working for the courts answered our questionnaire. In addition to their perceptions regarding selected personnel practices, we asked for their views on the adequacy of staffing levels. We estimated that about 40 percent of the courts’ employees would agree or strongly agree that their work unit had a sufficient number of employees to do its job and about 49 percent would disagree or strongly disagree, while the remaining percentage would neither agree nor disagree, not be sure, or have no basis to judge. A similar question was asked of federal employees in a 1996 governmentwide survey. 
About 43 percent said their work unit had a sufficient number of employees, while about 47 percent disagreed or strongly disagreed and the remaining percentage neither agreed nor disagreed. The federal court system and some state court systems use formulas based on court workload data to determine satisfactory levels of staffing and resources. The federal court system uses a databased system to distribute resources to the U.S. Appellate, District, and Bankruptcy Courts, and the Probation and Pretrial Services Offices. Prior to the adoption of this system, these courts had received resources on the basis of total number of people employed and cases heard in a given year. However, court administrators came to realize that this system did not distribute resources efficiently because it did not take into account that certain types of court activities will take more time and resources than other types of activities. The system is based on a large number of various workload statistics, which are fed into “work measurement formulas.” Based on the statistics and the formulas, a specific number of “work units” are allocated to each federal judicial district and circuit. Such work units consider tasks associated with the nature and types of cases, given a standard rate of efficiency (how much time and resources such cases should take). Each work unit is equal to a certain amount of money, the exact amount depending on overall budget levels. The managers of each court can use their allocation to hire or retain staff or to make capital purchases to improve court productivity. NCSC has developed and promulgated a “weighted caseload” system. Under this system, officials determine how much time is taken up by different types of cases in the court or courts under study. 
The officials then determine how much judge or staff time is taken up by the court’s caseload as “weighted” by the time factor and compare total judge or staff years, as calculated, with the actual judges or staff available. How much time is taken up by a certain type of case can be determined by actual measurement of cases in court or by the “Delphi” method of getting judges, staff, or outside experts to estimate the length of time certain cases would take. For example, if a court had five full-time staff members (and thus 5 staff years), and it was determined that the court’s annual workload would take 7 staff years to complete, there would be a need for two additional staff members. However, it could also be found that the court’s workload required only 4 staff years, in which case there would be one more staff member than needed. This system was developed to give state government decisionmakers an independent, objective way to evaluate the need for court personnel, based on the actual amount of time and resources different court activities should take. It has been used in at least 11 states, in most instances to determine how many judges are needed in a state. However, the method has been used to determine appropriate levels of court staff in New Jersey, Colorado, and Kentucky. In Colorado, a weighted caseload study led to the addition of 30 to 40 court staffers statewide. In Kentucky, NCSC found that the level of case processing staff was appropriate in rural counties but that urban areas may need more staff. In promulgating this method and advocating its use, NCSC acknowledges that decisions on the size of a court or court staff cannot, and should not, be based solely on results obtained by a statistical model. Data from the model, according to NCSC, must be interpreted in a social, cultural, and political context, and factors peculiar to each court and court circuit should be considered. We asked the D.C. 
courts for their views on having a databased analysis done on the staffing of the courts, what advantages or disadvantages there would be in such an analysis, and whether there were features and qualities of the D.C. courts that would preclude such an analysis. The Chief Judge of the Superior Court had no objection to databased analysis of the court’s support staff, based on the work protocols used in the federal courts, provided that the actual instrument used was developed under the court’s supervision. He believed such an analysis could be helpful to more accurately measure the appropriate workforce for divisions that did not process cases and to check the present case processing measurement used in the clerk’s office. The Chief Judge of the Court of Appeals, in her response, indicated that NCSC has had a great deal of experience with methodologies that might be effectively used for such a study and, therefore, is in a better position to identify the advantages and disadvantages of each. According to the chief judge, the primary determinants for a court system in doing such studies are resource related, and such studies seem to address discrete components of the court. The chief judge pointed out that the D.C. courts were a two-tiered system as opposed to the three-tiered system that exists in most states. She said that whether there are unique features that would impede such an analysis in the D.C. courts might depend upon which area of the courts is selected for study. The chief judge added that, given the small size of the Court of Appeals, the cost of any study of it would likely outweigh the benefits. While D.C. court officials apparently consider caseload data in judging whether staffing levels are adequate, they have not measured the amount of time required by case processing staff to process differing types of cases nor used such data to determine whether the size of the courts’ workforce is inadequate, adequate, or excessive for the work of the courts. 
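The weighted caseload arithmetic described above can be sketched in a few lines. This is only an illustration, not the NCSC instrument itself: the case types, per-case staff-hour weights, filing counts, and hours-per-staff-year figure below are all hypothetical values chosen to show the mechanics (in practice the weights would come from time measurement or Delphi estimation).

```python
# Hypothetical staff-hours of case processing time per case, by case type.
CASE_WEIGHTS = {
    "felony": 6.0,
    "misdemeanor": 1.5,
    "civil": 3.0,
}

# Assumed productive staff-hours in one staff year.
HOURS_PER_STAFF_YEAR = 1600.0

def staff_years_needed(annual_filings):
    """Convert annual filings per case type into required staff years."""
    total_hours = sum(
        CASE_WEIGHTS[case_type] * count
        for case_type, count in annual_filings.items()
    )
    return total_hours / HOURS_PER_STAFF_YEAR

def staffing_gap(annual_filings, current_staff):
    """Positive result: additional staff needed; negative: staff surplus."""
    return staff_years_needed(annual_filings) - current_staff

# Illustrative caseload: weighted workload is
# (400*6 + 3200*1.5 + 800*3) / 1600 = 6.0 staff years.
filings = {"felony": 400, "misdemeanor": 3200, "civil": 800}
need = staff_years_needed(filings)              # 6.0 staff years
gap = staffing_gap(filings, current_staff=5)    # 1.0 more staff year needed
```

This mirrors the report's example: a court with 5 staff years on hand and a weighted workload of 7 staff years needs two more staff members, while a workload of 4 staff years leaves one member more than needed. As NCSC cautions, such results are a starting point for judgment, not a substitute for it.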
Workload and formula-based methods of assessing the adequacy of case processing staffing levels do exist and are in use in the federal and state courts. Whether the D.C. courts have too many or too few case processing staff for their caseload is a question that cannot be answered without a systematic study. The Chief Judge of the Superior Court believed that such a study employing the work measurement protocol used in the federal judiciary could be helpful. The Chief Judge of the Court of Appeals questioned whether the small size of the appeals court would make a study of the court cost-beneficial. While we believe that the results of such a study could serve as a baseline for the courts and Congress to evaluate court staffing in the future, we recognize that in planning such a project the D.C. courts should weigh potential costs and benefits in determining the project’s scope, including the components that should or should not be covered. We recommend that the D.C. courts review the amount of time required to process different types of cases and analyze other elements of the courts’ workload to determine what staffing levels are sufficient to process the D.C. courts’ caseload. We recommend that, before planning or implementing such a review (including selecting components to be covered and balancing costs and benefits), the courts consult with others who have used workload-based methodologies to evaluate court case processing staffing levels. The Chair of the Joint Committee on Judicial Administration of the D.C. Courts provided the courts’ written comments on a draft of this report. The courts said that they had no objection to conducting a study of staffing levels using a workload-based methodology, provided that funds are available. They asked that we include a request for such funds in our recommendation. 
The courts also pointed out that the methodology employed by NCSC, as explained in the draft, has been used in only three states and suggested that we make clear that the courts use a number of factors in assessing the need for staff, most of which are used in other jurisdictions. We recognize that the determinants for doing such a study include resource issues. Once the necessary planning for such a study is completed, including determining its scope, we believe it would be appropriate for the courts to consult with Congress on the matter of funding. To the best of our knowledge and that of NCSC, only three states have done such a study. However, this is not to say that other jurisdictions have not conducted databased studies to assess the adequacy of their staffing levels using similar methodologies. We did not examine that issue. Our objective was to point out that workload-based methodologies exist. The courts said that our description of its methodology for determining staffing needs was inaccurate, apparently because we did not refer to internal budget requests that are prepared by court divisions. We did not do so because the budget requests dealt with staffing needs only in an incremental manner. Where division managers requested individual staff, they did so to meet an increase in workload from the previous year, such as when the court took on a new statutory responsibility; in these cases, there was no attempt to review whether a division’s staffing level was appropriate for its overall workload. Our depiction of the courts’ methodology for assessing the adequacy of staffing levels was based on descriptions provided by officials of the Superior Court and Court of Appeals. The courts took exception to our reference in the draft report to previous reports issued by GAO recommending increased emphasis on workload planning by federal agencies. 
The courts believed that our reference to these reports is misleading and gives the impression that the courts were mandated to follow these earlier recommendations. The courts were not mandated to follow these earlier recommendations. We believe these reports provide useful context for our recommendations, and we specifically point out in the text of this report that the recommendations were made to certain federal agencies. The courts elaborated on the staffing and workload data provided in the draft report. They also provided technical clarifications as well as suggestions for the report’s presentation. We incorporated these as appropriate. We are providing a copy of this report to Representatives C.W. Bill Young, Chairman, and David Obey, Ranking Minority Member, Committee on Appropriations; Delegate Eleanor Holmes Norton, Ranking Minority Member, Subcommittee on the District of Columbia, Committee on Government Reform; Senators Kay Bailey Hutchison, Chairwoman, and Richard Durbin, Ranking Minority Member, Subcommittee on the District of Columbia, Committee on Appropriations; Senator George Voinovich, Chairman, Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Committee on Governmental Affairs; and Representative Julian Dixon. We are also providing copies to the District of Columbia courts, the National Center for State Courts, and the Administrative Office of the U.S. Courts. We will make copies available to others on request. If you have any questions, please call me at (202) 512-8676. Key contributors to the report included Richard W. Caradine, Domingo D. Nieves, and Steven J. Berke. The following are GAO’s comments on the letter from the Chair of the Joint Committee on Judicial Administration of the D.C. Courts, dated August 17, 1999: 1. The courts recommended a change in the title of our report. We believe that the title is objective and accurately reflects our conclusions and recommendation. 2. 
The courts noted an inconsistency between our characterization of the D.C. Court of Appeals caseload on page 1 and elsewhere in the body of the report. We have corrected page 1. 3. The courts suggested that wording used in the body of the report to discuss Court of Appeals staffing methodology be reproduced in the Results in Brief. The purpose of Results in Brief is to summarize the discussion in the body of the report; replicating the entire discussion in the body would be redundant. 4. The courts made a number of suggestions for technical revisions in the Background section of our report. We have made these revisions in the Background section of our report where appropriate. 5. The courts provided FTE data exclusive of judges and their law clerks and secretaries. We have adjusted table 1 on page 5 to reflect this data. 6. The courts commented that our presentation of workload data did not present a complete picture in that it did not reflect the causal factors that account for increased backlog. We included some of the courts' explanation on page 6. 7. The courts suggested we include data on appeal filings in our presentation of Court of Appeals caseload data. We have done so in the text on page 7. 8. The Courts said that our recommendation should include a request that funds be appropriated to cover the cost of a databased study. We address this issue on page 12.
|
Pursuant to a congressional request, GAO provided information on personnel management in the District of Columbia (D.C.) courts, focusing on: (1) staffing and workload levels for the courts from 1989 through 1998; (2) how the courts evaluate the sufficiency of the levels of nonjudicial staff who work on processing and disposition of cases; and (3) a comparison of the D.C. courts' staffing methodology to other available methodologies. GAO noted that: (1) overall staffing levels in the D.C. courts increased between 1989 and 1990, declined slightly with some fluctuations through 1997, and then decreased below the 1989 level in 1998; (2) cases available for disposition increased slightly during this time in the D.C. Superior Court, the largest part of the courts, while its backlog increased substantially; (3) the cases available for disposition, and the backlog, of the far smaller D.C. Court of Appeals increased steadily over this period; (4) in both courts, the mix of different types of cases has changed over this period; (5) District of Columbia court officials said that they consider caseload data, along with other data, in judging whether staffing levels are appropriate; (6) according to court officials, staffing decisions are made on a year-by-year basis and are made individually for each division of the Superior Court and Court of Appeals; (7) the courts' methodology does not provide a comprehensive review of what staffing levels should be because it does not consider the amount of staff time and resources that are needed for case processing; (8) caseload trends alone do not show whether a unit is overstaffed or understaffed because they do not account for how much time is needed to process differing types of cases or for productivity improvements; (9) methodologies that consider the amount of staff time and resources required to process different types of cases in determining the sufficiency of staffing levels do exist; and (10) the National Center for State 
Courts has devised a database system to determine staffing levels needed for a given workload, and the Administrative Office of the U.S. Courts uses a database system to distribute resources among the federal courts.
|
In his March 5, 1997, directive to heads of executive departments and agencies, President Clinton noted that firearms claimed the lives of children daily and were the fourth leading cause of accidental deaths among children ages 5 to 14. In order to have the federal government serve as an example of gun safety, the President required that a safety lock device be provided with each handgun issued to federal law enforcement officers, in part, to reduce unauthorized use of handguns and protect children from injury and death. In May 1997, the White House issued a memorandum to all chiefs of staff clarifying that the directive covers all firearms, not just handguns, issued to federal law enforcement officers. Generally, before the presidential directive was issued, many federal agencies, such as DEA, FBI, and INS, already had developed firearms safety policies and were training their employees on how to properly secure their firearms when not in use. Further, the curriculum at the Federal Law Enforcement Training Center specifically addressed the need for officers to use caution in securing firearms at home when children are present. Also, FBI and ATF officials told us that before the 1997 directive, their agencies were purchasing and distributing safety lock devices for all handguns issued to their agents. Similarly, National Park Service and Postal Inspection Service officials told us that before the 1997 directive, their agencies were providing—at the discretion of local management—lock boxes for home storage of firearms. The 1997 presidential directive did not mandate use of a particular brand or one specific type of safety lock device. However, the directive did require that when properly installed on a firearm and secured by means of a key or combination lock, the device should prevent the firearm from being discharged. The directive also allowed for locking mechanisms that are incorporated into the design of firearms rather than having to be attached. 
At the time of our review, issuances of firearms to law enforcement officers by Justice, Treasury, and the National Park Service collectively totaled approximately 72,000 weapons. As presented in table 1, this estimate consisted of about 61,000 handguns and about 11,000 shoulder weapons, such as shotguns and rifles. Also, as noted in the table, this estimate represented only those firearms that had been issued to officers and that could remain in their possession at all times, including at home. Thus, for example, the numbers shown in table 1 do not include firearms issued by the Bureau of Prisons, an agency whose weapons are available for official use during work schedules but are otherwise usually retained in secure agency facilities. In addition to agency-issued firearms, table 1 shows that Justice, Treasury, and the National Park Service have authorized for official use an estimated total of at least 26,000 personally owned firearms. According to the agency representatives we contacted, almost all of this total consists of handguns. The Postal Inspection Service, as table 1 shows, has issued an estimated 2,000 handguns to inspectors and authorized for official use an additional estimated 400 personally owned handguns. The Postal Inspection Service said it maintains all shoulder weapons in secure agency facilities when not in use. The total does not include firearms used by uniformed police officers who, after completing a daily tour of duty, leave their firearms in secure agency facilities. The Capitol Police, as table 1 shows, has issued about 1,100 handguns to officers but said it maintains all shoulder weapons in secure facilities when not in use. The Capitol Police does not authorize for official use any personally owned firearms. The Administrative Office of the U.S. Courts did not have aggregate data showing the number of firearms issued to or authorized for use by probation and pretrial service officers in the 94 federal judicial districts. 
The following two sections, respectively, discuss (1) implementation of the presidential directive by Justice, Treasury, and the National Park Service; and (2) voluntary implementation of firearm safety lock programs by the Postal Inspection Service, the Capitol Police, and federal judicial districts, which are not subject to the presidential directive. Justice, Treasury, and the National Park Service have developed and taken actions to communicate firearms safety lock policies. These organizations require that safety lock devices be provided with each permanently issued firearm. For example, according to knowledgeable officials, new agents are provided a safety lock device concurrently with issuance of a primary duty firearm. In addition, Justice, Treasury, and the National Park Service require that personally owned firearms authorized for official use be equipped with safety lock devices. Generally, these organizations do not require safety lock devices for firearms that are maintained in secure agency facilities and not taken home by employees. For example, except for the Marshals Service, Justice components do not require safety lock devices for firearms that are used for official daily duty tours or specific operations and returned to vaults or other secure agency facilities when not in use. These organizations do require, however, that firearms be equipped with safety lock devices when taken into home environments. At the time of our review, Justice, Treasury, and the National Park Service had updated, or were in the process of updating, their firearms policy manuals to reflect safety lock requirements. In addition, these organizations have used various means to inform their employees of applicable firearms safety lock policies. Initially, for instance, these organizations issued memoranda or other communications to field units informing them of the presidential directive. 
Subsequently, the organizations have included a section on safety lock devices in the firearms training provided to new agents. Also, some agency officials said that use of safety lock devices may be covered during the firearms requalification testing periodically required for agents. Further, we found that the firearms safety training course at the Federal Law Enforcement Training Center includes specific instruction on the presidential directive and safety lock devices. Our review of procurement records and other documents indicates that Justice and Treasury have purchased a sufficient number of safety lock devices for the total number of firearms issued to individual employees. Regarding the National Park Service, except for documentation of 200 safety locks purchased for new hires, we found that organizationwide safety lock device procurement documentation was not centrally available. However, headquarters officials told us that they had contacted several large parks, whose managers reported that a sufficient number of safety lock devices had been purchased for their employees. As table 2 shows, the executive branch organizations subject to the presidential directive we reviewed have purchased over 109,000 safety lock devices. Most frequently, universal safety lock devices—which can be used on a variety of firearm types—were purchased, at a unit cost ranging from about $4.00 to $10.00. Also, Justice, Treasury, and National Park Service officials told us that their respective organizations tested the safety lock devices before purchasing them to ensure that (1) discharge was prevented when the devices were properly attached, and (2) the devices could not be easily removed without having a key or knowing the combination. Knowledgeable Justice and Treasury officials told us that safety lock devices generally have been issued to all appropriate employees according to the respective organization’s policies. 
The National Park Service had no centrally available, organizationwide data on its firearms safety lock program. However, as indicated above, headquarters officials told us they contacted several large parks, whose managers reported that safety lock devices had been distributed to their employees. Justice, Treasury, and National Park Service officials told us that their distributions of safety lock devices were accompanied by written instructions for properly attaching the devices to applicable firearms. Our review of safety lock program documentation confirmed that written instructions were available at all of the organizations we contacted. An implementation issue that reflects differences among executive branch organizations is the funding of safety lock devices for personally owned firearms authorized for official use. Within the Justice Department, for example, DEA’s policy is to require its officers to pay for safety locks for these firearms; INS purchases safety locks for personally owned firearms authorized for use by its officers. Also, in September 1998, the FBI made a funding policy decision to pay for safety locks for such firearms used by its officers. As table 2 shows, the FBI has by far the largest estimated number (20,000) of personally owned firearms authorized for official use. Treasury components that authorize personally owned firearms for official use purchase safety lock devices for these firearms. The National Park Service requires employees to pay for safety lock devices for these firearms. The Postal Inspection Service—a law enforcement component of the U.S. Postal Service, an independent establishment of the executive branch—is not subject to the presidential directive, according to the White House Domestic Policy Council. However, the Postal Inspection Service voluntarily initiated a safety lock program in response to the presidential directive. 
As of August 1998, the Postal Inspection Service employed about 2,200 inspectors and about 1,400 uniformed police officers. The agency requires safety locks for all permanently issued firearms and personally owned firearms authorized for official use by its inspectors. The agency does not require safety locks for the firearms used by its uniformed police officers because these weapons are maintained in secure agency vaults and are not taken home. The agency initially issued a memorandum informing its employees about the safety lock program and currently is updating its firearms policy manual to reflect safety lock requirements. At the time of our review, the Postal Inspection Service had purchased 2,500 safety locks, which represent a sufficient number for all permanently issued and personally owned firearms authorized for official use by its inspectors. According to the Inspector-in-Charge of the agency’s training academy, safety lock devices have been issued with written instructions to all inspectors. Also, the Inspector-in-Charge said that agency staff (1) discuss safety lock policy during semiannual firearms requalification tests; and (2) ensure, during annual personal property inventory checks, that each inspector has an appropriate number of firearms safety locks. The Capitol Police, a legislative branch agency, is not subject to the presidential directive. However, the Capitol Police has voluntarily begun its own firearms safety efforts, in part, in response to the presidential directive. At the time of our review, the agency was in the process of updating its firearms policy and purchasing keyed safety lock boxes—at a cost of about $25 each—for home storage of firearms issued to its approximately 1,100 police officers, according to the Deputy Chief of the Administrative Services Bureau. Federal district courts, which, as part of the judicial branch, are not subject to the presidential directive, employ probation and pretrial service officers. 
As of August 1998, the 94 federal judicial districts employed a total of 3,805 probation officers and 574 pretrial service officers, according to a senior official at the Administrative Office of the U.S. Courts. The Administrative Office has provided the 94 judicial districts guidance on the use of firearms, including a suggestion that safety lock devices be used. However, according to the Administrative Office, each district has the discretion to establish its own firearms policy, including whether to authorize probation and pretrial service officers to carry firearms. In response to our inquiry, the Administrative Office had no readily available summary or overview of all districts’ current policies and practices. However, a senior Administrative Office official noted that as of July 1998, at least 10 of the 94 federal judicial districts prohibited officers from carrying firearms. We provided a draft of this report for review and comment to the Departments of Justice and the Treasury, the National Park Service, the Postal Inspection Service, the Capitol Police, and the Administrative Office of the U.S. Courts. We received comments during the period September 2 to 14, 1998. Generally, the various agencies provided technical comments and clarifications, which have been incorporated in this report where appropriate. We received comments from the Departments of Justice and the Treasury, which indicated that the draft was reviewed by representatives from the Bureau of Prisons, DEA, FBI, INS, the Marshals Service, ATF, the Customs Service, IRS, the Secret Service, and the Department of the Treasury’s Office of Enforcement. The Departments generally concurred with the substance of the report. Officials from one Treasury component, the Customs Service, believed that we had not given enough credit for their efforts to account for firearms because we allowed other agencies to provide estimates of firearms data. 
The Customs Service noted that it and some other agencies had developed and maintained life-cycle accountability over firearms via automated or other tracking systems. We agree that all agencies should be able to account for the firearms they use but, given our short time frame, some agencies were unable to provide specific data on the number of firearms that may be taken home by officers. However, these agencies did provide a credible basis for their estimates and said that they could have provided exact data if given more time to respond to our request. Generally, we believe that the report adequately identifies and caveats the sources of firearms data presented. We received comments from the Deputy Chief Ranger, National Park Service headquarters; the Deputy Chief Inspector (Administration), Postal Inspection Service; the Deputy Chief of the Administrative Services Bureau, Capitol Police; and the Chief of the Office of Program Assessment, Administrative Office of the U.S. Courts, all of whom agreed with the substance of the report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of committees and subcommittees with jurisdiction over law enforcement issues; the Attorney General; the Secretary of the Treasury; the Postmaster General, U.S. Postal Service; the Director, National Park Service; the Director, Administrative Office of the U.S. Courts; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix II. Please contact me on (202) 512-8777 if you or your staff have any questions. 
As agreed with the requesters’ offices, our objectives were to determine (1) the number of firearms currently issued to or used in an official capacity by employees at selected federal law enforcement organizations and (2) how selected federal organizations have implemented President Clinton’s March 1997 directive regarding firearm safety locks. This directive, and a subsequent clarifying memorandum, required executive departments and agencies to (1) develop and implement a policy requiring that safety locks be provided for all firearms issued to federal law enforcement officers, (2) inform all such officers of the policy, and (3) provide instructions for the proper use of safety locks. To address these objectives, we obtained firearm and safety lock data from five Justice Department components—the Immigration and Naturalization Service (INS), the Bureau of Prisons, the Federal Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), and the U.S. Marshals Service; four Treasury Department components—the U.S. Customs Service, the Internal Revenue Service (IRS), the U.S. Secret Service, and the Bureau of Alcohol, Tobacco and Firearms (ATF); and the National Park Service. Each of these 10 executive branch organizations, according to a 1996 survey by the Bureau of Justice Statistics, employed 1,000 or more law enforcement officers (see table I.1). Overall, as the table shows, the law enforcement officers in these organizations make up about 82 percent of the total number of civilian law enforcement officers in the federal workforce. At the behest of the requesters, we also obtained data, when available, on the number of firearms issued or authorized by three federal organizations (each with 1,000 or more law enforcement officers) not subject to the presidential directive and their safety lock policies and practices. These organizations were the Postal Inspection Service (an enforcement component of the U.S. 
Postal Service, an independent establishment of the executive branch); the U.S. Capitol Police (legislative branch); and, collectively, the federal district courts (judicial branch). Thus, by including these three organizations in the scope of our work, along with Justice, Treasury, and the National Park Service, we encompassed in our review all organizations that employ 1,000 or more federal civilian law enforcement officers and, thereby, covered about 92 percent of the total number of federal civilian law enforcement officers. (See table I.1.)

Table I.1: Number of Federal Civilian Law Enforcement Officers by Agency (as of June 1996)

Notes: The Bureau of Justice Statistics (1) defined a federal law enforcement officer as any full-time employee having the authority to make arrests and carry firearms and (2) excluded law enforcement officers serving in foreign countries or U.S. territories and those employed by the U.S. Coast Guard and the U.S. Armed Forces. Among others, these agencies include Justice and Treasury components, such as Offices of Inspector General, the U.S. Mint, and the Bureau of Engraving and Printing, that each employ fewer than 1,000 law enforcement officers.

To obtain information on the number of firearms issued to law enforcement officers in Justice, Treasury, the National Park Service, the Postal Inspection Service, the U.S. Capitol Police, and federal district courts, we reviewed relevant available records and databases and/or interviewed headquarters officials knowledgeable about their respective organizations’ firearms inventories, databases, policies, and practices. We requested the organizations to break down their firearms data by gun type, i.e., handguns and shoulder weapons (e.g., rifles and shotguns). 
We focused specifically on the number of firearms issued to individual employees—i.e., firearms that can remain in the employees’ possession at all times and be taken home—rather than firearms that are available for official use during work schedules or specific operations but otherwise retained in field office vaults or other secure facilities. When specific data regarding issuances of firearms to individual employees were not available, we requested and obtained estimates based on the organization’s firearms policies and practices, number of current enforcement officials, and other relevant information. Regarding Justice components, for example, DEA and the FBI each have a centralized firearms database; however, the databases do not include specific information indicating whether the firearms are issued to individuals versus retained in field office vaults for use in specific operations. Thus, the firearms training units of DEA and the FBI provided us estimates of the number of firearms issued to their respective agencies’ employees. We also sought to determine the number of firearms that were personally owned by law enforcement officers and authorized for official use. When organizationwide data on the number of personally owned firearms authorized for official use were not readily available, we requested and obtained, when possible, estimates based on the organization’s firearms policies and practices, number of current enforcement officials, and/or other relevant information. We did not independently verify the reliability of the firearms-related data provided to us. However, we did question agency officials about reliability issues, such as how data were collected, what tests or general system reviews were conducted to ensure that data were accurate, and whether any concerns had been raised about the reliability or accuracy of these data. 
The officials stated that firearms transactions data—e.g., records of purchases, issuances, transfers, and disposals—are entered into the applicable database system. To ensure accuracy, the officials said that the respective agency periodically conducts reviews or tests, such as comparing the results of annual physical inventories to database balances. Generally, the officials told us they had no particular concerns about the reliability or accuracy of their respective organizations’ firearms data. However, INS officials noted that the agency’s centralized database may underreport the total firearms inventory somewhat due to time lags of a few weeks in data input. The officials explained that some field offices are unable to provide on-line updates and, thus, these offices send paper documents to the National Firearms Unit. This may result in some data not having been entered into the agency’s centralized database at the time of our request for firearms information. To determine how Justice, Treasury, and the National Park Service have implemented the presidential directive regarding firearm safety locks, we reviewed the directive, including any explanatory memoranda, which set out the basic requirements for compliance. We also interviewed an Associate Director in the White House Domestic Policy Council knowledgeable about the presidential directive. In addition, we interviewed appropriate officials at the organizations listed in table I.1 and obtained and reviewed relevant documentation regarding (1) their organizations’ implementation of firearm safety lock policies, (2) how their organizations informed law enforcement officers of these policies, and (3) what instructions their organizations provided for use of firearms safety locks. 
In addition, we interviewed officials and obtained and reviewed relevant documentation from the Federal Law Enforcement Training Center regarding how requirements of the presidential directive were incorporated into the Center’s curriculum. Also, to determine whether Justice, Treasury, and the National Park Service purchased an appropriate number of firearm safety locks based on each organization’s interpretation of the directive, we interviewed appropriate agency officials, obtained centrally available procurement documentation, and compared the number and type of safety locks purchased to the number and type of firearms subject to the directive, as defined by each organization. To determine whether the safety locks purchased were in compliance with the requirements of the directive, we (1) compared the descriptions of the types of locks purchased to the criteria specified in the directive and (2) interviewed appropriate organization officials regarding any testing conducted to ensure that the purchased locks successfully prevented the discharge of firearms and could not be easily removed without having a key or knowing the combination. To determine what policies and practices, if any, the Postal Inspection Service, the Capitol Police, and federal district courts had developed regarding firearm safety locks, we interviewed knowledgeable officials and reviewed relevant, centrally available documentation. For example, we interviewed the Inspector-in-Charge of the Postal Inspection Service’s training academy and obtained and reviewed documentation regarding the organization’s firearms safety lock policies, guidance, instructions, and procurements. Also, we interviewed the Deputy Chief of the Capitol Police’s Administrative Services Bureau and obtained and reviewed documentation regarding the agency’s firearms safety policies and practices. Regarding federal judicial districts, we contacted the Office of Program Assessment within the Administrative Office of the U.S. 
Courts to obtain and review applicable guidance on the use of firearms safety locks. Due to time constraints and the number of organizations included in our review, we did not visit field offices to (1) obtain additional or more specific firearms data or (2) verify that the correct number and type of safety lock devices had been distributed to all appropriate employees according to the respective organization’s policies.

Geoffrey R. Hamilton, Senior Attorney

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
|
Pursuant to a congressional request, GAO reviewed federal agencies' compliance with President Clinton's March 1997 directive regarding child safety locks for firearms, focusing on: (1) firearms that have been issued to or are used in an official capacity by employees at selected federal law enforcement organizations; and (2) selected federal law enforcement organizations' implementation of the presidential directive. GAO noted that: (1) the three executive branch organizations GAO reviewed that are subject to the presidential directive--the Department of Justice, the Department of the Treasury, and the National Park Service (NPS)--have issued about 72,000 firearms to employees, with Justice issuances accounting for about 49,000 of the total; (2) additionally, these organizations have authorized for official use about 26,000 personally owned firearms; (3) of this total, Justice authorizations accounted for about 23,000; (4) the Postal Inspection Service has issued about 2,000 firearms to employees and authorized about 400 personally owned firearms for official use; (5) the Capitol Police has issued approximately 1,100 firearms to officers but does not allow personally owned firearms to be used for official duties; (6) the Administrative Office of the U.S. 
Courts had no readily available, centralized data showing the number of firearms issued to or used by federal judicial district officers; (7) Justice, Treasury, and NPS officials told GAO their organizations have taken appropriate steps to implement the presidential directive; (8) in implementing the presidential directive, these organizations are requiring that all firearms issued to employees, as well as personally owned firearms authorized for official use, be equipped with safety lock devices; (9) generally, these organizations did not require safety lock devices for firearms that are maintained in secure agency facilities and not taken home by employees; (10) GAO's review verified that these organizations have developed and taken actions to communicate a safety lock policy to their law enforcement officers; (11) the Postal Inspection Service has voluntarily developed and taken actions to communicate a safety lock policy to its law enforcement officers and has purchased safety lock devices for all permanently issued and personally owned firearms authorized for official use by its inspectors; (12) the Postal Inspection Service does not require safety lock devices for firearms that are maintained in secure agency vaults and not taken home; (13) the Administrative Office of the U.S. Courts has provided the 94 federal judicial districts guidance on the use of firearms; and (14) however, according to the Administrative Office, each district has the discretion to establish its own policies, and the Administrative Office had no readily available summary or overview of current policies and practices in all districts.
|
The Secretary of the Interior is responsible for administering the government's trust responsibilities to tribes and Indians, including managing about $3 billion in Indian trust funds and administering about 54 million acres of Indian land. Management of the Indian trust funds and assets has long been characterized by inadequate accounting and information systems; untrained and inexperienced staff; backlogs in appraisals, ownership determinations, and recordkeeping; the lack of a master lease file and an accounts receivable system; inadequate written policies and procedures; and poor internal controls. To address these long-standing problems, the Congress created the Office of the Special Trustee for American Indians (OST) and required the Special Trustee to develop a comprehensive strategic plan for trust fund management. In April 1997, the Special Trustee submitted a strategic plan to the Congress, but Interior did not fully support the plan. At this Committee’s July 1997 hearing on the Special Trustee’s strategic plan, we testified on the results of our analysis of the strategic plan and provided our assessment of needed actions related to implementation issues that we had identified during that analysis. On August 22, 1997, the Secretary of the Interior indicated that he and the Special Trustee for American Indians had agreed on the problems that needed to be solved immediately and called for the development of a high level implementation plan within 60 days. The High Level Plan was issued about 11 months later on July 31, 1998. In developing the High Level Plan, Interior did not prepare a documented analysis of its mission-related and administrative processes. Rather, it took the problems identified in the Secretary’s memorandum one by one and proposed separate projects to address each. Later, at the Secretary’s direction, an additional project was added. The 13 separate projects are shown in table 1. 
The projects are directed at improving systems, enhancing the accuracy and completeness of Interior’s data regarding the ownership and lease of Indian lands, and correcting deficiencies with respect to records management, training, policy and procedures, and internal controls within 3 years. For each project, the plan assigns management responsibility and identifies some supporting tasks, critical milestones, and resource estimates. Some of the projects are already being implemented. For example, a new Trust Funds Accounting System has already been deployed at several Interior sites. We did not assess the status or effectiveness of this project or other individual projects. Instead, we focused on whether Interior has assurance that the information technology aspects of the plan, which are essential to the success of the majority of the projects and therefore the overall plan, were properly planned and executed. Interior estimates that it will spend $147.4 million from fiscal years 1997 through 2000 on this effort. About $60 million of this amount is to be spent on developing and improving information systems, $54 million on data cleanup, $17 million on records management, $8 million on training, and $8 million on all other activities. The objectives of our review were to assess whether Interior has reasonable assurance that (1) the High Level Plan provides an effective solution for addressing long-standing problems with Interior’s Indian trust responsibilities and (2) its acquisition of a new asset and land records management service will cost effectively satisfy trust management needs. 
To determine whether Interior has reasonable assurance that the High Level Plan provides an effective solution for addressing Interior’s long-standing problems with its Indian trust responsibilities, we reviewed the Clinger-Cohen Act of 1996 and current technical literature as a basis for assessing the information technology aspects of the High Level Plan; reviewed the process that was used to develop the plan; reviewed the Strategic Plan that was produced by Interior’s Special Trustee for American Indians; met with senior Interior officials responsible for developing the plan, including Interior’s Chief Information Officer, Chief Financial Officer, Deputy Special Trustee, and the Interior contractor who assisted in the development of the plan; and analyzed the High Level Plan for internal consistency and compliance with generally accepted best practices. We focused on the information technology aspects of the plan because they are essential to its success. To determine whether Interior has reasonable assurance that its acquisition of a new asset and land records management service will cost effectively satisfy trust management needs, we reviewed the Clinger-Cohen Act of 1996; federal policy governing acquisition efforts, including Office of Management and Budget guidance and Federal Information Processing Standards; and other current literature to determine the statutory and administrative requirements and best practices that should be used in acquiring software-intensive services such as the asset and land records service; and reviewed Interior documents relating to this acquisition, including the Request for Information, vendor responses, and the Request for Proposals. 
We did not review the selection process or documents produced as part of this process subsequent to the issuance of the Request for Proposals. We also met with senior Interior officials responsible for acquiring the service, including Interior’s Chief Information Officer, Chief Financial Officer, Special Trustee, and the Interior contractor who assisted in the acquisition of the new service. We performed our work at the Department of the Interior, Office of the Special Trustee, and Bureau of Indian Affairs in Washington, D.C., from July 1998 through November 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of the Interior. On March 19, 1999, the Assistant Secretary for Policy, Management and Budget provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section of this report and reprinted in appendix I. Although Interior plans for its components to independently improve information systems or acquire information management services, at a cost of about $60 million, it has not yet defined an integrated architecture for Indian trust operations. The Clinger-Cohen Act requires the Chief Information Officer to develop and maintain an information systems architecture. Without a target architecture, agencies are at risk of building and buying systems that are duplicative, incompatible, and unnecessarily costly to maintain and interface. In 1992, we issued a report defining a comprehensive framework for designing and developing system architectures. This framework specifies (1) the logical or business component of an architecture, which serves as the basis for (2) the technical or systems component. The logical component ensures that the systems meet the business needs of the organization. 
It provides a high-level description of the organization’s mission and target concept of operations; the business functions being performed and the relationships among functions; the information needed to perform the functions; the users and locations of the functions and information; and the information systems needed to support the agency’s business needs. The technical component ensures that the systems are interoperable, function together efficiently, and are cost-effective over their life cycles. The technical component details specific standards and approaches that will be used to build systems, including hardware, software, communications, data management, security, and performance characteristics. Experience at other agencies shows that these risks are real. For example: In February 1997, we reported that the Federal Aviation Administration’s (FAA) lack of a complete architecture resulted in incompatibilities among its air traffic control systems that (1) increased system development, integration, and maintenance costs and (2) reduced overall system performance. Without architecturally defined requirements and standards governing information and data structures and communications, FAA was forced to spend an additional $38 million to acquire a system dedicated to overcoming incompatibilities between systems. In May 1998, we reported that the Customs Service’s architecture was incomplete and ineffectively enforced and that, according to a contractor, Customs components had developed and implemented incompatible systems, which increased modernization risks and implementation costs. In July 1997, we reported that because it lacked a system architecture, the Department of Education had made limited progress in integrating its National Student Loan Data System with other student financial aid databases. 
Moreover, without an architecture, the department could not correct long-standing problems resulting from a lack of integration across its student financial aid systems. In July 1995, we reported that because its architecture was incomplete and did not define the interfaces and standards needed to ensure the successful integration of its Tax System Modernization projects, IRS was at increased risk of developing unreliable systems that would not work together effectively and would require costly redesign. Without an architecture for Indian trust operations, Interior has no assurance that the 13 projects delineated in the High Level Plan and the systems supporting them are cost-effective and are not duplicative, inconsistent, and incompatible. In fact, in reviewing the High Level Plan, we found indications that Interior was already encountering these problems. For example: Three weeks after the plan was issued, Interior recognized that TAAMS and LRIS were so closely related that they should be merged into a single project. The BIA Probate Backlog project and the OHA Probate Backlog project also appear to be closely related; however, Interior did not thoroughly analyze the relationship between these two efforts in formulating the High Level Plan and did not determine whether, like TAAMS and LRIS, they should be combined. The High Level Plan shows that the BIA Probate Backlog and OHA Probate Backlog projects depend on the TAAMS project to provide them with a case tracking system by the end of 1998. This system is to manage the flow of probate cases through BIA and OHA and enable management to identify resources needed to eliminate the backlog. However, in describing TAAMS, the High Level Plan does not mention the case tracking system. Further, according to Interior officials, development of the case tracking system under TAAMS is not scheduled to be funded until fiscal year 2000, and delivery is not planned before September 2000. 
Although Interior has already initiated several projects to “clean” data that will be used by TAAMS, it has not yet defined the data elements that this project needs. Until Interior defines the logical characteristics of its business environment and uses them to establish technical standards and approaches, it will remain at risk of investing in projects that are redundant and incompatible, and do not satisfy Indian trust management requirements cost effectively. In undertaking its effort to acquire a new asset and land record management service, Interior failed to follow a sound process for ensuring that the most cost-effective technical alternative was selected and reducing acquisition risks. Specifically, Interior did not adequately define important service requirements or sufficiently analyze technical alternatives. Further, Interior did not develop an overall risk management plan, require the contractor to demonstrate that its system could work with Interior’s data and systems, or establish realistic project time frames. Interior intended to acquire TAAMS as a commercial-off-the-shelf (COTS) system. With this goal in mind, in May 1998, Interior issued a Request for Information. The responses from vendors were evaluated using a 15-category form. After this survey was completed, Interior decided to combine the TAAMS project with the LRIS project and to obtain the needed functionality of these combined projects by acquiring a trust asset information management service using a COTS system. Under this approach, a contractor would manage Interior-provided land and trust account data in a contractor-owned and maintained data center while Interior would perform its trust management functions by remotely accessing contractor-provided applications that run in the data center. 
To help ensure successful acquisition of a software-intensive service, information technology experts recommend that organizations establish and maintain a common and unambiguous definition of requirements (e.g., function, performance, help desk operations, data characteristics, and security) among the acquisition team, the service users, and the contractor. The requirements must be consistent with one another, verifiable, and traceable to higher level business or functional requirements. Poorly defined, vague, or conflicting requirements can result in a service that does not meet business needs or cannot be delivered on schedule and within budget. Interior did not follow a sound process for defining requirements. First, Interior did not define high-level functional requirements for projects contained in the High Level Plan to help guide the requirements development process for each of the individual projects. For this effort, such high-level functional requirements might have included the following: The contractor’s system will contain the necessary data to support the financial information needs of the probate function. Records management policies and procedures will be consistent with departmental guidelines. Sensitive but unclassified data, such as data covered by the Privacy Act, will be encrypted in accordance with Federal Information Processing Standards whenever they are transmitted outside of the facility that generated the data. Data elements must conform to applicable departmental naming conventions and formats specified in the data dictionary. Automated records must be maintained in a form that ensures land ownership records can be traced back to the original source of the ownership. By not defining high-level functional requirements, Interior lacks assurance that the projects it develops and acquires will meet its business needs. 
Second, while Interior specified general service requirements in its Request for Proposals, such as the need for the contractor to (1) administer all databases, (2) perform maintenance operations outside BIA’s normal working hours, (3) provide configuration management of data center hardware and software, and (4) perform daily, weekly, and monthly backup of operational data and archiving, it did not clearly specify all of BIA’s requirements, including its functional, security, and data management requirements. For example: While Interior stated that the system “shall include safeguards against conflicts of interest, abuse, or self-dealing,” it did not define these terms. A definition of these terms in the context of Indian trust operations is necessary to design and determine the adequacy of proposed system safeguards and approaches. In discussing system security, Interior (1) specified an inappropriate technology for encrypting data, (2) did not specify how long system passwords should be, and (3) did not require password verification features. Interior did not define key data management requirements, including what data elements were needed to meet Interior’s information requirements and whether existing systems contained the necessary data elements. The Clinger-Cohen Act requires agencies to establish a process to assess the value and risks of information technology investments, including consideration of quantitatively expressed projected net, risk-adjusted return on investment and specific quantitative and qualitative criteria for comparing and prioritizing alternative information technology projects. Only by comparing the costs, benefits, and risks of a full range of technical options can agencies ensure that the best approaches are selected. Interior did not thoroughly analyze technical alternatives before choosing a vendor to provide the asset and land records management service. 
First, Interior did not assess the desirability of satisfying its requirements by (1) modifying existing legacy systems, (2) acquiring a COTS product and using existing Interior infrastructure resources, (3) building a system that would provide the necessary capability, or (4) acquiring a service. Second, in surveying the availability of COTS products, Interior did not perform a gap analysis which would systematically and quantitatively compare and contrast these products against Interior’s requirements based on functional, technical, and cost differences. Specifically, although Interior concluded based on the results of its Request for Information that none of the COTS products available from responding vendors would meet all its requirements, Interior did not determine, for each COTS product, which requirements could not be satisfied and how difficult and expensive it would be to make the needed modifications. For example, Interior did not determine whether all needed data elements could be represented conveniently and manipulated effectively by each COTS product. Third, in acquiring a service, Interior did not consider how its information, once it had been loaded into a contractor’s system, would be retrieved by Interior for subsequent use when the contract was terminated. Because Interior did not compare the costs and benefits of a full range of technical options, it has no assurance that it selected the most cost-effective alternative. According to information technology experts, a key practice associated with successful information technology service acquisitions is to formally identify risks as early as possible and adjust the acquisition to mitigate those risks. 
An effective risk management process, among other things, includes (1) developing an acquisition risk management plan to document the procedures that will be used to manage risk throughout the project, (2) conducting risk management activities in accordance with the plan (e.g., identifying risks, taking mitigation actions, and tracking actions to completion), and (3) preparing realistic cost and schedule estimates for the services being acquired. In acquiring its new TAAMS service, Interior did not carry out critical risk management steps. First, Interior did not develop a risk management plan. Without this plan, Interior has no disciplined means to predict and mitigate risks, such as the risk that the service will not (1) meet performance and business requirements, (2) work with Interior’s systems, and/or (3) be delivered on schedule and within budget. Second, in structuring a capabilities demonstration for the contractor’s system, Interior did not require the contractor to use Interior-provided data. Ensuring that the contractor’s system can work with data unique to Interior is important since some data elements, such as fractionated ownership interests, are not commonly used in the private sector. Third, in structuring the capabilities demonstration, Interior did not require the contractor to demonstrate that its system could interface with Interior’s Trust Fund Accounting System and a Mineral Management Service system. As a result, Interior will not know whether the contractor’s system can interoperate with its legacy systems. Fourth, Interior did not prepare a realistic project management schedule. Organizations following sound software acquisition practices would typically (1) identify the specific activities that must be performed to produce the various project deliverables, (2) identify and document dependencies, (3) estimate the amount of time needed to complete the activities, and (4) analyze the activity sequences, durations, and resource requirements. 
By contrast, Interior used the Secretary’s stated expectation that all Indian trust fund-related improvements should occur within a 3-year period beginning in 1998 as the starting point for developing the TAAMS project schedule. Because it did not establish clear requirements and did not take critical steps to manage risk effectively, Interior has no assurance that the new asset and land records management service will meet its specific performance, security, and data management needs or that the service can be delivered on schedule and within budget. Interior cannot realistically expect to develop compatible and optimal information systems without first developing an information systems architecture for Indian trust operations. If it proceeds to implement the projects outlined in the High Level Plan without first developing such an architecture, individual improvement efforts such as the initiative to acquire a service for managing assets and land records may well incur cost and schedule overruns and fail to satisfy Interior’s trust management needs. 
To ensure that Interior’s information systems are compatible and effectively satisfy Interior’s business needs, we recommend that, before making major investments in information technology systems to support trust operations, the Secretary direct the Chief Information Officer to develop an information systems architecture for Indian trust operations that (1) provides a high-level description of Interior’s mission and target concept of operations, (2) defines the business functions to be performed and the relationships among functions; the information needed to perform the functions; the users and locations of the functions and information; and the information systems needed to support the department’s business needs, (3) identifies the improvement projects to be undertaken, specifying what they will do, how they are interrelated, what data they will exchange, and what their relative priorities are, and (4) details specific standards and approaches that will be used to build or acquire systems, including hardware, software, communications, data management, security, and performance characteristics. To reduce the risks we identified with the effort to acquire a service for managing assets and land records, we recommend that the Secretary of the Interior direct the Chief Information Officer to (1) clearly define and validate functional requirements, security requirements, and data management requirements, (2) develop and implement an effective risk management plan, and (3) ensure that all project decisions are based on objective data and demonstrated project accomplishments, and are not schedule driven. In its written comments on a draft of this report, Interior states that our oversight provides a valuable perspective and allows Interior to benefit from our experience in dealing with similar issues at other agencies. However, Interior disagrees with the report’s conclusions and does not indicate whether it will implement the recommendations. 
In disagreeing with the report’s first conclusion (that Interior does not have reasonable assurance that its High Level Plan for improving Indian trust operations provides an effective solution for addressing long-standing management weaknesses), Interior states that although it recognizes the importance of a formal architecture and does not yet have one, the “lack of a formal architecture is not a significant impediment to success in this case, given the use of proven COTS products.” Interior also expresses confidence because this effort is smaller than the modernization efforts that have failed at other agencies like FAA. This position is not valid. The decision to use COTS products does not compensate for the lack of an integrated information system architecture for Indian trust operations. Such an architecture would have identified and preferably reengineered the business functions of trust operations, and then mapped these into information systems to support the business functions. Just choosing COTS products from the marketplace does not accomplish the same purpose. In fact, the close relationship between business functions and IT is the reason we focus on all 13 projects in the High Level Plan as a whole, even though, as Interior points out in its comments, only 4 of the projects are information technology systems projects. Further, small efforts, like IRS’ $17 million Cyberfile project, as well as large ones, like FAA’s modernization, have failed due to poor program management, including lack of an architecture. With an estimated cost of $60 million for IT systems and an additional $54 million for data cleanup, the information systems supporting the 13 projects will have to be effectively managed if they are to succeed. 
Interior bases its decision to proceed with its IT acquisitions without a formal architecture (and without an estimated date for completing one) on the “pressing need for more responsive Indian trust systems.” However, moving to implement complex systems before developing an architecture does not expedite solutions. Instead, it greatly increases the chance of building duplicative systems, introducing potential integration problems, and perpetuating inefficient and overlapping business processes that currently exist in Indian trust operations. This is especially true in the case of TAAMS as Interior does not yet know whether the COTS product can effectively work with other Interior systems or with Interior-provided data. Also, as Interior notes in its comments, it consolidated TAAMS and LRIS from two separate projects into one because the “consolidation eliminated duplication within each system (80% of the data is shared), made better use of limited resources, and eliminated potential integration issues.” Similarly, Interior states that it is now considering streamlining the probate process and consolidating the BIA and OHA probate projects. Had Interior developed a sound architecture, it would have systematically identified the shared data and overlapping business processes before proposing either TAAMS and LRIS or BIA probate and OHA probate as separate projects in the High Level Plan. Moreover, it would have done the analysis needed to know whether other duplications and/or inconsistencies exist among its projects. Interior also disagrees with the report’s second conclusion that Interior does not have reasonable assurance that its acquisition of the new asset and land records management (TAAMS/LRIS) service will cost effectively satisfy trust management needs. 
Our report bases this conclusion on findings that Interior did not follow sound processes for defining TAAMS/LRIS requirements, thoroughly analyzing technical alternatives before selecting an approach, or managing technical risk. Interior states that its requirements were adequately defined and that its requirements definition process consisted of conducting several requirements reviews with the end-user community and deciding “early on to adopt the business processes afforded through implementation of the COTS product.” Just as deciding to use COTS products does not compensate for the lack of an integrated system architecture for Indian trust operations, selecting a COTS product before thoroughly analyzing requirements does not constitute an effective requirements definition process. Further, while Interior says that it will adopt the business processes afforded through implementation of the COTS product, it has at the same time recognized that the COTS product does not meet all of its requirements and will have to be modified. For example, Interior must modify the COTS product to handle fractionated interests and title requirements that are unique to Indian ownership. Interior does not directly address the finding that it did not thoroughly analyze technical alternatives before choosing a vendor and a COTS product to provide asset and land records management services. As discussed in the report, these technical alternatives include (1) modifying existing legacy systems, (2) acquiring a COTS product and using existing Interior infrastructure resources, (3) building a system to provide the necessary capability, or (4) acquiring a service. Instead, Interior dismisses any use of the legacy systems, stating that the systems “. . . employ both outdated software products and processing techniques . . . ,” and “. . . 
would require a virtual rewrite;” does not address the second and third alternatives at all; and states once again, without having performed a gap analysis, that “the use of COTS product, combined with a service bureau approach, does provide the Department an economical and timely solution.” Because it has not thoroughly analyzed all technical alternatives and does not have convincing, objective evidence to support its decision, there is no assurance that Interior has selected the most cost-effective alternative. Interior then describes several actions that it believes minimize acquisition risk. Specifically, it “. . . established a risk management plan shortly after awarding the TAAMS contract”; will have other contractors review the work of the TAAMS contractor; and will evaluate the results of pilot testing. Because all of these actions occur after the vendor was selected and the contract awarded, they are not relevant to our finding that Interior did not follow a sound process for selecting an approach and, therefore, does not have reasonable assurance that its trust management needs will be met cost effectively. In its comments, Interior says “. . . a rigorous, standard approach was not used in identifying the requirements for TAAMS . . .”; and “. . . we would have preferred to use actual BIA data [in Operational Capabilities Demonstrations], but given the time constraints, we decided to use scripts . . .”. Further, Interior recognizes that it had to correct resulting errors identified in our report. Specifically, the Request for Proposal and/or the contract for TAAMS had to be changed to clarify terms such as “conflicts of interest, abuse, and self-dealing”; to correct the mistaken reference to Public Key encryption; and to require monthly delivery to the government of all data to facilitate import into other applications. 
However, because Interior does not explicitly recognize the flaws in its processes and does not acknowledge the relationship between these weaknesses and the errors that have already occurred, it has not committed to correcting these weaknesses and is likely to repeat similar errors in the future. Interior also raises several subsidiary issues. It asserts that our review was incomplete because we did not assess the TAAMS vendor selection process, which, in Interior’s opinion, was necessary to determine if the TAAMS acquisition was cost effective. The objective of our audit was not to determine how Interior selected its vendor; it was to determine whether Interior had done the analysis needed to determine what was required and to select an approach to the project that would be cost-beneficial. How Interior selected its vendor is not relevant to that objective and was therefore not within the scope of our audit. Interior claims that we stopped the audit work “prematurely.” However, Interior does not cite any significant events that have occurred or critical corrections made since the audit ended that would alter our conclusions. In fact, during the review, we evaluated every document provided by Interior. Moreover, this review was initiated, performed, and concluded according to its established schedule, after its objectives had been completed. The only deviation from that schedule was made to accommodate Interior’s request for an additional 6 business days to comment on this report. Interior is concerned that we focused only on the TAAMS/LRIS project and therefore were not in a position to make broad statements about the High Level Plan. In focusing on all IT aspects of the plan, we assessed the interrelationships of the 13 individual projects as well as the overall process for developing the plan. 
This enabled us to determine that Interior did not have reasonable assurance that the High Level Plan provides an effective solution for addressing its long-standing management weaknesses. We assessed the TAAMS/LRIS project because it was ongoing during our review, is one of the major IT projects in the High Level Plan, and illustrates fundamental problems with Interior’s approach. Finally, Interior states that once it deploys TAAMS, it will have the means to reengineer its business processes to the “industry standard.” This runs counter to a basic tenet of reengineering: organizations should first reengineer business processes and then assess and acquire or build the systems necessary to support those processes. This approach enables organizations to ensure that they implement optimal technical solutions and that they do not limit their business process alternatives or entrench themselves in ineffective ways of doing business. Interior needs to implement our recommendations to substantially reduce the risk to key IT systems in trust management operations. Interior’s comments are provided in their entirety in appendix I along with our detailed evaluation of them. We are sending copies of this report to Senator Daniel K. Inouye, Vice Chairman, Senate Committee on Indian Affairs, and to Senator Robert C. Byrd, Senator Joseph I. Lieberman, Senator Ted Stevens, and Senator Fred Thompson, and to Representative Dan Burton, Representative George Miller, Representative David Obey, Representative Henry A. Waxman, Representative C.W. Bill Young, and Representative Don Young, in their capacities as Chairmen and Ranking Minority Members of the Senate Committee on Appropriations, Senate Committee on Governmental Affairs, House Committee on Appropriations, House Committee on Resources, and House Committee on Government Reform. We are also sending copies of this report to the Honorable Jacob J. 
Lew, Director, Office of Management and Budget, and to other interested congressional committees and Members of Congress. Copies will also be made available to others upon request. If you have any questions about this report, please call me at (202) 512-6415. Other major contributors to this report are listed in appendix II. The following are GAO’s comments on Interior’s March 19, 1999, letter responding to a draft of this report. 1. According to Interior’s High Level Plan (page 70), five projects are classified as data cleanup projects: OST data cleanup, BIA data cleanup, BIA probate backlog, OHA probate backlog, and BIA appraisal program. According to the schedules provided in the High Level Plan (pages 64 through 67), OST data cleanup was initiated in January 1998 and the BIA data cleanup project began in August 1998. 2. Our intent was to present the sequence of events chronologically, not to imply that there was a change in direction in the middle of the TAAMS acquisition. We clarified the language in the report to reflect this more precisely. 3. The report does not state that the High Level Plan should include all high-level requirements. Our report makes the point that the high-level requirements for all 13 projects were not defined anywhere. 4. Although Interior’s letter indicates otherwise, neither the RFP nor the amendment included any definitions for the terms “conflicts of interest, abuse and self-dealing.” In subsequent correspondence to us, Interior officials told us that they believe these terms are commonly used and do not require additional definition. However, Interior requires that TAAMS implement safeguards to identify incidents of conflicts of interest, abuse, and self-dealing. Precise definition of requirements, not assumptions about “common usage” for terms that by their nature are subject to broad interpretation, is needed to implement system features effectively. 5. 
The TAAMS RFP states this requirement as follows: “Access to the system shall at a minimum require unique user IDs with passwords. The system shall record unsuccessful attempts . . .” The parenthetical phrase discussing password length does not appear. After receiving a draft of this report, Interior issued an amendment to the contract containing the phrase. This is another example of inadequate requirement definition that Interior is addressing piecemeal and ad hoc, without correcting the fundamental process weaknesses that caused the problem. 6. Section J of the RFP contains a collection of data elements from different legacy systems, but it is not a data dictionary for TAAMS. Because the data elements required by TAAMS were not defined prior to asking vendors to respond to the TAAMS RFP, Interior has no assurance that the vendor’s product can handle all data elements crucial to Indian trust operations. 7. We are not suggesting a priori that the legacy system is a viable solution. Neither we nor Interior can make informed decisions without analyzing relevant data. We are pointing out that, consistent with the Clinger-Cohen Act and good IT investment practices, Interior should have evaluated all technical alternatives before selecting one. 8. Interior has quoted this statement out of context. The full sentence from our draft report states: “Specifically, although Interior concluded based on the results of its Request for Information that none of the COTS products available from responding vendors would meet all its requirements, Interior did not determine, for each COTS product, which requirements could not be satisfied and how difficult and expensive it would be to make the needed modifications." Our point is that Interior did not perform a gap analysis on products available in the marketplace to determine whether the COTS approach was optimum. 
According to a Mitretek official, the Mitretek study was completed after the Request for Proposal was issued and was intended to serve as the government’s independent cost estimate for use in source selection. 9. Interior is in error. While all projects do, indeed, contain some elements of risk, our point was that Interior was incurring and not mitigating unnecessarily high levels of risk because it does not have an integrated architecture for Indian trust operations and has not corrected fundamental weaknesses in its IT management processes.
Pursuant to a congressional request, GAO evaluated the Department of the Interior's High-Level Implementation Plan for improving its management of the Indian trust funds and resources under its control, focusing on whether the Interior has reasonable assurance that: (1) the High-Level Plan provides an effective solution for addressing its long-standing problems; and (2) its acquisition of a new asset and land records management service will cost effectively satisfy trust management needs. GAO noted that: (1) Interior does not have reasonable assurance that its High-Level Plan for improving Indian trust operations provides an effective solution for addressing long-standing management weaknesses; (2) the plan: (a) recognizes the severity of long-standing weaknesses in managing trust fund assets; (b) identifies 13 projects intended to improve information systems, enhance the accuracy and completeness of its data regarding the ownership and lease of Indian lands, and address deficiencies with respect to records management, training, policy and procedures, and internal controls; and (c) assigns responsibility for oversight and management of the 13 projects; (3) however, Interior has not properly analyzed its information technology needs which are essential to the overall success of the plan; (4) until Interior develops an information systems architecture addressing all of its trust management functions, it cannot ensure that its information systems will not be duplicative or incompatible or will optimally support its needs across all business areas; (5) Interior also does not know whether its acquisition of a new service for managing Indian assets and land records will cost-effectively meet trust management needs; (6) before deciding to contract with a service vendor, Interior did not adequately define important service requirements or sufficiently analyze technical alternatives; (7) Interior also did not take the steps needed to minimize acquisition risks; (8) in 
particular, it did not develop a risk management plan, ensure that the vendor's system could work with Interior's data and systems, or establish realistic project timeframes; and (9) thus, Interior faces an unnecessarily high risk that the service will not meet its general business and specific performance needs, and it lacks the means for dealing with this risk.
Foreign banks have been cited as important providers of capital to the U.S. economy. According to Federal Reserve data, as of September 30, 2011, 216 foreign banks from 58 countries had banking operations in the United States. They held about $3.4 trillion, or about 22 percent of total U.S. banking assets; about 25.7 percent of total U.S. commercial and industrial loans; about 17.5 percent of total U.S. deposits; and about 14.9 percent of total U.S. loans. Foreign banks may operate in the United States under several different structures, which include branches, agencies, subsidiary banks, representative offices, Edge Act corporations, Agreement corporations, and commercial lending companies (see table 1). Most operate through branches and agencies because as extensions of the foreign parent bank, they do not have to be separately capitalized and can conduct a wide range of banking operations. Both domestic banks and U.S. subsidiary banks of foreign banks may be owned or controlled by a bank holding company. Holding companies are legally separate entities from their subsidiary banks, are subject to separate capital requirements, and are supervised and regulated by the Federal Reserve. In the United States, bank holding companies are common and function as the top-tier entity in the corporate structure. In many foreign countries, notably in Europe, the deposit-taking bank is the top-tier entity in the corporate structure and bank holding companies are less common. According to the Federal Reserve, as of September 30, 2011, there were 29 foreign-owned intermediate holding companies in the United States. This report focuses on changes to the capital requirements for these entities under the Dodd-Frank Act. Bank and thrift organizations are required to hold capital so that certain parties, such as depositors and taxpayers, would not be harmed if the bank or thrift faced unexpected substantial losses. 
There are many forms of capital, the strongest of which do not have to be repaid to investors, do not require periodic dividend payments, and are among the last claims to be paid in the event of bankruptcy. Common equity, which meets all of these qualifications, is considered the strongest form of capital. Weaker forms of capital have some but not all of the features of common equity. National banking regulators classify capital as either tier 1—currently the highest-quality form of capital, which includes common equity—or tier 2, which is weaker in absorbing losses. Different entities within a banking organization may have different capital requirements. For example, a subsidiary bank and a broker-dealer in the same corporate structure may be required to hold different levels of capital, and those capital requirements are established and supervised by different regulators. In the 1980s, U.S. and international regulators recognized that common borrowers and complex products and funding sources had made the world’s financial markets increasingly interconnected. Regulators also acknowledged that bank regulatory capital standards generally were not sensitive to the risks inherent in a bank’s activities and that distressed or failing large, internationally active banks posed a significant global risk to the financial system. These concerns underscored the need for international regulatory coordination and harmonization of capital standards. As a result, in 1988 the Basel Committee on Banking Supervision (Basel Committee) adopted a risk-based capital framework known as the Basel Capital Accord (Basel I). Basel I aimed to measure capital adequacy (that is, whether a bank’s capital is sufficient to support its activities) and establish minimum capital standards for internationally active banks.
It consisted of three basic elements: (1) a target minimum total risk-based capital ratio of 8 percent and tier 1 risk-based capital ratio of 4 percent, (2) a definition of capital instruments to constitute the numerator of the capital-to-risk weighted assets ratio, and (3) a system of risk weights for calculating the denominator of the ratio. While the framework was designed to help improve the soundness and stability of the international banking system, reduce some competitive inequalities among countries, and allow national discretion in implementing the standards, it did not explicitly address all types of risks that banks faced. Rather, it addressed credit risk, which the Basel Committee viewed as the major risk banks faced at the time. Over time it became apparent to bank regulators that Basel I was not providing a sufficiently accurate measure of capital adequacy because of the lack of risk sensitivity in its credit risk weightings, financial market innovations such as securitization and credit derivatives, and advancements in banks’ risk measurement and risk management techniques. The accord was revised and enhanced multiple times after 1988 because of its shortcomings. For example, in 1996, Basel I was amended to take explicit account of market risk in trading accounts. The market risk amendment allowed banks to use internal models of risks to determine regulatory capital levels. Table 2 identifies some key features of capital regime enhancements to the Basel accords. Basel II, adopted in June 2004, aims to better align minimum capital requirements with enhanced risk measurement techniques and encourage banks to develop a more disciplined approach to risk management. It consists of three “pillars”: (1) minimum capital requirements, (2) a supervisory review of an institution’s internal assessment process and capital adequacy, and (3) effective use of disclosure to strengthen market discipline as a complement to supervisory efforts. 
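To make the Basel I ratio arithmetic described above concrete, the sketch below computes risk-based capital ratios for a hypothetical bank. The risk-weight categories and all balance-sheet figures are illustrative assumptions; only the 4 percent tier 1 and 8 percent total minimums come from the accord as described.

```python
# Simplified illustration of the Basel I risk-based capital ratios.
# Risk weights follow the broad Basel I categories; the exposure
# amounts and capital levels are hypothetical.

RISK_WEIGHTS = {
    "cash": 0.0,                  # claims on OECD central governments
    "interbank": 0.2,             # claims on OECD banks
    "residential_mortgage": 0.5,  # loans fully secured by residential property
    "corporate_loan": 1.0,        # claims on the private sector
}

def risk_weighted_assets(exposures):
    """Denominator of the ratio: sum of exposure amount * risk weight."""
    return sum(amount * RISK_WEIGHTS[category]
               for category, amount in exposures.items())

def capital_ratios(tier1, tier2, exposures):
    """Numerators are the capital tiers; both ratios share one denominator."""
    rwa = risk_weighted_assets(exposures)
    return {
        "tier1_ratio": tier1 / rwa,
        "total_ratio": (tier1 + tier2) / rwa,
    }

# A hypothetical balance sheet (amounts in $ millions).
exposures = {"cash": 100, "interbank": 200,
             "residential_mortgage": 400, "corporate_loan": 300}
ratios = capital_ratios(tier1=30, tier2=15, exposures=exposures)

# Basel I target minimums: tier 1 >= 4%, total >= 8% of risk-weighted assets.
print(ratios["tier1_ratio"] >= 0.04, ratios["total_ratio"] >= 0.08)
```

Note how the risk weighting shrinks the denominator: the $1 billion of nominal exposures above produce only $540 million of risk-weighted assets, so a bank holding mostly low-weight assets needs less capital in absolute terms.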
Basel II established several approaches (of increasing complexity) to measuring credit and operational risks. The “advanced approaches” for credit risk and operational risk use parameters determined by a bank’s internal systems as inputs into a formula supervisors developed for calculating minimum regulatory capital. In addition, banks with significant trading assets, which banks use to hedge risks or speculate on price changes in markets for themselves or their customers, must calculate capital for market risk using internal models. The advanced approaches allow some bank holding companies to reduce capital from the levels required under Basel I. Large internationally active U.S. holding companies are implementing the first qualification phase—known as the parallel run—of the Basel II advanced approaches. Although some of these large companies have begun to report Basel II capital ratios to their bank regulators, they still are subject to Basel I capital requirements, as are other U.S. banks. Financial institutions in most other industrialized countries are subject to the Basel II capital standards. In response to the 2007-2009 financial crisis, Basel II was amended in 2009 by Basel II.5 to enhance the measurements of risks related to securitization and trading book exposures. Also in response to the 2007-2009 financial crisis, in 2010, the Basel Committee developed reforms, known as Basel III, which aim to improve the banking sector’s ability to absorb shocks arising from financial and economic stress, whatever the source; improve risk management and governance; and strengthen banks’ transparency and disclosures. The reforms target (1) bank-level, or microprudential, regulation to enhance the resilience of individual banking institutions to periods of stress and (2) systemwide risks that can build up across the banking sector as well as the amplification of these risks over time.
These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of systemwide shocks. Specifically, Basel III significantly changes the risk-based capital standards for banks and bank holding companies and introduces new leverage and liquidity requirements. These include a new minimum common equity capital requirement of 4.5 percent of risk-weighted assets (the capital needed to be regarded as a viable concern); a new capital conservation buffer of 2.5 percent to provide a cushion during financial shocks to help companies remain above the 4.5 percent minimum; and more stringent risk weights on certain types of risky assets, particularly securities and derivatives. Basel III also defines capital more narrowly than the previous accords. The new common equity tier 1 capital measure is limited mainly to common equity because common equity is generally the most loss-absorbing instrument during a crisis. (Basel Committee on Banking Supervision, Basel III: A Global Regulatory Framework for More Resilient Banks and Banking Systems, December 2010.) U.S. regulation of foreign-owned intermediate holding companies is intended to be equivalent to regulation of domestic counterparts to help ensure that foreign bank operations have the opportunity to compete on a level playing field in the U.S. market. Several laws enacted since 1978 have shaped the regulation of foreign-owned intermediate holding companies and other foreign-owned banking operations. The International Banking Act of 1978 (IBA) is the primary federal statute regulating foreign bank operations in the United States. In passing IBA, Congress adopted a policy of “national treatment,” the goal of which is to allow foreign banks to operate in the United States without incurring either significant advantage or disadvantage compared with U.S. banks. To implement this policy, IBA brings branches and agencies of foreign banks located in the United States under federal banking laws and regulations.
IBA and subsequent laws and regulations give foreign banks operating in the United States the same powers and subject them to the same restrictions and obligations as those governing U.S. banks, with some adaptations for structural and organizational differences. For example, most foreign banks’ operations are conducted through branches, and they generally can engage in the same activities as branches of U.S. banks. However, the U.S. branches of foreign banks are prohibited by law from acquiring deposit insurance from FDIC, and therefore may not accept retail deposits, whereas branches of U.S. banks can. In 1991, Congress passed the Foreign Bank Supervision Enhancement Act (FBSEA). This Act, which amended IBA, authorizes the Federal Reserve to oversee all foreign bank operations in the United States. Foreign banking organizations seeking to establish subsidiaries, branches, or agencies in the United States must apply for an operating charter from either OCC (national charter or federal license) or a state banking agency (state license). The Federal Reserve must also approve these applications. The Federal Reserve’s approval process involves determining the soundness of the foreign parent bank’s activities. Specifically, the Federal Reserve assesses, among other factors, the extent to which the home country supervisor (1) ensures that the foreign parent bank has adequate procedures for monitoring and controlling its activities globally, (2) obtains information on the condition of the foreign bank and its subsidiaries and offices outside the home country through regular reports of examination and audits, (3) obtains information on the dealings and relationships between the foreign bank and its affiliate companies, and (4) receives from the bank consolidated financial reports for analyzing the bank’s global financial condition.
Another important requirement in the Federal Reserve’s approval process includes assessing the quality of supervision provided by the applicant’s home country supervisor. Specifically, the Federal Reserve determines the extent to which (1) the home country supervisor evaluates prudential standards, such as capital adequacy and risk asset exposure, on a global basis, and (2) the foreign parent bank is subject to comprehensive consolidated supervision—that is, the home country supervisor monitors the organization’s overall operations across all legal subsidiaries and national jurisdictions. (12 U.S.C. § 3105(d)(2)(A); see also item 15 of Attachment A to Federal Reserve Form FR K-2, International Applications and Prior Notifications under Subpart B of Regulation K.) If the Federal Reserve is satisfied with the bank applicant’s safety and soundness and the quality of the home country supervision, it can approve the foreign bank applicant (including its bank and nonbank affiliates) to do business in the United States. The Federal Reserve also could approve an application if it found that the home country supervisor actively was working to establish arrangements for such supervision and all other factors were consistent with approval. As the host country consolidated supervisor, the Federal Reserve retains full oversight authority over the foreign bank’s U.S. operations. FBSEA also established uniform standards for all U.S. operations of foreign banks, generally requiring them to meet financial, management, and operational standards equivalent to those required of U.S. banking organizations. For example, FBSEA required the Federal Reserve to establish guidelines for converting data on the capital of foreign banks to the equivalent risk-based capital measures for U.S. banks to help determine whether they meet the U.S. standards. Additionally, foreign banks’ U.S.
operations must be examined regularly for unsafe or unsound banking practices and are subject to regulatory financial reporting requirements similar to those for their U.S. counterparts. The Gramm-Leach-Bliley Act permitted foreign and U.S. bank holding companies to become financial holding companies, which are authorized to engage in a wider range of financial activities (such as insurance underwriting and merchant banking) compared with bank holding companies. In response to the Gramm-Leach-Bliley Act, the Federal Reserve modified its long-standing practice of applying its capital adequacy standards to foreign-owned intermediate holding companies. Specifically, in its January 5, 2001, Supervision and Regulation Letter 01-1, the Federal Reserve provided an exemption from complying with its capital adequacy guidelines (capital exemption) to foreign banks that are financial holding companies. The Federal Reserve’s supervisory letter stated that this action was consistent with its treatment of domestic banks and financial holding companies. Officials noted that domestic firms were expected to hold capital on a consolidated basis at the parent level, not the intermediate holding company level. According to the supervisory letter, the capital exemption recognized that the foreign parent bank should be able to hold capital on a consolidated basis on behalf of its subsidiaries. To qualify for the exemption, the foreign-owned intermediate holding company had to meet the standards for financial holding company status. Specifically, for a foreign bank to qualify as a financial holding company, the Federal Reserve was required to determine that the intermediate holding company’s parent foreign bank was well capitalized and well managed on a consolidated basis. Also, its U.S. depository subsidiaries were required to be well capitalized and well managed.
The bank subsidiaries of foreign bank organizations still were subject to the capital adequacy framework (risk-based capital and leverage standards) for insured depository institutions. A relatively small number of foreign-owned intermediate holding companies have relied on the capital exemption. The Federal Reserve reported that 6 of the approximately 50 foreign-owned intermediate holding companies used the capital exemption (exempt holding company) at some point during the period from 2001 to 2010. At the time the Dodd-Frank Act was enacted, in July 2010, 5 foreign-owned intermediate holding companies were relying on the capital exemption. By the end of December 2010, 1 of these 5 holding companies restructured its U.S. operations and no longer relied on the capital exemption. Exempt holding companies generally have operated with less capital than their foreign and domestic peers in the United States, with 1 such institution operating with negative risk-based capital ratios. The Dodd-Frank Act eliminated the capital exemption that the Federal Reserve provided to certain foreign-owned intermediate holding companies. The act requires that, after a 5-year phase-in period following enactment, these companies must satisfy the capital requirements at the intermediate holding company level. The change requires capital in the United States to support the foreign bank’s U.S. operations conducted through a holding company and provides ready capital access for depositor and creditor claims in case the subsidiary depository or holding company fails and needs to be liquidated. According to FDIC, the elimination of the capital exemption also was intended to better ensure that the foreign-owned intermediate holding company served as a “source of strength” for the insured depository institution. Furthermore, according to FDIC, subjecting previously exempted foreign-owned intermediate holding companies to capital standards would discourage excessive financial leveraging.
FDIC and some market participants have noted that the elimination of the exemption enhances the equal treatment of U.S. and foreign-owned holding companies by requiring both types of companies to hold similar capital levels in the United States. Figure 1 compares the capital structure of U.S.- and foreign-owned holding companies. Federal bank regulators have been finalizing proposed rules to implement the various capital requirements under the Dodd-Frank Act. According to regulators, they expect to issue final rules in 2012 but did not provide a specific date. The act requires that the previously exempted holding companies comply with the new capital adequacy guidelines by July 2015. According to the Federal Reserve, it retains its supervisory authority to require any bank holding company to maintain higher levels of capital when necessary to ensure that its U.S. activities are operated in a safe and sound manner. This authority may be exercised as part of ongoing bank supervision or through the bank application process. We describe the different ways in which the exempted companies can satisfy the new capital requirements later in this report. In addition to eliminating the capital exemption for certain foreign-owned intermediate holding companies, the Dodd-Frank Act requires that bank and thrift holding companies—domestic or foreign—meet minimum risk-based capital and leverage requirements that are not less than those that apply to insured depository institutions. The existing minimum capital requirements (general risk-based capital guidelines) for insured depository institutions are largely based on Basel I (see fig. 1). Certain institutions—the largest internationally active holding companies and insured depository institutions—are subject to the U.S. implementation of the advanced approaches in the Basel II framework (advanced approaches capital guidelines).
These large internationally active institutions are required to use their internal models to determine their risk-based capital levels, but under the Dodd-Frank Act they generally cannot hold less capital than would be required under the general risk-based capital guidelines for insured depository institutions. These institutions will be required to calculate their capital under both the general risk-based capital guidelines and the advanced approaches capital guidelines. (Risk-Based Capital Standards: Advanced Capital Adequacy Framework—Basel II; Establishment of a Risk-Based Capital Floor, 76 Fed. Reg. 37,620 (June 28, 2011).) It is expected that the Federal Reserve will address in 2012 certain items related to the tier 1 capital of bank holding companies. Finally, the Dodd-Frank Act also made changes that restricted the types of capital instruments that can be included in tier 1. Prior to the Act, the general risk-based capital guidelines for bank holding companies allowed such institutions to include hybrid debt and equity instruments in tier 1 capital whereas such instruments did not count in the tier 1 capital of insured depository institutions. Insured depository institution regulators (Federal Reserve, FDIC, and OCC) determined that such instruments did not have the ability to absorb losses as effectively as other forms of tier 1 capital. The specific requirements for the exclusion of hybrid debt or equity instruments from tier 1 capital vary according to the asset size and nature of the holding company. The elimination of the Federal Reserve’s capital exemption for foreign-owned intermediate holding companies likely will result in exempt holding companies restructuring or taking other actions, but the overall effects of this change on competition among bank holding companies and cost and availability of credit are likely to be small for various reasons.
First, our analysis of loan markets suggests that eliminating the exemption likely would have a limited effect on the price and quantity of credit available because the four banks most affected have relatively small shares of relatively competitive U.S. loan markets. Second, our review of the academic literature and our econometric analysis suggest that changes in capital rules that could affect certain foreign-owned intermediate holding companies would have a limited effect on loan volumes, and the increase in the cost of credit likely will add minimally to the cumulative cost of new financial regulations. Foreign parent banks may take a variety of actions, including restructuring, to comply with the new requirements, although most are waiting for final rules on capital requirements and other Dodd-Frank Act–related provisions before making a decision. To date, banking and other financial regulators have not issued final rules implementing many of the Dodd-Frank Act requirements. Foreign bank officials we interviewed told us that they needed a better understanding of all the new regulatory provisions in the Dodd-Frank Act before deciding what action to take. Most of these bank officials told us they have been monitoring how regulators are implementing certain Dodd-Frank Act provisions, and the final rules likely will have a great effect on their decisions. These provisions include the designation and orderly liquidation of systemically important financial institutions (SIFI) and a prohibition on proprietary trading. One foreign bank official told us that implementation of these provisions could have a major impact on her bank’s U.S. operations. Additionally, questions about how the new Basel III accord and other global capital rules will be implemented and how they will interact with U.S. banking regulations have added to foreign banks’ uncertainty about planning for compliance with the Dodd-Frank Act.
For example, in November 2011, the Basel Committee introduced a framework for designating global SIFIs, which would be required to hold additional capital to absorb losses to account for the greater risks that they pose to the financial system. Foreign bank officials we interviewed stated that it is too early to tell how new global requirements will interact with U.S. requirements under the Dodd-Frank Act. (On November 4, 2011, the Financial Stability Board, which is responsible for coordinating and promoting the implementation of international financial standards (such as the Basel III accord), designated 29 financial institutions as global SIFIs. See http://www.financialstabilityboard.org/about/mandate.htm.) As discussed, the Dodd-Frank Act’s elimination of the capital exemption affects the holding companies that relied on it. These exempt holding companies and their foreign parent banks can comply in several ways. First, foreign parents could issue securities (debt or equity) and inject the capital as equity into the intermediate holding companies. Second, they could change the mix of risky assets they hold. For example, banks must hold more capital against certain assets in their portfolio that are considered higher-risk. The exempt holding companies could sell off these assets and acquire higher-quality or less-risky assets. Third, they could pass down profits or retain earnings from foreign parents to U.S. holding companies. Fourth, foreign parents could restructure their U.S. operations by removing any activities not considered banking activities from the exempt holding companies. Finally, the foreign parent banks could close the exempt holding companies and leave the U.S. banking market. One foreign parent bank restructured its exempt holding company by deregistering it in the fall of 2010. Prior to restructuring, the exempt holding company had a bank subsidiary, a broker-dealer subsidiary, and several other subsidiaries.
The bank accounted for a small percentage of the exempt holding company’s consolidated assets and revenues, but the holding company would be subject to the new capital requirement because it was supervised as a bank holding company by the Federal Reserve. After the restructuring, the small bank became a subsidiary of one bank holding company, while the broker-dealer and the other nonbank entities became subsidiaries of a different holding company that is not a bank holding company and therefore not subject to bank holding capital requirements. The foreign bank stated that restructuring would better align both foreign parent bank and U.S. bank holding company with new capital requirements. How the four foreign parent banks with exempt holding companies choose to comply will vary. For example, officials from one exempt holding company told us that the foreign parent bank might inject several billions of dollars in common equity into the intermediate U.S. holding company. Officials from a second exempt holding company told us they were considering a combination of actions, including recapitalizing its holding company by retaining earnings, reducing the risky assets against which it must hold capital, and potentially restructuring the holding company. Officials from another exempt holding company said that it would review business activities under the holding company to reduce risky assets that would require holding higher amounts of capital. Finally, the fourth exempt holding company stated in its annual report to SEC that the holding company might restructure, increase its capital, or both. Given the size of the market and the holding companies affected, elimination of the capital exemption for foreign-owned holding companies under the Dodd-Frank Act likely will have limited effects on the overall competitive environment and the cost and availability of credit to borrowers. Our analysis assesses the impact of the four exempt holding companies exiting the U.S. 
banking market or raising additional capital to meet regulatory standards. The number of exempt holding companies and their shares of most national loan markets are small. As of December 31, 2010, four exempt holding companies relied on the Federal Reserve’s capital exemption. These exempt holding companies accounted for about 3.1 percent of the loans on the balance sheets of all bank holding companies in the United States (see table 3). Therefore, any actions they may take to respond to the elimination of the capital exemption likely will have a small effect on the overall credit market. Exempt holding companies accounted for varying amounts of different types of loans. In 2010, they accounted for less than 5 percent each of the construction and land loans, residential real estate loans, commercial real estate loans, commercial and industrial loans, consumer loans, and leases on the balance sheets of bank holding companies in the United States. However, they accounted for more than 10 percent each of agricultural real estate loans and agricultural production loans. Although exempt holding companies and their foreign parent banks can take a variety of approaches to comply with the new capital rules, the effects of those approaches on credit markets—overall or in specific segments— likely will be small because of the relatively small share of the market that exempt holding companies hold. U.S. credit markets likely would remain unconcentrated even if exempt holding companies exited the market and sold their loans to other bank holding companies. To assess the impact of eliminating the Federal Reserve’s capital exemption on competition among bank holding companies, we calculated the HHI, a key statistical indicator used to assess the market concentration and the potential for firms to exercise market power. 
As figure 2 shows, the HHI for the overall loan market for 2010 is well below 1,500 (the threshold for moderate concentration), as are the HHIs for the 13 specific loan markets we analyzed. Because these loan markets appear to be unconcentrated, bank holding companies in these markets likely have little ability to exercise market power by raising prices, reducing the quantity of credit available, diminishing innovation, or otherwise harming customers as a result of diminished competitive constraints or incentives, at least at the national level. As we discuss later, to the extent that markets are segmented by regions, or small businesses are limited in their ability to access credit, these results may not hold for all customers. Faced with the elimination of the Federal Reserve’s capital exemption and new minimum capital requirements under the Dodd-Frank Act, foreign banks with exempt holding companies could choose to divest their banks and exit the U.S. banking market. To estimate the effect of this particular response on loan market concentration, we estimated the change in concentration in loan markets under two alternative scenarios in which all four of the exempt holding companies cease making loans and sell their portfolios to other bank holding companies. In the first scenario, the assets of exempt holding companies are acquired by remaining bank holding companies in proportion to their market share. In the second scenario, the assets of exempt holding companies are acquired by the largest bank holding company remaining in the loan market. Since not all exempt holding companies are likely to exit the U.S. market, these scenarios provide estimates of the effect of the elimination of the Federal Reserve’s capital exemption on market concentration in the most extreme cases.
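The two divestiture scenarios can be sketched in a few lines of Python. The firm names and market shares below are hypothetical placeholders, not the actual 2010 shares; the sketch only illustrates how the HHI is recomputed when an exiting firm's loans are redistributed pro rata (scenario 1) or absorbed by the largest remaining firm (scenario 2).

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

def exit_proportional(shares, exiting):
    """Scenario 1: exiting firms' loans are absorbed by the remaining firms
    in proportion to their existing market shares."""
    remaining = {f: s for f, s in shares.items() if f not in exiting}
    base = sum(remaining.values())
    # Pro-rata redistribution rescales each remaining share so shares sum to 100.
    return {f: s * 100.0 / base for f, s in remaining.items()}

def exit_to_largest(shares, exiting):
    """Scenario 2: the largest remaining firm acquires all loans of the
    exiting firms."""
    remaining = {f: s for f, s in shares.items() if f not in exiting}
    freed = sum(shares[f] for f in exiting)
    largest = max(remaining, key=remaining.get)
    remaining[largest] += freed
    return remaining

# Hypothetical market: firm X plays the role of an exiting exempt company.
shares = {"A": 40.0, "B": 30.0, "C": 20.0, "X": 10.0}
scenario1 = exit_proportional(shares, {"X"})
scenario2 = exit_to_largest(shares, {"X"})
```

Concentrating the freed share in the largest firm always yields an HHI at least as high as spreading it pro rata, which is why scenario 2 serves as the more extreme bound.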
Estimated changes in the HHIs for the overall loan market in these alternative scenarios indicate that the overall loan market is unlikely to become concentrated even if all exempt holding companies exited the U.S. market. As figure 2 shows, the overall loan market remains unconcentrated in both scenarios, suggesting that the remaining bank holding companies still would not have sufficient potential to use market power to increase loan prices above competitive levels or reduce the quantity of loans available to borrowers. Similar results were obtained when we applied the alternative scenarios to various segments of the credit market. The total capital that the four exempt holding companies would need to raise to meet the same capital standards as their domestic counterparts is small relative to the total capital in the U.S. banking sector, thus limiting the effect on the cost and availability of credit. Of the four exempt holding companies remaining at the end of 2010, three have indicated they might undertake actions to comply with the minimum capital standards. As table 4 shows, to be considered as meeting the minimum capital requirements under the Dodd-Frank Act, the three exempt holding companies collectively would need $3.2 billion in additional capital, only $530 million of which would need to be in the form of tier 1 common equity to meet the leverage ratio requirement. This amount is less than 0.21 percent of the approximate $1.5 trillion in total equity outstanding for the U.S. banking sector. Two of the exempt holding companies have sufficient tier 1 capital and would be able to meet the total capital requirement by raising cheaper supplementary capital. If the exempt holding companies decided to exceed the minimum requirements and meet the equivalent of the well-capitalized requirements for banks and thrifts, the difference, $6.6 billion, would be less than 0.44 percent of the total equity outstanding. 
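The comparison of capital deficits with sector-wide equity reduces to ratio arithmetic. A minimal sketch, using the rounded dollar figures cited above (so the resulting percentages are approximate):

```python
# Approximate total equity outstanding for the U.S. banking sector, in
# $billions, as cited in the text ("approximately $1.5 trillion").
SECTOR_EQUITY_BN = 1500.0

def deficit_share_pct(deficit_bn, sector_equity_bn=SECTOR_EQUITY_BN):
    """A capital deficit expressed as a percentage of sector-wide equity."""
    return 100.0 * deficit_bn / sector_equity_bn

# $3.2 billion gap to the Dodd-Frank minimums; $6.6 billion gap to the
# well-capitalized standard. Both are small relative to sector equity.
minimum_gap_pct = deficit_share_pct(3.2)           # roughly 0.2 percent
well_capitalized_gap_pct = deficit_share_pct(6.6)  # roughly 0.44 percent
```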
Although this is a sizable capital deficit at the individual holding company level, it would represent a small shock to the aggregate U.S. banking sector. The remaining exempt holding company (company 4, in table 4) would be significantly below the new minimum capital requirements, with a capital shortfall of over $21.5 billion. However, domestic loans make up 11 percent of its total assets, while its broker-dealer operations are much larger. Therefore, maintaining a holding company designation, which creates a significant capital requirement on its entire asset pool, appears unlikely. As discussed earlier, the company has stated that it has been considering a variety of options, including restructuring. A restructuring may reduce the consolidated capital requirements applied to the foreign holding company and thus mitigate the need to raise capital to meet the new minimum capital requirements. See appendix II for further discussion of the effects of reducing assets on the availability of credit. The econometric model we developed relates the responses of lending variables to shocks to bank capital. The methodology does not assume any particular manner of adjustment by the holding companies but focuses on the ultimate impact on loan volumes and spreads. Although the model indicates that stronger capital requirements negatively affect lending activity, the impacts at the aggregate level are small. We evaluated the impact of the new requirements using two scenarios in which exempt holding companies experienced a capital deficit when compared with (1) the minimum capital requirements under the Dodd-Frank Act or (2) the well-capitalized standard that applies to banks and thrifts. Specifically, our model suggests the elimination of the capital exemption would lead aggregate loan volumes to decline by roughly 0.2 percent even if the affected exempt holding companies desired to meet the equivalent of the well-capitalized standard (see table 5).
If the affected banks desired to meet the minimum capital requirements under the Dodd-Frank Act, loan volumes would decline by less than 0.1 percent. Because the exempt holding companies would face capital deficits, the impact on the affected banks could be significant and would vary with the degree of undercapitalization. For example, loan growth would decline by 5.0 percentage points at company 1, 6.6 percentage points at company 2, and 7.9 percentage points at company 3 if the targeted total capital ratio was 10 percent under the well-capitalized standard, and total loan volumes would fall by $14.2 billion, or 0.2 percent of total loans for the banking sector. If the affected banks’ targeted total capital ratio was 8 percent (that is, the minimum capital requirement), our model suggests total loan growth at the three banks would decline by $6.8 billion, or 0.09 percent of total loans for the banking sector. However, these estimates may overstate the impact on aggregate loan volumes because we assume no transition period for adjusting to the higher capital requirements and that other banks do not immediately replace the decline in loan volumes at the affected institutions. Because the capital exemption affects only a few institutions operating in highly competitive loan markets, the impact on the cost of credit, although uncertain, is likely to be small. Our model suggests that a capital shock equivalent to that implied by the elimination of the capital exemption (small at the aggregate level) would lead to an industrywide increase in lending spreads of a little over 1 basis point (0.01 percentage points). If the exempt companies were measured against the minimum capital requirements, the impact on lending spreads would be less than 1 basis point. However, because the elimination would not result in a general shock across all banks, whether any impact on lending rates would be felt at the aggregate level is unclear.
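The aggregation from per-bank declines in loan growth to a sector-wide dollar figure can be illustrated as follows. Only the percentage-point declines come from the model results above; the per-company loan volumes and the sector total are hypothetical placeholders, since the report does not disclose them.

```python
# Percentage-point declines in loan growth at the three affected companies
# under the 10 percent (well-capitalized) target, from the model results
# above. The loan volumes and the sector total below are HYPOTHETICAL.
decline_pp = {"company 1": 5.0, "company 2": 6.6, "company 3": 7.9}
loans_bn = {"company 1": 120.0, "company 2": 80.0, "company 3": 60.0}
SECTOR_LOANS_BN = 7100.0  # hypothetical sector total, $billions

def total_decline_bn(loans, declines):
    """Dollar decline in loans implied by each bank's drop in loan growth."""
    return sum(loans[bank] * declines[bank] / 100.0 for bank in loans)

drop_bn = total_decline_bn(loans_bn, decline_pp)
drop_share_pct = 100.0 * drop_bn / SECTOR_LOANS_BN
```

With these placeholder volumes, a multibillion-dollar decline at the three companies still translates into a fraction of a percent of sector-wide lending, which is the pattern the model results describe.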
The competitive nature of loan markets makes it difficult for a bank experiencing a firm-specific capital shock to pass the higher cost of holding more capital on to borrowers in the form of higher loan rates. Because the loan markets are not highly concentrated and are competitive (as discussed earlier), the affected exempt holding companies likely would lose business to other banks if they chose to increase loan rates significantly. Some studies have found evidence of a relationship between higher capital holdings and market share during and following banking crises. To avoid losing business to well-capitalized institutions, the affected holding companies likely would reduce the amount of risky assets to some extent or undertake other actions rather than attempting to pass the full cost of holding additional capital to select customers. Appendix II contains more information on our analysis of these types of scenarios. In general, our results for loan volumes and the cost and availability of credit should be interpreted with caution because of the methodological and other limitations associated with our approach. For example, our estimates have wide confidence intervals, suggesting considerable uncertainty in the results (see app. I for limitations). As such, considering our results in the context of a wider body of empirical literature is useful. Table 5 also includes the average impact on loan volumes and lending rates based on other studies combined with our calculation of the capital deficit stemming from the elimination of the capital exemption. The results from our model, although larger for both loan growth and lending spreads, are consistent with the average we calculated from a number of empirical studies examining the relationship between bank capital and lending activity. These studies represent a variety of methodologies, each with its own limitations.
Nevertheless, even the largest estimate we identified in the literature still would imply a relatively small impact of the exemption on credit markets. Particular segments of the market may be affected more than others. For example, customers in agricultural real estate and agricultural production loan markets may experience impacts larger than those suggested by the aggregate analysis. Similarly, two of the exempt holding companies have a significant presence in the western states, while another has a significant presence on the East Coast. While the impact on the price and quantity of credit available may vary across regions, modeling limitations restrict our ability to estimate potential regional differences. Such regional impacts should be mitigated to a significant extent by the national nature of many loan markets. This analysis becomes much more complicated and uncertain once consideration is given to the impact of the various provisions of the Dodd-Frank Act and Basel III, which may result in a large number of institutions looking to replace and raise capital if banks seek to exceed the new regulatory minimums by the same margin they exceed them now. However, our results indicate that the elimination of the capital exemption would add minimally, if at all, to the cumulative economic impacts of these regulations. Market participants expressed uncertainty about how changes in capital requirements might affect the competitiveness of U.S. banks operating abroad, partly because the international regulatory landscape remains unsettled. The largest internationally active U.S. banks derive a significant portion of their revenues from their operations abroad and are subject to multiple regulatory regimes. Regulatory capital requirements have become more stringent globally with the goal of reducing bank failures and creating a more stable financial system. 
However, bank officials we contacted were uncertain how changes in capital requirements might affect their competitiveness abroad and were monitoring U.S. and international reforms closely to assess any impact on their cost of capital, lending ability, and business competitiveness. They were concerned that fragmented or conflicting regulations might restrict banks’ ability to use capital efficiently. Some U.S. banks believed that they might be at a competitive disadvantage to the extent that U.S. banks would be subject to higher capital requirements than banks from other countries. Finally, as major regulatory changes stemming from the Dodd-Frank Act, Basel III, and country-specific reforms are finalized and implemented, many U.S. bank officials we interviewed expressed concerns about the added costs of compliance with multiple regulatory regimes. The largest internationally active U.S. banks maintained a strong presence in major foreign markets, where they derived close to one-third of their revenues on average in 2010 (see fig. 3). One of the largest internationally active U.S. banks derived close to 60 percent of its total revenues from foreign operations in 2010. In the last 3 years, revenues from foreign operations, although varying by bank and geographical region, have decreased slightly on a percentage basis. Generally, the largest internationally active U.S. banks divide their operations into the following four geographical regions: (1) North America; (2) Europe, the Middle East, and Africa; (3) Asia or Asia/Pacific; and (4) Latin America or Latin America/Caribbean. As figure 4 shows, Europe, the Middle East, and Africa provided the biggest share (about 50 percent) of all foreign revenues. Revenues from the Asian and Pacific countries accounted for about 30 percent of foreign revenues compared with approximately 19 percent from Latin America. The large internationally active U.S. 
banks compete with large foreign-based banks and other internationally active U.S. banks across various product and geographic markets. Internationally active U.S. banks have varying lines of business. Although some focus on wholesale activities, one (Citigroup) is engaged in retail banking activities in more than 100 countries. In wholesale markets, some U.S. banks, like JPMorgan Chase and Bank of America, are active in making commercial and industrial loans, while others, like Goldman Sachs and Morgan Stanley, hold a larger percentage of their assets as trading assets and engage in market making and trading in securities and derivative instruments. One of the largest internationally active U.S. banks, Bank of New York Mellon, primarily provides custody and asset management services and securities servicing. In this capacity, it competes with the largest U.S. banks and foreign-based banks that provide trust as well as banking and brokerage services to high-net-worth clients. In the wake of the 2007-2009 financial crisis, international jurisdictions have pursued more stringent capital requirements, and large, internationally active U.S. banks will be subject to the regulatory requirements of various foreign regulators. For example, in Europe, large internationally active U.S. banks will be subject to major new regulations, including those created by the Basel Committee and the European Commission. The G-20 countries, which include the United States, adopted the Basel III agreements in November 2010, and the individual countries are responsible for incorporating the new agreements into national laws and regulations. On July 20, 2011, the European Commission published a legislative proposal known generally as Capital Requirement Directive 4 (CRD4) to implement the proposals of Basel III into European Union law. The commission staff we spoke with indicated that there are many legislative initiatives at the European Union level that could affect U.S.
internationally active banks operating in Europe, but some key ones, in addition to CRD4, are the Capital Requirement Directive 3 (CRD3) and the Crisis Management Directive. CRD3 puts in place stricter capital requirements, some of which became effective at the end of 2011. Among other things, CRD3 requires banks to implement remuneration policies that are consistent with their long-term financial results and do not encourage excessive risk taking. For example, at least 40 percent of bonuses must be deferred 3-5 years and at least 50 percent must consist of equity or equity-like instruments or long-dated instruments that are convertible into tier 1 capital during emergency situations. The Crisis Management Directive will set out the different tools for the resolution of bank failures in Europe. It principally aims to provide the authorities with tools and powers to intervene in banks at a sufficiently early stage and is due to be adopted formally in November 2011. This resolution authority also will apply to the European subsidiaries of U.S. banks. In addition to the European Union regulatory initiatives, individual countries plan to implement additional measures. For example, the United Kingdom (UK) independently introduced a permanent levy on banks’ balance sheets on January 1, 2011, to encourage banks to move to less-risky funding profiles, according to the UK’s Her Majesty’s Treasury. The levy applies to some UK banks, building societies, and UK operations of foreign banks with more than £20 billion in liabilities. The rate for 2011 will be 0.05 percent, and it will rise to 0.075 percent in 2012. In June 2010, France and Germany agreed to similar measures and have been enacting them. U.S. financial regulators and market participants have expressed concern about the extent to which the capital requirements and other financial regulations resulting from Basel III could be harmonized across national jurisdictions and how consistently they would be enforced.
For example, U.S. regulators noted that the supervisory standard for how banks measure risk-weighted assets (the basis for regulatory capital ratios) under Basel III could be more transparent. In June 2011, the FDIC Chairman stated that European banks continued to in effect set their own capital requirements using banks’ internal risk estimates, with risk-based capital determined by bank management assumptions, unconstrained by any objective hard limits and with no leverage constraints. Foreign regulators also stated that international differences in the calculation of risk-weighted assets could result from assigning inconsistent risk weights to the same types of assets and could undermine Basel III. Some foreign banks we interviewed told us that comparing risk-weighted assets across banks was challenging because of differing reporting, legal, and accounting frameworks. For example, comparisons of institutions from the United States with those from the European Union are difficult because U.S. banks still are transitioning from Basel I to Basel II and do not publicly report Basel II risk-based capital requirements. Conversely, banks in the European Union are operating under Basel II and are publicly reporting their risk-based capital ratios. Additionally, U.S. regulators noted the potential for adverse competitive effects on banks with overseas operations from a Basel III provision for reciprocal countercyclical buffers. Under this provision, if a country’s regulators make an “excessive credit growth” declaration (that is, identify a “bubble” when excess aggregate credit growth is judged to be associated with a buildup of systemwide risk), then all banks operating in that country would have to meet higher capital requirements. Regulators in other countries also could require banks operating in their countries to hold proportionately higher capital. For example, a U.S.
bank operating in multiple countries would be subject to the cumulative effect of each country’s additional requirements in times of excess aggregate credit growth, and U.S. banking regulators would have no say over these declarations. The Basel Committee has expressed concern that the financial regulatory framework did not provide adequate incentives for firms to mitigate their procyclical use of leverage (debt). That is, firms tended to increase leverage in strong markets and decrease it when market conditions deteriorated, amplifying business cycle fluctuations and exacerbating financial instability. According to regulators, many financial institutions did not increase regulatory capital and other loss-absorbing buffers during the market upswing, when it would have been easier and less costly to do so. The Basel III countercyclical buffers are intended to help address concerns about procyclicality. However, other factors may help ease concerns about inconsistent implementation of financial regulations. U.S. regulatory officials have observed that a high level of coordination among international regulators would help ensure that banks hold significantly more capital, that the capital will truly be able to absorb losses of a magnitude associated with the crisis without recourse to taxpayer support, and that the level and definition of capital will be uniform across borders. In addition, the quantity and quality of capital held by the largest internationally active U.S. and foreign banks have increased significantly in the past few years. Specifically, among the 50 largest global banks, tier 1 capital adequacy ratios have climbed from 8.1 percent in 2007 to 11.3 percent at the end of 2010. Since the end of 2008, the 19 largest bank holding companies in the United States that were subjected to stress tests increased common equity by more than $300 billion. Furthermore, European banks have raised $121 billion in capital since Europe’s June 2010 stress test exercise.
In addition to the specific concerns related to the implementation of Basel III, both U.S. and foreign bank officials we interviewed told us that they were concerned that fragmented or conflicting regulations in the United States and other jurisdictions might restrict banks’ ability to use capital efficiently. According to U.S. and foreign bank officials, inconsistent capital requirements among multiple regulatory regimes may restrict banks’ ability to move capital across jurisdictions. For example, according to regulators and U.S. banks we interviewed, since the 2007-2009 financial crisis, foreign regulators have become more sensitive to how much capital foreign entities in their jurisdiction hold. Some foreign bank regulators have required banks to “ring fence” capital on the balance sheet as a way to protect and hold dedicated capital for that bank subsidiary in their legal jurisdiction in case of financial difficulties or bankruptcy. Foreign bank regulators were concerned that the parent company would reallocate capital in their jurisdiction to fund the parent company located outside of their jurisdiction, potentially resulting in the subsidiary being undercapitalized. According to some banks we interviewed, ring fencing would be costly for banks operating abroad, as it restricts capital and requires systems for keeping operations segregated across countries. In another example, U.S. bank officials noted that recent reforms have changed what types of capital instruments can be counted as tier 1 capital. As a result, U.S. banks may not have access to tax-efficient tier 1 instruments that foreign bank competitors can issue because of differences in national tax policies. Specifically, prior to the recent changes under the Dodd-Frank Act and Basel III, U.S. bank holding companies could issue tier 1 trust-preferred securities with dividend payments that were tax-deductible.
With the exclusion of trust-preferred securities from tier 1, large internationally active banks likely will not have any tax-efficient alternative in the United States, while foreign banks in certain jurisdictions will continue to have access to certain capital instruments, such as noncumulative perpetual preferred shares, that confer some tax benefits because of local tax laws. As major regulatory changes stemming from the Dodd-Frank Act, Basel III, and country-specific reforms are finalized and implemented, many U.S. and foreign bank officials we interviewed expressed concerns about the added costs of compliance with multiple regulatory regimes. Because these regulations have not been implemented yet, how they may affect the operations of U.S. banks abroad is not known. For example, according to U.S. bank officials, they cannot yet estimate the costs associated with implementing and complying with the new risk-based capital and leverage requirements under Basel III. Moreover, implementation of key provisions of the Dodd-Frank Act and the new Basel III capital and liquidity requirements will be particularly challenging because of the number of related provisions that must be considered together. According to testimony by the Acting Comptroller of the Currency, regulators have been trying to understand not only how individual provisions will affect the international competitiveness of U.S. firms, but also how the interactions of the various requirements of the Dodd-Frank Act and Basel III will affect U.S. firms domestically. According to testimony from an industry expert, areas other than the bank capital provisions of the Dodd-Frank Act can affect costs (including compliance costs and competition): prohibition of proprietary trading by banks, exclusion of the use of external credit ratings for determining risk weighting, regulations governing derivatives, the designation and regulation of SIFIs, and resolution of insolvent financial firms.
For example, bank officials we interviewed told us that the Dodd-Frank Act’s exclusion of the use of external credit ratings for determining risk weighting will create additional costs for U.S. banks. The banks would have to develop their own methods for performing these calculations, potentially putting them at a competitive disadvantage (including higher cost) internationally because European banks could still use such credit ratings, which are widely understood and used by investors. U.S. bank officials also noted that they would incur increased administrative costs under multiple regulatory regimes, as they would have to implement and comply with multiple capital ratios, including those for U.S. and foreign jurisdictions. Many U.S. banks GAO interviewed expressed concerns about the added costs of compliance with multiple regulatory regimes and the impact of the Dodd-Frank Act on the global competitiveness of U.S. banks, but these concerns would need to be considered against the potential benefits of a safer and sounder financial system. We provided a draft of this report to the FDIC, the Federal Reserve, and OCC for their review and comment. Each of the federal banking regulators provided technical comments that were incorporated in the report, as appropriate. We are sending copies of this report to appropriate congressional committees, FDIC, the Federal Reserve, OCC, the Department of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2642 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
The objectives of the report were to examine (1) the regulation of foreign-owned intermediate holding companies in the United States, (2) the potential effects of changes in U.S. capital requirements on foreign-owned intermediate holding companies, and (3) banks’ views on the potential effects of changes in U.S. capital requirements on U.S.-owned banks operating abroad. This report focuses on intermediate holding companies owned by a foreign parent bank (that is, a foreign banking organization) and the largest internationally active U.S. banks based on their level of foreign business activity. The foreign parent bank may have its U.S. subsidiaries owned or controlled by an intermediate bank or thrift holding company in the United States (the organization between the subsidiary bank and the foreign parent bank), primarily to take advantage of tax or regulatory benefits. Under this corporate structure, the intermediate holding company represents the foreign parent bank’s top-tier legal entity in the United States and is regulated by the Board of Governors of the Federal Reserve System (Federal Reserve). To describe how foreign holding companies are regulated and supervised in the United States, we reviewed relevant federal and state banking laws and regulations (such as the International Banking Act of 1978, the Foreign Bank Supervision Enhancement Act of 1991, section 171 of the Dodd-Frank Wall Street Reform and Consumer Protection Act, and New York state banking law). We reviewed regulatory documents such as the Federal Reserve’s Consolidated Financial Statements for Bank Holding Companies (FR Y-9C). Further, we reviewed supervisory guidance such as Supervision and Regulation Letter 01-1 (the capital exemption), the final rule that establishes a floor for the risk-based capital requirements applicable to the largest internationally active banks, relevant published reports, testimonies, speeches, articles, and prior GAO reports.
We interviewed supervisory officials at the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the New York State Banking Department, as well as officials at the Department of the Treasury, the European Commission (a European Union entity that, among other things, through capital directives sets out general capital rules to be transferred into national law by each of the 27 European Union countries as they deem appropriate), foreign and U.S. bank holding companies, a foreign trade association, credit rating agencies, and law firms. In addition, we received written responses to questions from the European Banking Authority (the European banking regulator) and attended a conference on the implications of new capital rules for foreign banks. To assess the potential effects of changes in capital requirements for foreign-owned intermediate holding companies, we reviewed section 171 of the Dodd-Frank Act and proposed and final capital rules for foreign-owned intermediate holding companies, along with related comment letters. We reviewed various proposed and final international capital rules. We reviewed Securities and Exchange Commission (SEC) regulatory filings of foreign bank holding companies. We interviewed foreign bank regulators, foreign and U.S. bank holding companies, credit rating agencies, and industry experts on the effects of the new capital requirements on foreign banks operating in the United States. We also reviewed academic studies on the impact of higher capital requirements on the cost of capital and lending and obtained the views of foreign and domestic banks, credit rating agencies, and industry experts. To assess the extent to which credit markets are likely to be affected by removal of the capital exemption, we calculated market shares for each group of bank holding companies in loan markets as of December 31, 2010.
We obtained balance sheet data for bank holding companies as of December 31, 2010, from SNL Financial, which reports data for bank holding companies based on forms FR Y-9C submitted to the Federal Reserve. In general, only top-tier bank holding companies with consolidated assets of $500 million or more are required to submit FR Y-9Cs. To avoid double-counting bank holding companies that are subsidiaries of other bank holding companies, we obtained lists of second-tier bank holding companies as of December 31, 2010, from the Federal Reserve's National Information Center website and used these lists to drop any second-tier bank holding companies from our analysis. Our sample—our definition of the market—is thus the collection of top-tier bank holding companies with consolidated assets of $500 million or more that filed FR Y-9Cs with the Federal Reserve as of December 31, 2010. We obtained lists of all top-tier foreign-owned intermediate holding companies—both exempt and nonexempt—operating in the United States as of December 31, 2010, from the Federal Reserve. We used these lists to classify bank holding companies in our sample as one of three types: exempt foreign-owned intermediate holding companies, nonexempt foreign-owned intermediate holding companies, and U.S. bank holding companies. We calculated the percentage of various types of loans on the balance sheets of each group, including the following: total domestic loans and leases, nonresidential construction loans and all land development and other land loans, agricultural real estate loans, home equity lines of credit, first-lien residential mortgage loans, junior-lien residential mortgage loans, multifamily residential property loans, owner-occupied commercial real estate loans, nonowner-occupied commercial real estate loans, agricultural production loans, commercial and industrial loans, and leases.
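As an illustration of the group market-share calculation described above, the following Python sketch classifies holding companies into the three categories and computes each group's share of total domestic loans. The firm names and loan amounts are hypothetical, not figures from the report.

```python
# Hypothetical records: (holding company, group, domestic loans in $ billions).
# Groups mirror the report's three categories: exempt and nonexempt
# foreign-owned intermediate holding companies, and U.S. bank holding companies.
bhcs = [
    ("Alpha", "us", 500.0),
    ("Beta", "us", 300.0),
    ("Gamma", "exempt", 20.0),
    ("Delta", "nonexempt", 80.0),
    ("Epsilon", "exempt", 100.0),
]

def group_market_shares(records):
    """Each group's market share: group loan total as a percent of the market total."""
    market_total = sum(loans for _, _, loans in records)
    totals = {}
    for _, group, loans in records:
        totals[group] = totals.get(group, 0.0) + loans
    return {g: 100.0 * t / market_total for g, t in totals.items()}

shares = group_market_shares(bhcs)
# With these hypothetical balances: us 80%, exempt 12%, nonexempt 8%.
```

The same aggregation applies unchanged to each of the loan categories listed above; only the loan amounts fed in differ.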
We used amounts reported for domestic offices only so that our comparisons were consistent across foreign-owned intermediate holding companies and U.S. bank holding companies. A group's market share is the total dollar value of loans on the balance sheets of all bank holding companies in the group as a percentage of the total dollar value of loans on the balance sheets of all bank holding companies in the market. To assess the extent to which the price of credit and the quantity of credit available are likely to be affected by the removal of the capital exemption, we used the Herfindahl-Hirschman Index (HHI) to measure market concentration. The HHI is a key statistical indicator used to assess market concentration and the potential for firms to exercise market power. The HHI reflects the number of firms in the market and each firm's market share, and it is calculated by summing the squares of the market shares of each firm in the market. For example, a market consisting of four firms with market shares of 30 percent, 30 percent, 20 percent, and 20 percent has an HHI of 2,600 (900 + 900 + 400 + 400 = 2,600). The HHI ranges from 10,000 (if there is a single firm in the market) to a number approaching 0 (in the case of a perfectly competitive market). That is, higher values of the HHI indicate a more concentrated market. Department of Justice and Federal Trade Commission guidelines as of August 19, 2010, suggest that an HHI between 0 and 1,500 indicates that a market is not concentrated, an HHI between 1,500 and 2,500 indicates that a market is moderately concentrated, and an HHI greater than 2,500 indicates that a market is highly concentrated, although other factors also play a role in determining market concentration. We calculated the HHI for 2010 for each of the loan markets listed above.
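The HHI computation just described can be sketched in a few lines of Python; the four-firm example and the concentration bands below come directly from the text.

```python
def hhi(market_shares_percent):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in market_shares_percent)

def concentration_band(index):
    """DOJ/FTC guideline bands as of August 19, 2010 (one factor among several)."""
    if index < 1500:
        return "not concentrated"
    if index <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# The four-firm example from the text: shares of 30, 30, 20, and 20 percent.
example = hhi([30, 30, 20, 20])  # 900 + 900 + 400 + 400 = 2,600
print(example, concentration_band(example))  # 2600 highly concentrated
```

A single-firm market gives the maximum of 10,000 (100 squared), matching the range described above.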
Each bank holding company is a separate firm in the market, and its market share is equal to the dollar value of loans on its balance sheet as a percentage of the total dollar value of loans on the balance sheets of all the bank holding companies in the market. We also calculated the HHI for 2010 for each loan market in alternative scenarios in which exempt holding companies cease making loans and transfer the loans on their balance sheets to bank holding companies that remain in the market. In the first scenario, exempt foreign-owned intermediate holding companies' loans are distributed proportionally among remaining bank holding companies. In the second scenario, exempt foreign-owned intermediate holding companies' loans are acquired by the largest remaining bank holding company in the market. A limitation of defining the market as the collection of top-tier bank holding companies that filed FR Y-9Cs with the Federal Reserve is that we exclude other organizations that provide credit. For example, small bank holding companies—those with consolidated assets of less than $500 million—generally are not required to file form FR Y-9C. However, they do make loans. Other credit market participants include savings and loan holding companies, stand-alone banks, savings and loan associations, credit unions, and finance companies not owned by bank holding companies. Capital markets are another source of funds for some borrowers. As a result, our estimates of market shares are likely overstated. Furthermore, our estimates of market concentration may be either understated or overstated, depending on the number and market shares of other credit providers. Another limitation of our analysis is that we implicitly assume that all loan markets are national in scope; that is, that credit provided by a bank holding company is available to any potential borrower, regardless of geographic location.
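The two exit scenarios can be made concrete with a small Python sketch. The loan balances below are hypothetical, not the report's data. One useful observation: proportional redistribution leaves the remaining firms' relative shares unchanged, so the post-exit HHI in the first scenario equals the HHI of the remaining firms alone.

```python
def hhi(loans):
    """HHI from dollar loan balances: the sum of squared percentage shares."""
    total = sum(loans.values())
    return sum((100.0 * v / total) ** 2 for v in loans.values())

def proportional_exit(loans, exempt):
    """Scenario 1: exempt firms' loans are spread proportionally among the rest."""
    remaining = {f: v for f, v in loans.items() if f not in exempt}
    return hhi(remaining)  # proportional redistribution preserves relative shares

def largest_acquirer_exit(loans, exempt):
    """Scenario 2: the largest remaining firm absorbs all exempt firms' loans."""
    remaining = {f: v for f, v in loans.items() if f not in exempt}
    exempt_total = sum(loans[f] for f in exempt)
    biggest = max(remaining, key=remaining.get)
    remaining[biggest] += exempt_total
    return hhi(remaining)

# Hypothetical balances ($ billions); E1 and E2 stand in for exempt companies.
loans = {"A": 40.0, "B": 30.0, "C": 20.0, "E1": 6.0, "E2": 4.0}
baseline = hhi(loans)                                   # 2,952
scenario1 = proportional_exit(loans, {"E1", "E2"})      # about 3,580
scenario2 = largest_acquirer_exit(loans, {"E1", "E2"})  # 3,800
```

Both scenarios raise the HHI in this toy market (fewer firms), with the largest-acquirer scenario raising it more, which is why the report treats it as the more concentrating of the two.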
If loan markets are not national in scope, then our market share and market concentration estimates are unlikely to represent those that we would estimate for a specific subnational region, such as a state or metropolitan area. The market share and market concentration estimates for some regions likely would be greater than our national estimates, while others likely would be lower. For this analysis, we relied on the Federal Reserve's FR Y-9C data that we obtained through SNL Financial and on information from the Federal Reserve on foreign banking organizations' top-tier intermediate holding companies in the United States. We conducted a reliability assessment of these data by reviewing factors such as timeliness, accuracy, and completeness. We also conducted electronic testing to identify missing and out-of-range data. Where applicable, we contacted officials from the Federal Reserve to address questions about the reliability of the information. We found the data to be sufficiently reliable for our purposes. To estimate the effect of capital ratios on the cost and availability of credit, we estimated a modified version of a vector autoregression (VAR) model commonly used in the macroeconomics and monetary literature. Our model closely follows Berrospide and Edge (2010) and Lown and Morgan (2006). The VAR consists of eight variables. The core variables that represent the macroeconomy are (1) real gross domestic product (GDP) growth, (2) GDP price inflation, (3) the federal funds rate, and (4) commodity price index growth. As is pointed out in Lown and Morgan (2006), these four variables potentially make up a complete economy, with output, price, demand, and supply all represented.
We capture the banking sector with four variables: (1) loan volume growth, (2) changes in lending spreads (commercial and industrial loan rates relative to a benchmark), (3) lending standards as measured by the net fraction of loan officers at commercial banks reporting a tightening of credit standards for commercial and industrial loans in the Federal Reserve's Senior Loan Officer Opinion Survey, and (4) the aggregate capital-to-assets ratio for the commercial bank sector. The addition of the latter four variables allows us to investigate the dynamic interaction between banks and the macroeconomy. The data were assembled from Thomson-Reuters Datastream and the Federal Reserve. We have relied on these data in our past reports and consider them to be reliable for our purposes here. Using the estimated VAR system for the third quarter of 1990 to the second quarter of 2010, we trace the dynamic responses of loan volumes, lending spreads, and other macroeconomic variables to shocks to the bank capital ratio. As a result, we can obtain quantitative estimates of how bank capital "innovations" or "shocks" affect the cost and availability of credit. Our base results rely on impulse response functions using the following causal ordering of the variables: GDP, inflation, federal funds rate, commodity spot prices, loan volumes, capital ratio, loan spreads, and lending standards. However, our final estimates use the average of the outcomes for the two different orderings of the variables: (1) where the macro variables are given causal priority and (2) where the bank variables are given causal priority. The VAR model and the innovation accounting framework are laid out in greater detail in another GAO report. The VAR methodology, while containing some advantages over other modeling techniques, has particular limitations, and therefore the results should be interpreted with caution.
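The impulse-response mechanics behind this analysis can be illustrated with a deliberately simplified sketch. The two-variable coefficient matrix below is made up for illustration, not estimated from the report's data, and the shock is applied directly rather than through the Cholesky orthogonalization that implements the causal ordering described in the text.

```python
# Illustrative two-variable reduced-form VAR(1) in [loan growth, capital ratio].
# A is a hypothetical coefficient matrix chosen so that capital-ratio shocks
# feed into loan growth with a one-period lag and then decay.
A = [[0.5, 0.3],
     [0.0, 0.8]]

def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in m]

def impulse_responses(m, shock, horizons):
    """Trace each variable's response to a one-time shock: IRF_h = A^h @ shock."""
    responses, state = [], list(shock)
    for _ in range(horizons):
        responses.append(state)
        state = mat_vec(m, state)
    return responses

# Response of loan growth (element 0) to a one-unit capital-ratio shock.
irf = impulse_responses(A, [0.0, 1.0], horizons=8)
# irf[1][0] == 0.3: loan growth responds one period after the capital shock,
# and the response then decays as the shock dies out.
```

In the report's eight-variable system the same recursion is applied to estimated coefficients, which is how the quantitative effects of capital shocks on loan volumes and spreads are traced out.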
First, the methodology potentially overstates the quantitative effects of shocks on the economy and can be difficult to interpret. Second, because the technique relies on past data, it is subject to the criticism that past information may not be useful for gauging future responses due to policy changes. Third, to conduct meaningful assessments of the impacts of shocks to the system, causal priority is given to some variables over others. However, our results are not particularly sensitive to this ordering, although we do obtain smaller impacts of bank capital on lending activity with some alternative orderings. To minimize this limitation, our estimates are an average of a model where causal priority is given to the macroeconomic variables and a model where causal priority is given to the bank variables. Last, in our particular case the impulse response functions have wide confidence intervals, suggesting considerable uncertainty in the results. Despite these limitations, the VAR approach is considered to be a reasonable alternative to other types of models. However, it is prudent to evaluate our results in the context of the wider body of research on the effects of bank capital on lending activity. The studies we relied on for comparison are useful in that they represent a variety of different modeling techniques ranging from VAR and cross-sectional regression methodologies to more sophisticated dynamic stochastic general equilibrium (DSGE) modeling. None of these approaches is without limitations. For example, DSGE models, although among the best for conducting counterfactual experiments and easy to interpret, are difficult to estimate, and the techniques used to facilitate estimation can result in questionable results that are at odds with empirical observations. Nevertheless, by considering the body of evidence from different studies, we are able to provide some assessment of the reliability of our findings.
However, the studies discussed in the report are included solely for research purposes and our reference to them does not imply we find them definitive. To describe U.S. banks operating abroad and their services, major customers, and competitors, we used information obtained from interviews with some of the largest internationally active U.S. banks. We also analyzed audited financial statements in the annual reports for relevant companies. We selected the six largest internationally active U.S. banks based on their level of foreign business activity. To identify banks’ views on the potential risks from changes in capital requirements on U.S. banks operating abroad, we interviewed officials from the three U.S. bank holding companies that engaged in significant international operations. We also interviewed officials from the European Commission—a European Union entity that, among other things, through capital directives sets out general capital rules to be transferred into national law by each of the 27 European Union countries as they deem appropriate. We summarized relevant academic literature and regulatory studies and congressional testimonies on the potential effects on U.S. banks’ funding costs, product pricing, and lending activity abroad. We also obtained the views of federal banking officials from the Federal Reserve, FDIC, OCC, and OTS, and officials from the Department of the Treasury. We conducted this performance audit from December 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Bank holding companies can take different approaches to comply with the new capital requirement in the Dodd-Frank Wall Street Reform and Consumer Protection Act. From 2001 to 2010, the Board of Governors of the Federal Reserve System granted capital requirement exemptions to six foreign-owned intermediate holding companies provided that the companies satisfied certain conditions, including having well-capitalized foreign parent banks. As of the end of 2010, four foreign-owned intermediate holding companies continued to rely on a capital exemption from the Federal Reserve. The Dodd-Frank Act eliminated this exemption, and these exempt holding companies must now meet new capital requirements. Some of these exempt holding companies may choose to raise capital, while others may choose to deleverage by decreasing the risk-weighted assets on their balance sheets (or a combination thereof). Although predicting the responses of the exempt holding companies to the higher U.S. bank capital requirements is a complex proposition, this appendix illustrates the potential effect on the availability of credit if the three exempt holding companies respond by reducing their balance sheets. If the exempt holding companies chose to reduce their balance sheets to meet new capital regulations, we estimate that the decrease would be small relative to the aggregate assets for the U.S. banking sector. As table 6 illustrates, the three exempt holding companies would need to decrease their risk-weighted assets by amounts ranging from $12.2 billion to as much as $15.3 billion to meet the minimum capital requirements under the Dodd-Frank Act. Although this decrease is substantial at the individual holding company level, it is small as a percentage of the total risk-weighted assets of the U.S. banking sector (see table 6).
For example, although the exempt holding companies would have to reduce their balance sheets by 20 percent on average, the total decline in assets amounts to 0.44 percent of the $9.1 trillion in total risk-weighted assets for the aggregate U.S. banking sector. To meet the equivalent of the well-capitalized standards that apply to banks and thrifts, the exempt holding companies would need to reduce their risk-weighted assets by $65.8 billion, or roughly 0.7 percent of the total risk-weighted assets for the aggregate U.S. banking sector. This would require two of the exempt holding companies to decrease risk-weighted assets by roughly 38 percent and 34 percent, respectively. In addition to the contact listed above, Daniel Garcia-Diaz (Acting Director), Rachel DeMarcus, M'Baye Diagne, Lawrance Evans Jr., Colin Gray, Joe Hunter, Elizabeth Jimenez, Courtney LaFountain, Akiko Ohnuma, Marc Molino, Timothy Mooney, Patricia Moye, Michael Pahr, and Barbara Roesmann made key contributions to this report.
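The deleveraging arithmetic behind estimates of this kind can be sketched as follows. If a company holds capital C against risk-weighted assets RWA and raises no new capital, it must shrink RWA to C / r to meet a minimum ratio r. The capital and asset figures below are hypothetical, not those of the exempt holding companies in table 6.

```python
def required_rwa_reduction(capital, rwa, min_ratio):
    """RWA must fall to capital / min_ratio if no new capital is raised."""
    target_rwa = capital / min_ratio
    return max(0.0, rwa - target_rwa)

# Hypothetical: $4.8 billion of capital against $80 billion of risk-weighted
# assets, measured against an assumed 8 percent minimum ratio.
cut = required_rwa_reduction(capital=4.8, rwa=80.0, min_ratio=0.08)
pct_of_balance_sheet = 100.0 * cut / 80.0
# cut is about $20 billion, i.e., a 25 percent reduction in risk-weighted assets.
```

Raising the target ratio toward a well-capitalized standard increases the required reduction, which is why the $65.8 billion figure above exceeds the $12.2 billion to $15.3 billion range for the bare minimum.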
|
During the 2007-2009 financial crisis, many U.S. and international financial institutions lacked capital of sufficient quality and quantity to absorb substantial losses. In 2010, the Dodd-Frank Wall Street Reform and Consumer Protection Act (the Dodd-Frank Act) introduced new minimum capital requirements for bank and savings and loan (thrift) holding companies, including intermediate holding companies of foreign banks. Intermediate holding companies are the entities located between foreign parent banks and their U.S. subsidiary banks. These companies held about 9 percent of total U.S. bank holding company assets as of September 2011. The Dodd-Frank Act also required GAO to examine (1) regulation of foreign-owned intermediate holding companies in the United States, (2) potential effects of changes in U.S. capital requirements on foreign-owned intermediate holding companies, and (3) banks' views on the potential effects of changes in U.S. capital requirements on U.S. banks operating abroad. To conduct this work, GAO reviewed legal, regulatory, and academic documents; analyzed bank financial data; and interviewed regulatory and banking officials and market participants. GAO makes no recommendations in this report. GAO provided a draft to the federal banking regulators (Federal Reserve, Federal Deposit Insurance Corporation, and Office of the Comptroller of the Currency) for their review and comment. They provided technical comments that were incorporated, as appropriate. Foreign-owned intermediate holding companies can engage in the same activities as and generally are regulated similarly to their U.S. counterparts. The Board of Governors of the Federal Reserve System (Federal Reserve) oversees the regulation, supervision, and examination of foreign and U.S. bank and thrift holding companies.
As of the end of 2010, four qualifying foreign-owned intermediate holding companies (exempt holding companies) were relying on a capital exemption, which allowed them to operate with significantly lower capital than their U.S. peers. Federal Reserve officials noted that allowing capital to be held at the foreign parent bank (consolidated) level was consistent with its supervision of U.S. bank holding companies and met international standards for home-host supervision. The Dodd-Frank Act eliminated the capital exemption in order to enhance equal treatment of U.S.- and foreign-owned holding companies by requiring both types of companies to hold similar capital levels in the United States. As a result, these exempt holding companies must meet minimum capital standards that are not less than those applicable to Federal Deposit Insurance Corporation-insured depository institutions by July 2015. The four exempt holding companies have been considering various actions to comply with the new capital requirements, and the effects of eliminating the capital exemption on competition and credit cost and availability likely would be small. Specifically, these companies are considering raising capital, decreasing their holdings of risky assets, restructuring, or adopting a combination of these actions. GAO's analysis of loan markets suggests that the elimination of the capital exemption likely would have a limited effect on the price and quantity of credit available because the affected banks have relatively small shares of U.S. loan markets, which are competitive. These four companies accounted for about 3.1 percent of the loans on the balance sheets of all bank holding companies in the United States as of year-end 2010.
In addition, GAO's review of the academic literature and econometric analysis both suggest that changes in capital rules that affect the exempt companies would have a limited effect on loan volumes and the cost of credit and add minimally to the cumulative cost of new financial regulations. Although the impact on the price and quantity of credit available may vary across regions, modeling limitations restricted GAO's ability to identify regional differences. Market participants expressed uncertainty about how changes in capital requirements might affect the competitiveness of U.S. banks operating abroad, partly because international regulatory capital requirements have yet to be implemented. The largest internationally active U.S. banks derived about one-third of their 2010 revenues from operations abroad. They face a variety of domestic and foreign competitors and are subject to multiple regulatory regimes. Bank officials expressed uncertainty about how changes in capital requirements will affect their cost of capital, lending ability, and competitiveness. Furthermore, they were concerned that fragmented or conflicting regulations across national jurisdictions might restrict banks' ability to use capital efficiently. Many U.S. banks GAO interviewed expressed concerns about the added costs of compliance with multiple regulatory regimes and the impact of the Act on the global competitiveness of U.S. banks, but these concerns would need to be considered against the potential benefits of a safer and sounder financial system.
|
While PPACA gives CMS discretion in how to implement the Innovation Center, such as the composition of its staff, the law also established certain requirements for the center. For example, PPACA requires that, in carrying out its duties described in the law, the Innovation Center consult with representatives of relevant federal agencies and clinical and analytical experts with expertise in medicine or health care management. It also requires that, of amounts appropriated to the center, the center make no less than $25 million available for model implementation each fiscal year starting in 2011. In addition, PPACA requires that the Innovation Center evaluate each model to measure its effects on spending and quality of care, and that these evaluations be made public. Further, PPACA requires the Innovation Center to modify or terminate a model any time after testing and evaluation has begun unless it determines that the model either improves quality of care without increasing spending levels, reduces spending without reducing quality, or both. In addition to these requirements, when selecting models, PPACA requires the Innovation Center to determine that a model addresses a situation in which deficits in care were leading to poor clinical outcomes or unnecessary spending. The law also describes types of models that the Innovation Center could consider in selecting models to test; however, the center is not limited to this list. Examples of model types include changing the way primary care providers are reimbursed for services and improving care for patients recently discharged from the hospital. PPACA also directs that in selecting models, the Innovation Center give preference to those that improve the coordination, quality, and efficiency of health care services and lists additional factors for consideration, such as whether the model uses certain technology to help achieve its goals.
Finally, PPACA also makes certain requirements not applicable to models tested under the provision establishing the Innovation Center that were applicable to demonstrations CMS has frequently conducted in the past. For example, while prior demonstrations generally required legislation in order to be expanded, PPACA allows CMS to expand Innovation Center models more broadly into Medicare or Medicaid—including on a nationwide basis—through the rulemaking process if the following conditions are met: (1) the agency determines that the expansion is expected to reduce spending without reducing the quality of care or improve quality without increasing spending, (2) CMS's Office of the Actuary certifies that the expansion will reduce or not increase net spending, and (3) the agency determines that the expansion would not deny or limit coverage or benefits for beneficiaries. In addition, PPACA makes inapplicable certain requirements that have previously been cited as administrative barriers to the timely completion of demonstrations. Specifically, PPACA provides the following:

- HHS cannot require that an Innovation Center model be budget neutral—that is, designed so that estimated federal expenditures under the model are expected to be no more than they would have been without the model—prior to approving a model for testing.

- Certain CMS actions in testing and expanding Innovation Center models cannot be subject to administrative or judicial review. For example, the selection of models for testing or expansion is not subject to review by the agency or the courts.

- The Paperwork Reduction Act does not apply to Innovation Center models. Under the Paperwork Reduction Act, agencies generally are required to submit all proposed information collections to the Office of Management and Budget (OMB) for approval and provide a 60-day period for public comment on collections, among other things, when they want to collect data on 10 or more individuals.
From the time it became operational in November 2010, through March 31, 2012, the Innovation Center’s activities and use of funding focused on implementing 17 new models to test different approaches to health care delivery and payment in Medicare and Medicaid. During this period, the Innovation Center hired and organized staff into groups to implement models and to provide for the key functions that support model implementation. From the time it became operational in November 2010, through March 31, 2012, the Innovation Center announced the implementation of 17 new models designed to test different approaches to health care delivery and payment in Medicare and Medicaid. These models generally fall into three different types on the basis of the delivery and payment approaches tested. The center’s “Patient care” models test approaches that are designed around improving care for clinical groups of patients such as patients needing heart bypass surgery. “Seamless care” models test approaches designed to improve coordination of care for a patient population across care settings, such as the coordination of inpatient and outpatient care for all of a provider’s Medicare beneficiaries. “Preventive care” models test approaches designed to improve health, such as incentive programs to prevent smoking. The 17 models vary by the program and beneficiaries targeted. For example, some target Medicare or Medicaid beneficiaries specifically, whereas others are open to beneficiaries of either program. In addition, three models have been designed to target individuals who are covered by both Medicare and Medicaid. The models also vary in terms of the types of participants involved, ranging, for example, from physician group practices to Federally Qualified Health Centers, to health plans, to state Medicaid programs. 
Of these 17 models, 11 were selected by the Innovation Center under the PPACA provision that established the center and, as a result, certain requirements that have applied to demonstrations CMS has frequently conducted in the past are not applicable to these models. The Innovation Center selected the 11 models for implementation by reviewing model types identified in PPACA and ideas submitted by CMS staff as well as through a variety of mechanisms designed to obtain ideas from beneficiaries, providers, payers, state policymakers, and others. These mechanisms included the Innovation Center's online web program and "listening session" meetings held across the country in 2010. Selection criteria—which are available to the public on the Innovation Center's website—include focusing on health conditions that offer the greatest opportunity to improve care and reduce costs, and meeting the needs of the most vulnerable populations. The remaining six new models the Innovation Center is implementing were specifically required by other PPACA provisions. For example, the center is implementing a model required by PPACA that tests whether partnerships between hospitals and community-based organizations can improve transition care services for Medicare beneficiaries. The degree of flexibility that the Innovation Center has in implementing these six models varies by each model's specific statutory authority. The Innovation Center projects that the total funding required to test and evaluate these 17 models will be $3.7 billion over their lifetime, including $2.7 billion for the 11 models selected by the Innovation Center and $1.0 billion for the 6 models specifically required by other provisions of PPACA. The expected funding for individual models ranges from $30 million to $931 million, depending on model scope and design. Officials said that the period required to test and evaluate an individual model typically ranges from 3 to 5 years.
With regard to the Innovation Center’s annual expenditures, as of March 31, 2012, the Innovation Center forecast that most of its fiscal year 2012 budget—or 76.8 percent—would be spent implementing the 11 models that were selected for implementation by the Innovation Center. Table 1 provides funding information on the 17 Innovation Center models, including total funding for models over their lifetime, by model type. Appendix I provides additional information about individual models. As of August 1, 2012, the Innovation Center was still relatively early in the process of implementing the 17 models. CMS officials explained that this process includes a series of steps to develop and prepare the model for testing followed by a testing and evaluation period that is typically 3 to 5 years in which, among other things, participants and CMS put specified changes to health care delivery or payment into effect. (See sidebar.) While the Innovation Center had started testing 12 of the 17 models as of August 1, 2012, nearly all of these tests had started within the prior 12 months, and 5 had started within the prior 6 months. Thus, the models still have a significant portion of their testing and evaluation period remaining. In addition, for the 5 models that had not yet started testing, the Innovation Center was still completing the steps necessary to start testing. Appendix II provides additional information about the general process used to implement models. In addition to the 17 models, the Innovation Center also assumed responsibility for 20 demonstrations that were initiated prior to the Innovation Center’s formation. Responsibility for the demonstrations was moved to the Innovation Center in March 2011, when the demonstration and research and evaluation groups of CMS’s former Office for Research, Development and Information (ORDI) were brought into the Innovation Center through reorganization. 
As of August 1, 2012, testing of 9 of these 20 demonstrations had ended, although evaluation activities were still ongoing for 4 of them. The demonstrations were initiated under the Medicare Health Care Quality Demonstration Program, which enables CMS to select which demonstrations to conduct, or because they were specifically required by various pre-PPACA statutes. Like the Innovation Center's models, the demonstrations test a range of delivery and payment approaches; for example, one demonstration tests the use of care management—a particular approach to coordinating and managing health services—for high-cost Medicare beneficiaries while another tests approaches for preventing and treating cancer among minorities in Medicare. As of March 31, 2012, the Innovation Center's 184 staff were organized into nine groups and the Office of the Director. Four of the nine groups are generally responsible for coordinating the implementation of models. Three of these four groups—the Patient Care Models, Seamless Care Models, and Preventive Care Models Groups—focus on models selected by the Innovation Center under the PPACA provision that established the center. The Medicare Demonstrations Group is generally responsible for implementing models specifically required by other PPACA provisions as well as the CMS demonstrations that existed prior to the establishment of the Innovation Center. Staff in these four groups coordinate planning, develop model designs, and obtain approval for their models from CMS and HHS. Once a model is approved, staff in these groups coordinate the remaining implementation steps, including soliciting and selecting participants and overseeing the model during the testing and evaluation period. The remaining five groups have primary responsibility for key functions that support model implementation.
The Policy and Programs Group reviews ideas submitted for consideration as possible models and seeks to ensure a balanced portfolio of different types of models. The Rapid Cycle Evaluation Group is responsible for evaluation of models, including collecting data on and providing feedback to model participants about their performance. The Learning and Diffusion Group facilitates learning within models and disseminates the lessons learned across models so that participants can benefit from the experiences of other models. The Stakeholder Engagement Group conducts outreach to potential stakeholders to gain support and solicit ideas for innovative models, as well as outreach to potential participants—such as physician groups and hospitals—to inform them of the opportunity to participate in models. The Business Services Group coordinates with other CMS centers and offices to provide administrative and business support to the Innovation Center in areas such as budgeting, contracting, and project management. CMS officials explained that the 184 staff hired between the time the Innovation Center became operational in November 2010, and March 31, 2012, were distributed across the Office of the Director and the nine groups in part because of an initial need for expertise with certain model types and certain key functions. For example, because most of the models that the Innovation Center selected for implementation were Patient Care and Seamless Care Models, more staff were hired in those groups than in the Preventive Care Models Group. Similarly, the Rapid Cycle Evaluation Group and the Business Services Group were among the largest groups by staff size because of (1) the Innovation Center’s need for evaluation expertise when selecting which models to test as well as its responsibility for evaluating existing demonstrations and (2) the need for staff to carry out key administrative activities right away, including contract solicitation, budget development, and hiring.
Because the Innovation Center assumed responsibility for prior CMS demonstrations, staff from ORDI, which was responsible for implementing the demonstrations, were reassigned to the Innovation Center to form the Medicare Demonstrations Group and part of the Rapid Cycle Evaluation Group. Table 2 provides information on the staff size for each group in the Innovation Center as of March 31, 2012. CMS officials explained that initial hiring of staff also reflected other needs, such as the need for rapid recruitment, the need to balance staff with expertise in CMS policies and procedures against staff with private-sector experience, and the need for leadership to guide the development of the new center’s activities. Rapid recruitment: Approximately 40 percent of the staff working in the Innovation Center as of March 31, 2012, were brought on board within the first 5 months of the center becoming operational in November 2010. In order to help the center get started quickly, CMS gave the Innovation Center authority to hire staff directly until March 31, 2011, after which it followed standard hiring procedures. Of the 184 staff in the Innovation Center as of March 31, 2012, 64 had been hired through the center’s direct-hire authority. Balancing the need for CMS expertise with expertise in the private sector: CMS officials said the Innovation Center sought a balance between staff with expertise in CMS policies and procedures and staff recruited from outside the agency, particularly from the private sector. Of the staff on board as of March 31, 2012, about 54 percent were reassignments from within CMS, while about 46 percent were new hires from outside the agency, and officials explained that most of these were from the private sector. Leadership: During its first year, CMS officials said the center sought to build its leadership.
When compared with data for CMS as a whole for 2011, the distribution of the center’s staff as of March 31, 2012, shows a higher percentage of Innovation Center staff at the General Schedule (GS)-15 employment level, which is one of the higher management levels. Specifically, 23.4 percent of the Innovation Center’s staff were in the GS-15 level, compared with 11.5 percent for CMS as a whole. At the same time, the proportion of staff at other upper levels, including the Senior Executive Service level, in the Innovation Center was similar to that of CMS as a whole. Table 3 provides information about Innovation Center staff by employment level as of March 31, 2012. CMS officials said that the Innovation Center plans to hire additional staff with an emphasis on hiring into the three groups—the Seamless Care, Patient Care, and Preventive Care Models groups—that focus on models selected by the Innovation Center. Officials told us that the center’s goal is to have a total of 338 staff and noted that, compared to initial hiring, which focused on staff at leadership levels, future hiring will emphasize lower GS levels. The Innovation Center’s plans for evaluating its models include identifying measures related to the cost and quality of care and hiring contractors to evaluate the models. The Innovation Center’s plans for evaluating its own performance include aggregating data on cost and quality measures to determine the overall impact of the center and monitoring its progress implementing models. As part of its evaluation of individual models, the Innovation Center plans to identify measures related to the cost and quality of care. CMS officials said that, as of August 1, 2012, the Innovation Center had developed preliminary evaluation plans for each of the 17 models being implemented. In these plans, the center has identified preliminary cost and quality measures to be used to evaluate the 17 models. 
According to CMS officials, in identifying the preliminary measures, they generally selected cost and quality measures that were well accepted in the health care industry, including those developed or endorsed by national organizations, such as the National Quality Forum and the Agency for Healthcare Research and Quality. Officials said that they also identified measures for which data sources were readily available, such as claims data and standard patient surveys conducted by providers. The preliminary cost and quality measures the Innovation Center identified vary for different models. For example, preliminary cost measures include the average total cost of care per Medicare beneficiary per year and the cost per hospitalization and related outpatient care and subsequent hospitalizations for certain types of conditions. In the case of quality, preliminary measures identified by the Innovation Center vary by the type of care involved, such as the percentage of patients whose blood pressure exceeds a certain level (primary care); newborn birth weight (prenatal care); and the number of adverse events, such as hospital-acquired infections (hospital care). See table 4 for examples of preliminary measures identified by the Innovation Center and intended for use for different types of care. Preliminary measures the Innovation Center identifies will be finalized with contractors responsible for evaluating models on behalf of CMS. According to CMS officials, the Innovation Center plans on hiring contractors to evaluate its models. The Innovation Center uses its preliminary evaluation plans as the basis for developing solicitations for and selecting contractors, who will be asked to propose specific evaluation approaches. Officials said that after contracts are awarded, the Innovation Center goes through a “design phase” with the contractor where they reach agreement on the final evaluation plan, including the measures of cost and quality of care that will be used.
As of August 1, 2012, the Innovation Center had contracted with evaluators for 10 of the 17 models and had finalized measures for 2 models. CMS officials anticipated awarding contracts for 6 of the remaining models by the end of fiscal year 2012 and for the other remaining model—the Strong Start for Mothers and Newborns model—by March 2013. Officials told us that comparison groups will be matched to model participants along a variety of measurable dimensions, such as provider and market-specific characteristics, and that particular care will be taken to identify the impact of each reform in the context of other models or interventions. Officials also told us that in certain cases, it may not be possible to develop comparison groups for models. In these cases, the center will compare cost and quality outcomes for model participants before and after the start of the model. At the end of the testing and evaluation period, the results of these assessments may show, for example, that a model has reduced costs and should be expanded, or that it has increased costs and should be discontinued. Alternatively, there may also be cases where the results at the end of the testing and evaluation period show that a model saves money but not at the threshold of statistical significance set by the Innovation Center. CMS officials told us that impact assessments will be ongoing, but will not begin until a model has been under way for the amount of time expected for the change in health care delivery or payment to start producing results. Officials said that they received data for their first impact assessment on August 31, 2012, although they emphasized that early impact assessments may not show clear results. As a complement to assessing the impact of models on the cost and quality of care, evaluation contractors will be asked to conduct site visits and interviews to obtain qualitative information about the different strategies participants may use to deliver care under each model.
For example, for models that seek to incentivize better coordination of care, participants may implement different strategies to support care coordination, such as increasing staffing or investing in technology. Contractors will analyze whether different strategies are associated with particular cost and quality outcomes. Innovation Center officials told us that information collected by contractors will also be shared on a regular basis with model participants. The purpose of what the center refers to as “rapid cycle” feedback is to provide timely information so that participants can make improvements during the testing period of the model. For example, CMS officials explained that under the Federally Qualified Health Center Advanced Primary Care Practice model, participating health centers will be provided with feedback reports on a quarterly basis. According to officials, these reports will describe how each participant is performing relative to others with respect to the model’s measures. The reports, officials say, will also include information on differences among participants in how they are delivering care under the model in order to encourage the adoption of more-successful strategies. Officials told us that rapid cycle feedback will generally begin within the first year after testing of a model has started. As of August 1, 2012, the Innovation Center had started rapid cycle feedback for 1 of the 17 models—the Partnership for Patients model. The Innovation Center’s plans for evaluating its own performance include aggregating data on cost and quality measures to determine the overall impact of the center. To do this, the Innovation Center will use a set of core measures. The center has identified about 70 core measures, including some of the preliminary cost and quality measures related to the 17 models it was implementing as of March 31, 2012. Because not all core measures will apply to all models, data will be aggregated for groups of models. 
To conduct this aggregation, the Innovation Center will use statistical techniques, such as meta-analysis. Aggregation will not occur until individual models have been evaluated, but officials said that the Innovation Center has started asking evaluation contractors to consider using the 70 measures when possible. The Innovation Center’s plans for evaluating its performance also include monitoring its progress in implementing models. The Innovation Center has established a project management approach for its models that includes standard milestones—such as “completion of OMB clearance” and “issuance of participant solicitation and application”—that it uses to track the progress of models against target deadlines. In addition, certain data are monitored for each model against specified targets, such as the number of applications submitted and the number of participants selected. Individual milestones and data are summarized across all of the Innovation Center models every 2 weeks. The intended purpose is to allow the center’s management to monitor progress across models and to identify and promptly address potential delays. According to CMS officials, the Innovation Center was monitoring the progress of each of the 17 models it was implementing as of March 31, 2012. Finally, in order to help evaluate its performance, in June 2012, the Innovation Center contracted with a firm to review the Innovation Center’s internal operations and how the center operates within the context of CMS’s programs overall. The statement of work for this contract identified a number of objectives, including recommending ways to improve the center’s organizational structure, revising the center’s management policies and procedures, and identifying additional ways to evaluate the Innovation Center’s performance on an ongoing basis. 
To support these objectives, the contract requires the firm to, for example, identify best practices for expanding innovative models of care into ongoing programs such as Medicare and Medicaid. The contract also requires the firm to identify policies and procedures that are missing within the Innovation Center that would improve its performance. The evaluation under this contract is expected to be completed in November 2012. In our review of models the Innovation Center was implementing as of March 31, 2012, we identified three key examples of overlap with efforts of other CMS offices. While the center uses a number of mechanisms to coordinate with other CMS offices, it is still working on ways to make coordination more systematic. We identified three key examples of Innovation Center models being implemented as of March 31, 2012, that overlap with efforts of other CMS offices, meaning that the efforts share similar goals, engage in similar activities or strategies to achieve these goals, or target similar populations. However, these overlapping efforts also have differences, and CMS officials said they are intended to be complementary to each other. The three key examples we identified are the following: The Innovation Center’s Two Accountable Care Organization (ACO) Models and the Center for Medicare’s Shared Savings Program. The Innovation Center is implementing two models—the Pioneer ACO model and the Advance Payment ACO model—that share similar goals with those of the Shared Savings Program, which is required by PPACA and administered nationally by CMS through its Center for Medicare. All three efforts aim to encourage Medicare providers that participate in ACOs to improve the quality of care among the patients they serve, while at the same time reducing Medicare expenditures. 
In order to achieve these goals, the efforts provide financial incentives for ACOs that meet specified quality of care and cost savings thresholds by allowing them to share in a certain amount of the savings they achieve for the Medicare program. However, the Innovation Center’s models and the Shared Savings Program each adopt a different approach to sharing any realized savings. Further, while the Shared Savings Program is open to all eligible ACOs, the models target specific subgroups of ACOs. According to CMS officials, the Innovation Center’s ACO models are intended to be complementary to the Shared Savings Program, because they allow CMS to test alternative approaches to the national effort. If these alternative approaches are proven effective, officials explained, they could be incorporated into the Shared Savings Program. The Innovation Center’s Medicaid Models and CMCS’s State Medicaid Demonstrations. As of March 31, 2012, the Innovation Center was implementing nine models that share the same broad goal as the state Medicaid section 1115 demonstrations overseen by CMCS—testing new ways of delivering and paying for health care in Medicaid. Despite this similarity, the Innovation Center’s models can test delivery and payment approaches across geographic areas and with different types of participants, including directly with providers, while Medicaid demonstrations under CMCS are agreements between CMS and state Medicaid agencies to test approaches within a particular state. According to CMS officials, the Medicaid models and demonstrations are intended to be complementary: the models allow CMS to test the effectiveness of approaches it selects, while the demonstrations are initiated by states on the basis of their own priorities and needs. 
Further, officials said that while evaluations of Innovation Center models may be able to more-rigorously test effectiveness, state Medicaid demonstrations allow for a larger number of tests to be conducted—according to CMS, there were approximately 70 active section 1115 demonstrations as of August 2012—and can point to promising approaches that should be considered for further testing. The Innovation Center’s Partnership for Patients Model and CCSQ’s Quality Improvement Organization (QIO) Program. The goals of the Innovation Center’s Partnership for Patients model—namely reducing the rate of preventable hospital-acquired conditions and 30-day hospital readmissions—are also currently among the many goals of CCSQ’s QIO program. In order to achieve these goals, both the Partnership for Patients model and the QIO program contract with organizations—Hospital Engagement Networks (HEN) and QIOs, respectively—to disseminate successful patient safety interventions in hospitals through training and technical assistance. While the two efforts are very similar in this respect, compared to QIOs, the activities of HENs target more hospital-acquired conditions and focus on a broader population that includes non-Medicare patients. CMS officials also told us that the work of HENs and QIOs is intended to be complementary and that HENs reinforce and expand on work already being done by QIOs in order to reduce hospital-acquired conditions and 30-day hospital readmissions at a faster rate. While QIOs may have established relationships with certain hospitals in their states, as of September 2012, CMS officials said that HENs had engaged a much wider network of hospitals in patient safety interventions when compared with QIOs—about 4,000 versus just over 800, respectively. Officials said that one reason for this is that HENs focus exclusively on hospitals whereas QIOs are responsible for implementing improvement projects across all settings of care.
Additionally, officials said that because hospital system organizations serve as HENs, they can leverage their member hospitals to encourage these hospitals to adopt patient safety interventions. Over the period of our review, we identified a number of mechanisms the Innovation Center uses to coordinate its work in order to avoid unnecessary duplication in models that overlap with efforts of other CMS offices. In using these mechanisms, the center has engaged in key practices that we identified in prior work as helping enhance and sustain collaboration, such as leveraging resources, establishing compatible policies and procedures, and developing ways to report on results across offices. The mechanisms the Innovation Center uses are the following: Committees and boards. The Innovation Center uses a number of committees and boards to coordinate with other offices. For example, CMS officials told us that in deciding whether to select a model for testing, the Innovation Center’s Portfolio Management Committee considers other efforts within CMS—as well as more broadly across HHS—that may overlap with the model in order to avoid unnecessary duplication. Officials said that when overlap is identified, the decision to continue with the model is made on a case-by-case basis and involves a determination of whether the model is significantly different from existing efforts. Additionally, members of the Portfolio Management Committee are able to help identify staff in other offices that the Innovation Center might want to invite to work on a model in order to leverage existing agency expertise. In another example, CMS’s Enterprise Management Board brings together relevant offices across the agency, such as the Chief Operating Officer, the Office of Acquisition and Grants Management, and the Center for Medicare, early in a model’s implementation to determine what needs to be done operationally. 
To avoid unnecessary duplication, the board considers whether there are existing CMS resources that could be leveraged for the model’s infrastructure needs or whether a resource being developed for an Innovation Center model could be shared with other CMS efforts. Model approval process. According to CMS officials, the process CMS uses to approve Innovation Center models for implementation also allows the center to coordinate with other CMS offices. Officials explained that as part of this process, all CMS offices must have the opportunity to review and comment on the ICIP—a document that contains key information on a proposed model, such as design parameters and cost estimates—before the model is approved by the CMS administrator. Officials said that under CMS policy, the Innovation Center must address these comments. The ICIP contains sections that specifically address issues related to overlap, such as a section on “Synergy with Existing or Planned Initiatives” and a section on “Uniqueness/Innovation.” CMS officials said that, as a result, when the ICIP is circulated, if the Innovation Center did not sufficiently coordinate with other CMS centers or offices during the initial selection of a model, these offices would have the opportunity to raise any concerns related to unnecessary duplication. After a model is approved by CMS, HHS and OMB also review and approve the ICIP. Multi-office meetings at the staff, director, and agency level. First, CMS officials said that staff from the Innovation Center meet with staff from other offices to work on efforts that overlap. For example, during planning for its ACO models, the Innovation Center met with the Center for Medicare to establish compatible policies and procedures with the Shared Savings Program, such as developing common scripts for 1-800-MEDICARE call centers and rules for elevating beneficiary or provider questions to these centers for additional review. 
Additionally, in March 2012, the Innovation Center started meeting with CCSQ every week to discuss coordination between HENs and QIOs in order to prevent unnecessary duplication of effort. Second, CMS officials told us that there is regular coordination between the director of the Innovation Center and certain other CMS centers and offices, through meetings that happen on a weekly, biweekly, or monthly basis. Officials said that, among other things, these meetings are intended to share the results of ongoing efforts and address such issues as making sure policies are compatible across similar efforts. Officials also told us that all CMS offices have weekly issues meetings with the CMS Administrator that other offices involved in an issue being discussed are encouraged to attend. Officials told us that if staff from other CMS offices thought an issue related to overlapping efforts had not been adequately addressed through other coordination mechanisms, these meetings serve as an opportunity for them to raise it. Liaisons. Officials told us that staff members in other CMS offices serve as liaisons to the Innovation Center, though they are not formally designated as such. Officials said that these staff members primarily serve as a central point of contact so that there is a systematic way to keep track of coordination across offices. For example, CMCS has a staff member serving as a liaison to the Innovation Center who, among other things, ensures that the Innovation Center’s models employ policies and procedures that are compatible with Medicaid program rules. Targeted reviews. CMS officials said that as part of selecting participants for the Innovation Center’s Medicaid models, the Innovation Center works with CMCS, CMS regional offices, and OMB to ensure that the models do not duplicate funding for states that are already being funded to engage in the same activity through a CMCS demonstration. 
For example, the application for the Strong Start for Mothers and Newborns model—which tests, among other things, the effectiveness of three different approaches to providing enhanced prenatal care to Medicaid beneficiaries—specified that states that were already paying for enhanced prenatal services were not allowed to participate in the model. While the Innovation Center uses these mechanisms, it is also still working on ways to make its coordination with other offices more systematic. Specifically, CMS officials said that while some of the Innovation Center’s coordination mechanisms are formalized through documented policies and procedures, the center is considering the extent to which additional policies and procedures are needed. For example, officials said that while the Enterprise Management Board, which is responsible for addressing how models are coordinated with other CMS efforts operationally, is formally established through a written charter, they have considered whether a similar group that deals with coordination at the policy level needs a more formal structure in place. In another example, the Innovation Center has directed the outside firm that began an evaluation of Innovation Center operations in June 2012 to consider, as part of its statement of work, whether there are any gaps in current center policies and procedures—including those related to coordination with other offices—and to propose solutions to those gaps. The Innovation Center is also currently developing a process to ensure that CMS does not pay for the same service under both HEN and QIO contracts. Officials said that CMS recognizes there are areas of overlap between HENs and QIOs and that they made an explicit decision to include overlapping activities in HEN and QIO statements of work, because, among other things, the nature of trying to reduce hospital-acquired conditions and readmissions requires multiple entities working from different perspectives in a reinforcing manner.
Although the HEN and QIO contractors were originally told to work out areas of overlap locally, largely because of questions asked during our review, officials recognized the need for a more-formal process to ensure coordination was working in practice. CMS officials said that a review of the 26 HEN contracts is under way to identify if any unnecessary duplication of effort has occurred—that is, whether HENs and QIOs are conducting the same activities in the same hospital. Officials noted that the review process has evolved and may continue to evolve over time, in part because of the size of the review—which includes reviewing HENs’ activities in approximately 4,000 hospitals—and in part because the Innovation Center has not conducted this type of review previously. CMS officials said that they will take steps, including potentially modifying HEN or QIO contract language, to eliminate any unnecessary duplication of effort that the review identifies and to document how this duplication was addressed. Finally, officials noted that CMS is in the process of developing a centralized database, which may also help the Innovation Center make its coordination more systematic. Among other things, officials said that the database is intended to help prevent duplicative payments to providers that participate in CMS efforts involving incentive payments for meeting specified quality of care and cost savings thresholds, such as the Innovation Center’s ACO models and the Center for Medicare’s Shared Savings Program. Specifically, officials said that the database is intended to track which beneficiaries are participating in different efforts across CMS to help ensure that beneficiaries are not counted twice for the purposes of calculating incentive payments. While officials reported that the database initially became operational in June 2012, they also said that they are currently working on significant system upgrades that are expected in September 2012. 
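The kind of double-counting check the centralized database is intended to support can be illustrated with a minimal sketch. This is a hypothetical illustration only: the record layout, effort names, and flagging logic are assumptions for exposition, not a description of CMS's actual system.

```python
from collections import defaultdict

# Hypothetical participation records: (beneficiary_id, effort).
# Each effort stands in for a CMS initiative that pays incentives
# for meeting quality-of-care and cost-savings thresholds.
records = [
    ("B001", "Pioneer ACO Model"),
    ("B002", "Shared Savings Program"),
    ("B001", "Shared Savings Program"),  # same beneficiary, second effort
    ("B003", "Advance Payment ACO Model"),
]

def find_double_counted(records):
    """Return beneficiaries attributed to more than one effort, so that
    incentive-payment calculations can exclude or reconcile them."""
    efforts_by_beneficiary = defaultdict(set)
    for beneficiary_id, effort in records:
        efforts_by_beneficiary[beneficiary_id].add(effort)
    return {
        b: sorted(efforts)
        for b, efforts in efforts_by_beneficiary.items()
        if len(efforts) > 1
    }

flagged = find_double_counted(records)
# flagged == {"B001": ["Pioneer ACO Model", "Shared Savings Program"]}
```

In this sketch, only beneficiary B001 is flagged, because that beneficiary appears in two separate shared-savings efforts; a real reconciliation process would then decide which effort's incentive calculation counts that beneficiary.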
The Innovation Center became operational in November 2010 and is still in the early stages of implementing its first models, with much work—particularly evaluation activities—to be done in coming years. As of March 31, 2012, the Innovation Center had announced 17 models, covering a variety of topics, to test new approaches in health care delivery and payment. In addition, the Innovation Center has developed preliminary evaluation plans for each of the 17 models, although at the time of our review, most still needed to be finalized, and it may take as long as 3 to 5 years until the evaluations begin to produce results. With spending on health care in the United States continuing to increase, and an appropriation of $10 billion for each 10-year period, it is important that the Innovation Center continue the testing of its models and conduct evaluations as planned in order for CMS to determine the extent to which the new approaches are able to reduce costs and improve quality of care. At the time of our review, we identified three key examples of Innovation Center models that overlap with efforts being conducted by other offices within CMS. As the Innovation Center and other CMS offices work in similar areas—namely paying for and delivering health care to Medicare and Medicaid beneficiaries—there likely will be additional efforts that overlap as the center continues to build its portfolio of models and initiatives. We encourage these efforts to the extent that they are complementary, well coordinated, and do not result in unnecessary duplication. However, our review also suggests that while the Innovation Center has taken steps to coordinate with other offices, it still has work to do in making this coordination more systematic.
For example, the Innovation Center is considering whether additional policies and procedures are needed to coordinate its efforts with other offices, and it will be important for the center to continue to determine the extent to which this is necessary, particularly as it considers the results of the evaluation by an outside firm. In addition, the Innovation Center is still implementing a process to ensure that CMS does not make payments for duplicative services under HEN contracts in its Partnership for Patients model—one of its first and most expensive models to date—and QIO contracts. Given the significance of the Innovation Center’s work, and the amount of money involved in its operation, having appropriate and well-documented coordination mechanisms in place will be an important step going forward to help ensure that resources are used most efficiently and any overlapping efforts do not become unnecessarily duplicative. In order to ensure the efficient use of federal resources, we recommend that the Administrator of CMS direct the Innovation Center to expeditiously complete implementation of its process to review and eliminate any areas of unnecessary duplication in the services being provided by HENs and QIOs in hospitals. We provided a draft of this report to HHS for review and comment. In its written comments, reproduced in appendix III, HHS agreed with our recommendation and provided general comments. In addition, on October 26, 2012, the Innovation Center’s Deputy Director for Operations provided oral technical comments that were incorporated, as appropriate. In its written comments, HHS stated that it concurred with our recommendation to expeditiously complete implementation of its process to review and eliminate any areas of unnecessary duplication in the services being provided by HENs and QIOs.
HHS described the steps underway to identify and eliminate any duplication of effort, including (1) having Contracting Officer Representatives assess whether there are areas of duplication that require further review and recommend appropriate actions for each contract and (2) if appropriate, putting in place acceptable mitigation strategies, issuing technical direction, or modifying the appropriate contract to eliminate the duplication of effort. HHS stated that it anticipates completing these steps by December 31, 2012, and has monitoring plans in place to assess future changes in the work plans of QIOs and HENs to avoid future duplication. In its written comments, HHS also stated that only one of the three key examples of overlap cited in the report—the HEN and QIO example—poses a risk of duplicative effort. We agree, and the recommendation we make focuses on this example. The other two key examples we described in our report are overlapping in that they share similar goals, engage in similar activities or strategies to achieve these goals, or target similar populations. We noted that these efforts have important differences and that CMS officials said the efforts were intended to be complementary to each other. Because the Innovation Center and other CMS offices work in similar areas—namely, paying for and delivering health care to Medicare and Medicaid beneficiaries—we observed that there will likely be efforts that overlap. As we reported, we encourage these efforts to the extent that they are complementary, well coordinated, and do not result in unnecessary duplication. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

The 17 models announced by the Innovation Center are described below. Where available, the type of participants and the estimated number and type of beneficiaries affected are noted; this information, as well as total funding (in millions of dollars) over the lifetime of each model, was not available for every model at the time of our review.

State Demonstrations to Integrate Care for Medicare-Medicaid Beneficiaries—Supports state Medicaid programs in designing new approaches to service delivery and financing in order to integrate care for Medicare-Medicaid beneficiaries. This program will enable states to participate in the Financial Alignment Initiative (see below), which will enroll beneficiaries in 2013.

Incentives for Prevention of Chronic Diseases in Medicaid—Tests the impact of providing incentives to Medicaid beneficiaries to participate in prevention programs such as those that address tobacco cessation, controlling or reducing weight, lowering cholesterol, lowering blood pressure, and avoiding the onset of diabetes.

Federally Qualified Health Center Advanced Primary Care Practice—Tests the effect of the advanced primary care practice model—commonly referred to as the patient-centered medical home—in improving care, promoting health, and reducing the cost of care provided to Medicare beneficiaries by Federally Qualified Health Centers. Federally Qualified Health Centers are health centers that have received a “Federally Qualified Health Center” designation from the Centers for Medicare & Medicaid Services (CMS) and provide comprehensive community-based primary and preventive care services in medically underserved areas or to medically underserved populations.

Partnership for Patients: Community Based Care Transitions—Tests approaches to reduce unnecessary hospital readmissions by improving the transition of Medicare beneficiaries from the inpatient hospital setting to home or other care settings. Participants: hospitals with high readmission rates that partner with community-based organizations that provide care transition services.

Partnership for Patients: Hospital Engagement Networks and Other Strategies—Tests the effectiveness of multiple strategies to reduce preventable hospital-acquired conditions—conditions that a patient acquires while an inpatient in the hospital, such as catheter-associated urinary tract infections or injuries from falls—and 30-day hospital readmissions. One example of a strategy used by the Partnership for Patients is contracting with Hospital Engagement Networks—which are state, regional, and national hospital system organizations—to disseminate successful patient safety interventions in hospitals through training and technical assistance.

Pioneer Accountable Care Organization (ACO) Model—Tests the effectiveness of allowing experienced ACOs to take on financial risk in improving quality and lowering costs for all of their Medicare patients. An ACO refers to a group of providers and suppliers of services, such as hospitals and physicians, that work together to coordinate care for the patients they serve. Participants: ACOs with at least 15,000 Medicare fee-for-service beneficiaries (or at least 5,000 Medicare beneficiaries in the case of rural areas).

Treatment of Certain Complex Diagnostic Laboratory Tests—Tests the effect of making separate payments for certain complex diagnostic laboratory tests on access to care, quality of care, health outcomes, and expenditures.

Strong Start for Mothers and Newborns—Tests two strategies to improve outcomes for newborns and pregnant women: (1) shared learning and diffusion activities to reduce the rate of early elective deliveries among pregnant women and (2) enhanced prenatal care to reduce preterm births (less than 37 weeks) in women covered by Medicaid. Each of these strategies addresses three different approaches to achieving these goals. Beneficiaries affected: all patients receiving related services (Strategy 1) and 90,000 Medicaid beneficiaries (Strategy 2). Duration: 2 years (Strategy 1) and 4 years (Strategy 2).

Advance Payment ACO Model—Tests the effect of prepayment of shared savings to support ACO infrastructure development and care coordination on quality and costs of care for Medicare beneficiaries. Participants: small physician-led or rural organizations participating in the Medicare Shared Savings Program.

Independence at Home Demonstration—Tests the effectiveness of delivering an expanded scope of primary care services in a home setting on improving care for Medicare beneficiaries with multiple chronic conditions. Participants: physician practices with at least 200 high-need beneficiaries.

Health Care Innovation Awards—Tests a variety of innovative approaches to paying for and delivering care, with a focus on those that will train and deploy the health care workforce to support these innovations.

Medicaid Emergency Psychiatric Demonstration—Tests whether Medicaid can support higher quality care at lower cost by reimbursing private psychiatric hospitals for certain services for which Medicaid reimbursement has historically been unavailable.

Graduate Nurse Education Demonstration—Tests the effect of offsetting the costs of clinical training for Advanced Practice Registered Nurses (APRN) on the availability of graduate nursing students enrolled in APRN training programs. Participants: hospitals, schools of nursing, and non-hospital-based community-based care settings.

Comprehensive Primary Care Initiative—Tests the impact of enhanced primary care services, including care coordination, prevention, and 24-hour access for Medicare and Medicaid beneficiaries. Beneficiaries affected: up to 315,000 Medicare and 16,000 Medicaid beneficiaries.

Initiative to Reduce Avoidable Hospitalizations Among Nursing Facility Residents—Tests partnerships between independent organizations and long-stay nursing facilities to enhance on-site services to reduce inpatient hospitalizations for Medicare-Medicaid beneficiaries. Participants: organizations that partner with states and nursing facilities to provide enhanced care coordination.

Bundled Payment for Care Improvement—Tests the effect of different payment approaches that link payments for multiple services received by patients during an episode of care, including hospitalization and posthospital services, on the coordination of patient care. Four different models of bundling will be tested, but information on models 2 through 4 was not available at the time of our review. Participants: providers such as hospitals, physician group practices, and health systems. Beneficiaries affected: 389,000 Medicare fee-for-service beneficiaries (Model 1).

Financial Alignment Initiative—Tests two approaches to integrating the service delivery and financing of the Medicare and Medicaid programs to better coordinate care for Medicare-Medicaid beneficiaries: a capitated approach, where a state, CMS, and a health plan enter into a three-way contract to provide comprehensive coordinated care; and a managed fee-for-service approach, where a state and CMS enter into an agreement under which the state would be eligible to benefit from savings resulting from its initiatives designed to improve quality and reduce costs. Beneficiaries affected: up to 2 million Medicare-Medicaid beneficiaries.
For this report we use the term “Medicaid” to include both Medicaid and the State Children’s Health Insurance Program. Section 4108 requires the award of grants to states to test approaches that may encourage behavior modification and determine scalable solutions by providing incentives to Medicaid beneficiaries. § 4108, 124 Stat. at 561-564 (codified at 42 U.S.C. § 1396a note). Section 4108 appropriated $100 million for a 5-year period beginning on January 1, 2011. The amount appropriated is to remain available until expended. Section 3026 requires the implementation of a model that tests whether partnerships between high-admission-rate hospitals and community-based service organizations can improve transition care services for high-risk Medicare beneficiaries. § 3026, 124 Stat. at 413-415 (codified at 42 U.S.C. § 1395b-1 note). Section 3026 requires the transfer of $500 million from Medicare trust funds for the period of fiscal years 2011 through 2015. The amount transferred is to remain available until expended. Section 3113 requires CMS to develop appropriate payment rates for the tests included in this demonstration. § 3113, 124 Stat. at 422-423 (codified at 42 U.S.C. § 1395l note). Section 3113 requires the transfer of $5 million from the Medicare Part B trust fund for administering the demonstration. The amount transferred is to remain available until expended. Payments under the demonstration are to be made from Medicare Part B funds and may not exceed $100 million. Section 3024 requires CMS to conduct a demonstration to test a payment and service-delivery model that utilizes physician- and nurse practitioner–directed home-based primary care teams for reducing expenditures and improving the health outcomes of certain Medicare beneficiaries. §§ 3024, 10308(b)(2), 124 Stat. at 404-408, 942 (codified at 42 U.S.C. § 1395cc-5). Section 3024 requires the transfer of $5 million from Medicare trust funds for each of fiscal years 2010 through 2015. 
The amounts transferred are to remain available until expended. Section 2707 requires CMS to select states to participate in the demonstration project on a competitive basis. § 2707, 124 Stat. at 326-328 (codified at 42 U.S.C. § 1396a note). Section 2707 appropriated $75 million for fiscal year 2011. The amount appropriated is to remain available through December 31, 2015. Section 5509 requires CMS to conduct a demonstration under which eligible hospitals receive payment for their reasonable costs for the provision of qualified clinical training to advanced practice nurses. § 5509, 124 Stat. at 674-676 (codified at 42 U.S.C. § 1395ww note). Section 5509 appropriated $50 million for each of fiscal years 2012 through 2015. The amount appropriated is to remain available until expended. The Center for Medicare and Medicaid Innovation (Innovation Center) solicits and receives ideas for different payment and care delivery approaches through “Listening Sessions” and through its web-based idea-submission tool. The Innovation Center reviews ideas that have been submitted and evaluates them with respect to their potential to meet its primary goals of better health care, better health, and reduced costs. It reviews ideas against “Portfolio Criteria” that were created to guide the Innovation Center in developing a portfolio of models that address a range of populations, issues, problems, and solutions. Examples of these criteria include: having the greatest potential impact on Medicare and Medicaid beneficiaries and improving how care is delivered nationally; focusing on health conditions that offer the greatest opportunity to improve care and reduce costs; and meeting the needs of the most vulnerable and addressing disparities in care. 
Develop an Innovation Center Investment Proposal (ICIP)

As part of this selection process, the Innovation Center reviews model types suggested in the Patient Protection and Affordable Care Act (PPACA) provision that established the center, and seeks input from across the Centers for Medicare & Medicaid Services (CMS), the Department of Health and Human Services (HHS), and other federal partners and from an array of external stakeholders. Once the Innovation Center identifies a payment and care delivery model that shows promise, it develops an ICIP, which typically includes a proposed design for the model, including the size and scope of testing, the population and programs involved, and duration; a summary of prior evidence and supporting research; a preliminary evaluation plan, including research questions, proposed measures related to cost and quality, and discussion of the model’s expected impact; and an implementation plan, including the application and selection process, an analysis of whether the model overlaps or complements other initiatives, and an analysis of the potential for expansion of the model. The Innovation Center prepares separate documents for approval that are related to funding requests and solicitations associated with the model. The Innovation Center seeks approval for the model. This includes separate approval processes for the ICIP, for model funding, and for any solicitations that would be issued to potential participants. The approval process includes a sequence of reviews within CMS, within HHS, and finally with the Office of Management and Budget (OMB). During these reviews, modifications may be made on the basis of input from individuals in other CMS centers and offices, in other related HHS programs, and from OMB. Once the ICIP is approved, the Innovation Center issues an announcement and other information about the model to the public. 
The Innovation Center issues information about how to apply for participation in the model, including information about which types of providers or organizations are eligible to participate, the process for submitting applications, and the selection process. The Innovation Center may also organize webinars or learning sessions open to the public and interested participants to share information and answer questions. Innovation Center models vary by the type of participant that is involved—for example, physician group practices, health plans, and state Medicaid programs. Models also vary in terms of the type of agreement that is established with participants, for example, whether it is a grant, a cooperative agreement, a contract, or a provider agreement. The selection process for participants is generally competitive. The criteria used in the selection process may vary by model. For example, selection criteria may include such factors as organizational capabilities and plans for ensuring quality of care. In other cases, eligible participants may be selected in order to achieve a mix and balance of certain characteristics for evaluation purposes, for example, geographic location (urban, rural) and whether the participant uses electronic health records. The Innovation Center solicits and hires contractors to evaluate the model. Applicants are asked to propose specific evaluation approaches to the preliminary evaluation plans that the Innovation Center has identified. Contractors are selected through a competitive process. Once a contractor is selected, it works with the Innovation Center to complete a design phase and reach agreement on the final evaluation plan for the model. The Innovation Center also engages contractors for other purposes that are part of implementation, such as data collection and provider recruitment. The changes that the model is testing—for example, changes to health care delivery or payment—are put into effect by CMS and by participants. 
The testing period for Innovation Center models is typically set for 3 to 5 years. However, evaluation monitoring may indicate that the model should be modified, terminated, or expanded before this period ends (see below). The Innovation Center may choose to shorten the test period for a model for such reasons.

Conduct evaluation of model to assess its impact on cost and quality

Data are collected for cost and quality measures. Using a variety of statistical techniques, these data are generally compared to data for a comparison group representing patients or providers that are not participating in the model to determine the model’s impact on cost and quality. When comparison groups are not possible, data for model participants are compared to “baseline” data that represent a period prior to the test period. Qualitative information on the different strategies participants may use to deliver care under each model is also collected and analyzed. During the testing period, information collected is shared on a regular basis with participants. The purpose of this “rapid cycle” feedback is to provide timely information so that participants can make improvements during the testing period. The Innovation Center plans to regularly review each model’s impact on the quality and cost of care to determine whether the payment or delivery approach is successful and should be recommended for expansion into the Medicare or Medicaid program. If the Innovation Center seeks to expand a program, the CMS Office of the Actuary must certify that the model would either (1) result in cost savings or (2) not result in any increase in costs if implemented on a broader scale within Medicare or Medicaid, or both. The Innovation Center’s criteria can be found at: http://www.innovations.cms.gov/about/our-portfolio-criteria/index.html (accessed Sept. 13, 2012). 
In addition to the contact named above, Kristi Peterson, Assistant Director; Krister Friday; Mary Giffin; Samantha Poppe; Rachel Svoboda; and Jennifer Whitworth made key contributions to this report.
PPACA created the Innovation Center within CMS. The purpose of the Innovation Center is to test new approaches to health care delivery and payment--known as models--for use in Medicare or Medicaid. GAO was asked to review the implementation of the Innovation Center. Specifically, GAO: (1) describes the center's activities, funding, organization, and staffing as of March 31, 2012; (2) describes the center's plans for evaluating its models and its own performance; and (3) examines whether efforts of the center overlap with those of other CMS offices and how the center coordinates with other offices. GAO analyzed budget and staffing data; reviewed available documentation, such as Innovation Center policies and procedures and functional statements for CMS offices; and interviewed officials from the Innovation Center and other CMS offices, such as the Center for Medicare. GAO assessed how the Innovation Center coordinates in the context of federal internal control standards and key practices for collaboration from prior GAO work. From the time it became operational in November 2010, through March 31, 2012, the Center for Medicare and Medicaid Innovation (Innovation Center) has focused on implementing 17 new models to test different approaches for delivering or paying for health care in Medicare and Medicaid. The center is still relatively early in the process of implementing these models. Eleven of the models were selected by the Innovation Center under the provision in the Patient Protection and Affordable Care Act (PPACA) that established the center, while the remaining 6 were specifically required by other PPACA provisions. The Innovation Center projects that a total of $3.7 billion will be required to fund testing and evaluation of the 17 models, with the expected funding for individual models ranging from $30 million to $931 million. 
As of March 2012, the center's 184 staff were organized into four groups responsible for coordinating the implementation of different models and another five groups responsible for key functions that support model implementation. Officials said that, among other things, the center's initial hiring of staff reflected the need for leadership and for specific types of expertise, such as individuals with a background in evaluation. The Innovation Center's plans for evaluating individual models include identifying measures related to the cost and quality of care. Officials from the Centers for Medicare & Medicaid Services (CMS) told GAO that the Innovation Center had developed preliminary evaluation plans for the 17 models being implemented that, among other things, identified proposed measures. According to CMS officials, these measures will be finalized by contractors responsible for evaluating, on behalf of CMS, each model's impact on cost and quality. As of August 1, 2012, the Innovation Center had contracted for the evaluation of 10 of the 17 models. The center's plans for evaluating its own performance include aggregating data across models by using a set of core measures it has developed. In addition, the Innovation Center has taken steps to monitor its progress in implementing the 17 models through biweekly reviews of standard milestones and related data, such as the number of applications to participate in a model the center has received. GAO identified three key examples of overlap between the 17 Innovation Center models and the efforts of other CMS offices, meaning that the efforts share similar goals, engage in similar activities or strategies to achieve these goals, or target similar populations. However, these overlapping efforts also have differences, and CMS officials said the efforts are intended to be complementary to each other. 
GAO also identified a number of mechanisms the Innovation Center uses to coordinate its work in order to avoid unnecessary duplication between its models and other efforts, such as multi-office meetings at the staff, director, and agency level. Further, through using these mechanisms, the Innovation Center has engaged in key practices for collaboration, including leveraging resources across offices. At the same time, the center is still working on ways to make its coordination more systematic. For example, largely because of questions raised during GAO's review, the Innovation Center initiated a process to ensure that CMS does not pay for the same service under the contracts in one of its models and those in another CMS office. However, officials told GAO that the center is still working on implementing this process and may need to take additional steps to eliminate any unnecessary duplication. GAO is recommending that the Administrator of CMS direct the Innovation Center to expeditiously complete its process to review and eliminate any areas of unnecessary duplication in contracts that have been awarded in one of its models. HHS agreed with this recommendation and described steps it is taking to address unnecessary duplication.
DOD is one of the largest and most complex organizations in the world. Overhauling its business operations will take many years to accomplish and represents a huge and possibly unprecedented management challenge. Execution of DOD’s operations spans a wide range of defense organizations, including the military departments and their respective major commands and functional activities, numerous large defense agencies and field activities, and various combatant and joint operational commands that are responsible for military operations in specific geographic regions or theaters of operation. To support DOD’s operations, the department performs an assortment of interrelated and interdependent business functions—using thousands of business systems—related to major business areas such as weapon systems management, supply chain management, procurement, health care management, and financial management. The ability of these systems to operate as intended affects the lives of our warfighters both on and off the battlefield. To address long-standing management problems, we began our high-risk series in 1990 to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. Historically, high-risk areas have been designated because of traditional vulnerabilities related to their greater susceptibility to fraud, waste, abuse, and mismanagement. As our high-risk program has evolved, we have increasingly used the high-risk designation to draw attention to areas associated with broad-based transformation needed to achieve greater economy, efficiency, effectiveness, accountability, and sustainability of selected key government programs and operations. DOD has continued to dominate the high-risk list, bearing responsibility, in whole or in part, for 15 of our 27 high-risk areas. Of the 15 high-risk areas, the 8 DOD-specific high-risk areas cut across all of DOD’s major business areas. 
Table 1 lists the 8 DOD-specific high-risk areas and the year in which each area was designated as high risk. In addition, DOD shares responsibility for 7 governmentwide high-risk areas. We designated DOD’s approach to business transformation as high risk in 2005 because (1) DOD’s improvement efforts were fragmented, (2) DOD lacked an enterprisewide and integrated business transformation plan, and (3) DOD had not appointed a senior official at the right level with an adequate amount of time and appropriate authority to be responsible for overall business transformation efforts. Collectively, these high-risk areas relate to DOD’s major business operations, which directly support the warfighter, including how servicemembers get paid, the benefits provided to their families, and the availability of and condition of the equipment they use both on and off the battlefield. DOD’s pervasive business systems and related financial management deficiencies adversely affect its ability to assess resource requirements; control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent and detect fraud, waste, and abuse; and address pressing management issues. Over the years, DOD initiated numerous efforts to improve its capabilities to efficiently and effectively support management decisionmaking and reporting, with little success. Therefore, we first designated DOD’s business systems modernization and financial management as high-risk areas in 1995, followed by its approach to business transformation in 2005. The business systems modernization high-risk area is large, complex, and integral to each of the other high-risk areas, as modernized systems are pivotal enablers to addressing long-standing transformation, financial, and other management challenges. DOD reportedly relies on approximately 3,000 business systems to support its business functions. 
For fiscal year 2007, Congress appropriated approximately $15.7 billion to DOD, and for fiscal year 2008, DOD has requested about $15.9 billion in appropriated funds to operate, maintain, and modernize these business systems and the associated infrastructures, of which approximately $11 billion was requested for the military departments. For years, DOD has attempted to modernize its many systems, and we have provided numerous recommendations to help it do so. For example, in 2001, we provided the department with a set of recommendations to help in developing and using an enterprise architecture (modernization blueprint) and establishing effective investment management controls to guide and constrain how the billions of dollars each year are spent on business systems. We also made numerous project-specific and DOD-wide recommendations aimed at ensuring that the department follows proven best practices when it acquires IT systems and services. Effective use of an enterprise architecture, or modernization blueprint, is a hallmark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both the business and technological environments. Congress has also recognized the importance of an architecture-centric approach to modernization: the E-Government Act of 2002, for example, requires the Office of Management and Budget (OMB) to oversee the development of enterprise architectures within and across agencies. In brief, an enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., a federal department) or a functional or mission area that cuts across more than one organization (e.g., financial management). 
This picture consists of snapshots of both the enterprise’s current or “As Is” environment and its target or “To Be” environment. These snapshots consist of “views,” which are one or more architecture products (models, diagrams, matrices, text, etc.) that provide logical or technical representations of the enterprise. The architecture also includes a transition or sequencing plan, based on an analysis of the gaps between the “As Is” and “To Be” environments; this plan provides a temporal road map for moving between the two that incorporates such considerations as technology opportunities, marketplace trends, fiscal and budgetary constraints, institutional system development and acquisition capabilities, the dependencies and life expectancies of both new and “legacy” (existing) systems, and the projected value of competing investments. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. A corporate approach to IT investment management is also characteristic of successful public and private organizations. Recognizing this, Congress developed and enacted the Clinger-Cohen Act in 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB developed policy for planning, budgeting, acquisition, and management of federal capital assets and issued guidance. We have also issued guidance in this area, in the form of a framework that lays out a coherent collection of key practices that when implemented in a coordinated manner, can lead an agency through a robust set of analyses and decision points that support effective IT investment management. 
This framework defines institutional structures, such as investment review boards, and associated processes, such as common investment criteria. Further, our investment management framework recognizes the importance of an enterprise architecture as a critical frame of reference for organizations making IT investment decisions. Specifically, it states that only investments that move the organization toward its target architecture, as defined by its sequencing plan, should be approved (unless a waiver is provided or a decision is made to modify the architecture). Moreover, it states that an organization’s policies and procedures should describe the relationship between its architecture and its investment decision-making authority. Our experience has shown that mature and effective management of IT investments can vastly improve government performance and accountability, and can help to avoid wasteful IT spending and lost opportunities for improvements. A major component of DOD’s business transformation strategy is its FIAR Plan, issued in December 2005 and updated semiannually, in June and September. The FIAR Plan was issued pursuant to section 376 of the National Defense Authorization Act for Fiscal Year 2006. Section 376 limited DOD’s ability to obligate or expend funds for fiscal year 2006 on financial improvement activities until the department submitted a comprehensive and integrated financial management improvement plan to congressional defense committees. Section 376 required the plan to (1) describe specific actions to be taken to correct deficiencies that impair the department’s ability to prepare timely, reliable, and complete financial management information and (2) systematically tie such actions to process and control improvements and business systems modernization efforts described in the business enterprise architecture and transition plan. 
The John Warner National Defense Authorization Act for Fiscal Year 2007 continued to limit DOD’s ability to obligate or expend funds for financial improvement until the Secretary of Defense submits a determination to the committees that the activities are consistent with the plan required by section 376. DOD intends for the FIAR Plan to provide DOD components with a road map for resolving problems affecting the accuracy, reliability, and timeliness of financial information, and obtaining clean financial statement audit opinions. As such, the FIAR Plan greatly depends on the actions taken by DOD components, including efforts to (1) develop and implement systems that are in compliance with DOD’s BEA, (2) implement sustained improvements in business processes and controls to address material weaknesses, and (3) achieve clean financial statement audit opinions. The FIAR Plan uses an incremental approach to structure its process for examining operations, diagnosing problems, planning corrective actions, and preparing for audit. Although the FIAR Plan provides estimated timeframes for achieving auditability in specific areas or components, it does not provide a specific target date for achieving a clean audit opinion on the departmentwide financial statements. Rather, the FIAR Plan recognizes that its ability to fully address DOD’s financial management weaknesses and ultimately achieve clean audit opinions will depend largely on the efforts of its components to successfully implement new business systems on time, within budget, and with the intended capability. DOD’s leaders have demonstrated a commitment to making the department’s business transformation a priority and made progress in establishing a management framework for these efforts. For example, the Deputy Secretary of Defense has overseen the establishment of various management entities and the creation of plans and tools to help guide business transformation at DOD. 
However, our analysis has shown that these efforts are largely focused on business systems modernization and that ongoing efforts across the department’s business areas are not adequately integrated. In addition, DOD lacks two crucial features that are integral to successful organizational transformation: (1) a strategic planning process that results in a comprehensive, integrated, and enterprisewide plan or interconnected plans and (2) a senior leader who is responsible and accountable for business transformation and who can provide full-time focus and sustained leadership. DOD’s senior leadership has shown commitment to transforming the department’s business operations, and DOD has taken a number of positive steps to begin this effort. Because of the impact of the department’s business operations on its warfighters, DOD recognizes the need to continue working toward transforming its business operations and providing transparency in this process. The department has devoted substantial resources and made important progress toward establishing key management structures and processes to guide business systems investment activities, particularly at the departmentwide level, in response to legislation that codified many of our prior recommendations related to DOD business systems modernization and financial management. Specifically, in the past few years, DOD has established the Defense Business Systems Management Committee, investment review boards, and the Business Transformation Agency to manage and guide business systems modernization. The Defense Business Systems Management Committee and investment review boards were statutorily required by the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 to review and approve the obligation of funds for defense business systems modernization, depending on the cost and scope of the system in review. 
The Business Transformation Agency was created to support the top-level management body, the Defense Business Systems Management Committee, and to advance DOD-wide business transformation efforts. Additionally, DOD has developed a number of tools and plans to enable these management entities to help guide business systems modernization efforts. The tools and plans include the BEA and the ETP. The ETP is currently considered the highest-level plan for DOD business transformation. According to DOD, the ETP is intended to summarize all levels of transition planning information (milestones, metrics, resource needs, and system migrations) as an integrated product for communicating and monitoring progress, resulting in a consistent framework for setting priorities and evaluating plans, programs, and investments. Our analysis of these tools, plans, and meeting minutes of the various transformational management entities shows that these efforts are largely focused on business systems modernization, and that this framework has yet to be expanded to encompass all of the elements of overall business transformation. Furthermore, DOD has not clearly defined or institutionalized in directives the interrelationships, roles and responsibilities, or accountability for the various entities that make up its management framework for overall business transformation. For example, opinions differ within DOD as to which senior governance body will serve as the primary body responsible for overall business transformation. Some officials stated that the Defense Business Systems Management Committee would serve as the senior-most governance entity, while others stated that the Deputy’s Advisory Working Group, a group that provides departmentwide strategic direction on various issues, should function as the primary decision-making body for business transformation. 
Additionally, opinions differ between the two entities regarding the definition of DOD’s key business areas, with the Defense Business Systems Management Committee and the Business Transformation Agency using a broader definition of business processes than that of the Deputy’s Advisory Working Group and its supporting organizations. Until such differences are resolved and the department institutionalizes a management framework that spans all aspects of business transformation, DOD will not be able to integrate related initiatives into a sustainable, enterprisewide approach and to resolve weaknesses in business operations. As we have testified and reported for years, a successful, integrated, departmentwide approach to addressing DOD’s overall business transformation requires two critical elements: a comprehensive, integrated, and enterprisewide plan and an individual capable of providing full-time focus and sustained leadership both within and across administrations, dedicated solely to the integration and execution of the overall business transformation effort. DOD continues to lack a comprehensive, integrated, and enterprisewide plan or set of linked plans for business transformation that is supported by a comprehensive planning process and guides and unifies its business transformation efforts. Our prior work has shown that this type of plan should help set strategic direction for overall business transformation efforts and all key business functions; prioritize initiatives and resources; and monitor progress through the establishment of performance goals, objectives, and rewards. Furthermore, an integrated business transformation plan would be instrumental in establishing investment priorities and guiding the department’s key resource decisions. 
While various plans exist for different business areas, these business-related plans are not yet integrated to include consistent reporting of goals, measures, and expectations across institutional, unit, and individual program levels. Our analysis shows that plan alignment and integration currently focus on data consistency among plans, meaning that plans are reviewed for errors and inconsistencies in reported information, but there is a lack of consistency in goals and measurements among plans. Other entities, such as the Institute for Defense Analyses, the Defense Science Board, and the Defense Business Board, have similarly reported the need for DOD to develop an enterprisewide plan to link strategies across the department for transforming all business areas. DOD officials recognize that the department does not have an integrated plan in place, although they have stated that their intention is to expand the scope of the ETP so that it becomes a more robust enterprisewide planning document and to evolve this plan into the centerpiece strategic document. DOD updates the ETP twice a year, once in March as part of its annual report to Congress and once in September, and has stated that its goal is to evolve the plan into a comprehensive, top-level planning document for all business functions. DOD released the most recent ETP update on September 28, 2007, and we will continue to monitor developments in this effort. 
The National Defense Authorization Act for Fiscal Year 2008 requires the Secretary of Defense, acting through the CMO, to develop a strategic management plan that includes detailed descriptions of such things as performance goals and measures for improving and evaluating the overall efficiency and effectiveness of the business operations of the department, key initiatives to achieve these performance goals, procedures to monitor progress, procedures to review and approve plans and budgets for changes in business operations, and procedures to oversee the development, review, and approval of all budget requests for defense business systems. While these provisions are extremely positive, their impact will depend on DOD’s implementation. We continue to believe that the key to success of any planning process is the extent to which key stakeholders participate, and whether the ultimate plan or set of plans is linked to the department’s overall strategic plan, reflects an integrated approach across the department, identifies performance goals and measures, shows clear linkage to budgets, and ultimately is used to guide business transformation. We have long advocated the importance of establishing CMO positions in government agencies, including DOD, and have previously reported and testified on the key characteristics of the position necessary for success. In our view, transforming DOD’s business operations is necessary for DOD to resolve its weaknesses in the designated high-risk areas and to ensure that the department has sustained leadership to guide its business transformation efforts. Specifically, because of the complexity and long-term nature of business transformation, DOD needs a CMO with significant authority, experience, and a term that would provide sustained leadership and the time to integrate its overall business transformation efforts. 
Without formally designating responsibility and accountability for results, DOD will face difficulties reconciling competing priorities among various organizations and prioritizing investments, which could impede the department’s progress in addressing deficiencies in key business areas. Clearly, Congress has recognized the need for executive-level attention to business transformation matters and has taken specific action in the National Defense Authorization Act for Fiscal Year 2008 to codify CMO responsibilities at a high level in the department by assigning them to the Deputy Secretary of Defense; the act also includes other provisions, such as establishing a full-time Deputy CMO and designating CMO responsibilities within the military departments. From a historical perspective, this action is unprecedented and represents a significant step toward giving business transformation high-level management attention. Now that this legislation has been enacted, it will be important for DOD to define the specific roles and responsibilities for the CMO, Deputy CMO, and the service CMOs; ensure clearly delineated reporting relationships among them and other department and service officials; foster good executive-level working relationships for maximum effectiveness; establish appropriate integration and transformation structures and processes; promote individual accountability and performance; and provide for continuity. Further, in less than 1 year, our government will undergo a change in administrations, which raises questions about continuity of effort and the sustainability of the progress that DOD has made to date. As we have said before, business transformation is a long-term process, and continuity is key to achieving true transformation. One of the challenges now facing DOD, therefore, is establishing this continuity in leadership to sustain the progress that has been made to date. 
In the interest of the department and the American taxpayers, we continue to believe the department needs a full-time CMO over the long term in order to devote the needed focus and continuity of effort to transform its key business operations and avoid billions more in waste each year. As such, we believe the CMO position should be codified as a position separate from the Deputy Secretary of Defense, in order to provide full-time attention to business transformation, and should be subject to an extended term appointment. An appointment that spans administrations would help ensure that transformation efforts are sustained through presidential transitions. Because business transformation is a long-term and complex process, a term of at least 5 to 7 years is recommended to provide sustained leadership and accountability. Moreover, the fact that the National Defense Authorization Act for Fiscal Year 2008 modifies politically appointed positions by codifying a new designation for the Deputy Secretary of Defense, creating a new Deputy Chief Management Officer of DOD, and adding a new designation to the military departments’ under secretary positions to serve as the military departments’ CMOs raises larger questions about succession planning and how the executive branch fills appointed positions, not only within DOD, but throughout the government. Currently, there is no distinction in the political appointment process among the different types of responsibilities inherent in the appointed positions. Further, the positions generally do not require any particular set of management qualifications, even though the appointees may be responsible for non-policy-related functions. 
For example, appointees could be categorized by the differences in their roles and responsibilities: those who have responsibility for various policy issues; those who have leadership responsibility for various operational and management matters; and those who require an appropriate degree of technical competence or professional certification, as well as objectivity and independence (for example, judges, the Comptroller General, and inspectors general). We have asked for a reexamination of the political appointment process to assess these distinctions as well as which appointee positions should be presidentially appointed and Senate confirmed versus presidentially appointed with advance notification to Congress. For example, those appointees who have policy leadership responsibility could be presidentially appointed and Senate confirmed, while many of those with operational and management responsibility could be presidentially appointed, with a requirement for appropriate congressional notification in advance of appointment. In addition, appropriate qualifications for selected positions, including the possibility of establishing specific statutory qualifications criteria for certain categories of appointees, could be articulated. Finally, the use of term appointments and different compensation schemes for these appointees should be reviewed. Despite noteworthy progress in establishing institutional business system and management controls, DOD is still not where it needs to be in managing its departmentwide business systems modernization. 
Until DOD fully defines and consistently implements the full range of business systems modernization management controls (institutional and program-specific), it will not be positioned to effectively and efficiently ensure that its business systems and IT services investments are the right solutions for addressing its business needs, that they are being managed to produce expected capabilities efficiently and cost effectively, and that business stakeholders are satisfied. For decades, DOD has been attempting to modernize its business systems. We designated DOD’s business systems modernization program as high risk in 1995. Since then, we have made scores of recommendations aimed at strengthening DOD’s institutional approach to modernizing its business systems and reducing the risks associated with key business system investments. In addition, in recent legislation, Congress included provisions that are consistent with our recommendations, such as in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005. In response, the department has taken, or is taking, important actions to implement both our recommendations and the legislative requirements and, as a result, has made noteworthy progress on some fronts in establishing corporate management controls, such as developing a corporate-level BEA (including an ETP), establishing corporate investment management structures and processes, increasing business system life cycle management discipline, and leveraging highly skilled staff on its largest business system investments. However, much more remains to be accomplished to address this high-risk area, particularly with respect to ensuring that effective corporate approaches and controls are extended to and employed within each of DOD’s component organizations (military departments and defense agencies). 
To this end, our recent work has highlighted challenges that the department still faces in “federating” (i.e., extending) its corporate BEA to its component organizations’ architectures, in ensuring that the scope and content of the department’s business systems transition plan address DOD’s complete portfolio of IT investments, and in establishing institutional structures and processes for selecting, controlling, and evaluating business systems investments within each component organization. Beyond this, ensuring that effective system acquisition management controls are actually implemented on each business system investment also remains a formidable challenge, as our recent reports on management weaknesses associated with individual programs have disclosed. Among other things, these reports have identified program-level weaknesses relative to architecture alignment, economic justification, performance management, requirements management, and testing. In May 2007, we reported on DOD’s efforts to address a number of provisions in the Fiscal Year 2005 National Defense Authorization Act. Among other things, we stated that the department had adopted an incremental strategy for developing and implementing its architecture, including the transition plan, which was consistent with our prior recommendation and a best practice. We further stated that DOD had addressed a number of the limitations in prior versions of its architecture. However, we also reported that additional steps were needed. Examples of these improvements and remaining issues with the BEA and the ETP are summarized below:

- The latest version of the BEA contained enterprise-level information about DOD’s “As Is” architectural environment to support business capability gap analyses. As we previously reported, such gap analyses between the “As Is” and the “To Be” environments are essential for the development of a well-defined transition plan.

- The latest version included performance metrics for the business capabilities within enterprise priority areas, including actual performance relative to performance targets that are to be met. For example, currently 26 percent of DOD assets are reported by using formats that comply with the Department of the Treasury’s United States Standard General Ledger, as compared to a target of 100 percent. However, the architecture did not describe the actual baseline performance for operational activities, such as for the “Manage Audit and Oversight of Contractor” operational activity. As we have previously reported, performance models are an essential part of any architecture, and having defined performance baselines to measure actual performance provides the means for knowing whether the intended mission value to be delivered by each business process is actually being realized.

- The latest version identified activities performed at each location/organization and indicated which organizations are or will be involved in each activity. We previously reported that prior versions did not address the locations where specified activities are to occur and that doing so is important because the cost and performance of implemented business operations and technology solutions are affected by the location and therefore need to be examined, assessed, and decided on in an enterprise context rather than in a piecemeal, systems-specific fashion.

- The March 2007 ETP continued to identify more systems and initiatives that are to fill business capability gaps and address DOD-wide and component business priorities, and it continued to provide a range of information for each system and initiative in the plan (e.g., budget information, performance metrics, and milestones). However, this version still did not include system investment information for all the defense agencies and combatant commands. Moreover, the plan did not sequence the planned investments based on a range of relevant factors, such as technology opportunities, marketplace trends, institutional system development and acquisition capabilities, legacy and new system dependencies and life expectancies, and the projected value of competing investments. According to DOD officials, they intend to address such limitations in future versions of the transition plan as part of their plans for addressing our prior recommendations. In September 2007, DOD released an updated version of the plan, which, according to DOD, continues to provide time-phased milestones, performance metrics, and statements of resource needs for new and existing systems that are part of the BEA and component architectures, and includes a schedule for terminating old systems and replacing them with newer, improved enterprise solutions.

As we have also reported, the latest version of the BEA continues to represent the thin layer of DOD-wide corporate architectural policies, capabilities, rules, and standards. Having this layer is essential to a well-defined federated architecture, but it alone does not provide the total federated family of DOD parent and subsidiary architectures for the business mission area that is needed to comply with the act. The latest version had yet to be augmented by the DOD component organizations’ subsidiary architectures, which are necessary to meet statutory requirements and the department’s goal of having a federated family of architectures. Under the department’s tiered accountability approach, the corporate BEA focuses on providing tangible outcomes for a limited set of enterprise-level (DOD-wide) priorities, while the components are to define and implement their respective component-level architectures that are aligned with the corporate BEA. 
However, we previously reported that well-defined architectures did not yet exist for the military departments, which constitute the largest members of the federation, and that the strategy the department had developed for federating its BEA needed more definition to be executable. In particular, we reported in 2006 that none of the three military departments had fully developed architecture products that describe their respective target architectural environments and developed transition plans for migrating to a target environment, and none was employing the full range of architecture management structures, processes, and controls provided for in relevant guidance. Also, we reported that the federation strategy did not address, among other things, how the component architectures would be aligned with the latest version of the BEA and how it would identify and provide for reuse of common applications and systems across the department. According to DOD, subsequent releases of the BEA will continue to reflect this federated approach and will define enforceable interfaces to ensure interoperability and information flow to support decision making at the appropriate level. To help ensure this, the Business Transformation Agency plans to have its BEA independent verification and validation contractor examine architecture federation when evaluating subsequent BEA releases. Use of an independent verification and validation agent is an architecture management best practice for identifying architecture strengths and weaknesses. Through the use of such an agent, department and congressional oversight bodies can gain information that they need to better ensure that DOD’s family of architectures and associated transition plan(s) satisfy key quality parameters, such as completeness, consistency, understandability, and usability, which the department’s annual reports have yet to include. 
We made recommendations aimed at improving the management and content of the military departments’ respective architectures; ensuring that DOD’s federated BEA provides a more sufficient frame of reference to guide and constrain DOD-wide system investments; and facilitating congressional oversight and promoting departmental accountability through the assessment of the completeness, consistency, understandability, and usability of its federated family of business mission area architectures. DOD agreed with these recommendations and has since taken some actions, such as developing an updated version of its federation strategy, which, according to DOD officials, addresses some of our recommendations. We have ongoing work for this Subcommittee on the military departments’ architecture programs, and plan to issue a report in early May 2008. The department has established and has begun to implement legislatively directed corporate investment review structures and processes needed to effectively manage its business system investments, but it has yet to do so in a manner that is fully consistent with relevant guidance, at both the corporate and component levels. To its credit, the department has, for example, established an enterprisewide investment board (the Defense Business Systems Management Committee (DBSMC)) and subordinate boards (investment review boards (IRB)) that are responsible for business systems investment governance, documented policies and procedures for ensuring that systems support ongoing and future business needs through alignment with the BEA, and assigned responsibility for ensuring that the information collected about projects meets the needs of DOD’s investment review structures and processes. However, the department has not developed the full range of project- and portfolio-level policies and procedures needed for effective investment management. 
For example, policies and procedures do not outline how the DBSMC and IRB investment review processes are to be coordinated with other decision-support processes used at DOD, such as the Joint Capabilities Integration and Development System; the Planning, Programming, Budgeting, and Execution system; and the Defense Acquisition System. Without clear linkages among these processes, inconsistent and uninformed decision making may result. Furthermore, without considering component and corporate budget constraints and opportunities, the IRBs risk making investment decisions that do not effectively consider the relative merits of various projects and systems when funding limitations exist. Examples of other limitations include not having policies and procedures for (1) specifying how the full range of cost, schedule, and benefit data accessible by the IRBs are to be used in making selection decisions; (2) providing sufficient oversight and visibility into component-level investment management activities, including component reviews of systems in operations and maintenance; (3) defining the criteria to be used for making portfolio selection decisions; (4) creating the portfolio of business system investments; (5) evaluating the performance of portfolio investments; and (6) conducting post-implementation reviews of these investments. According to best practices, adequately documenting both the policies and the associated procedures that govern how an organization manages its IT investment portfolio(s) is important because doing so provides the basis for having rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization. 
Accordingly, we made recommendations aimed at improving the department’s ability to better manage the billions of dollars it invests annually in its business systems. DOD largely agreed with these recommendations but added that while it intends to improve departmental policies and procedures for business system investments, each component is responsible for developing and executing the investment management policies and procedures needed to manage the business systems under its tier of responsibility. According to DOD’s tiered accountability approach, responsibility and accountability for business investment management is tiered, meaning that it is allocated between the DOD corporate level (i.e., the Office of the Secretary of Defense) and the components based on the amount of development/modernization funding involved and the investment’s designated tier. However, as our recent reports show, the military departments also have yet to fully develop many of the related policies and procedures needed to execute both the project-level and portfolio-level practices called for in relevant guidance for their tier of responsibility. For example, they have developed procedures for identifying and collecting information about their business systems to support investment selection and control, and assigned responsibility for ensuring that the information collected during project identification meets the needs of the investment management process. However, they have yet, for example, to fully document business systems investment policies and procedures for overseeing the management of IT projects and systems and for developing and maintaining complete business systems investment portfolio(s). Specifically, policies and procedures do not specify the processes for decision making during project oversight and do not describe how corrective actions should be taken when a project deviates or varies from its project management plan. 
Without such policies and procedures, the agency risks investing in systems that are duplicative, stovepiped, nonintegrated, and unnecessarily costly to manage, maintain, and operate. Accordingly, we made recommendations aimed at strengthening the military departments' business systems management capability, and they largely agreed with these recommendations. Department officials stated that they are aware of the absence of documented policies and procedures in certain areas of project- and portfolio-level management and are currently working on new guidance to address these areas. Until DOD fully defines departmentwide and component-level policies and procedures for both individual projects and portfolios of projects, it risks selecting and controlling these business system investments in an inconsistent, incomplete, and ad hoc manner, which in turn reduces the chances that these investments will meet mission needs in the most cost-effective manner. The department has recently undertaken several initiatives to strengthen business system investment management. For example, it has drafted and intends to shortly begin implementing a new Business Capability Lifecycle approach that is to consolidate management of business system requirements, acquisition, and compliance with architecture disciplines into a single governance process. Further, it has established an Enterprise Integration Directorate in the Business Transformation Agency to support the implementation of enterprise resource planning systems by ensuring that best practices are leveraged and BEA-related business rules and standards are adopted.
Beyond establishing the above discussed institutional modernization management controls, such as the BEA, portfolio-based investment management, and system life cycle discipline, the more formidable challenge facing DOD is how well it can implement these and other management controls on each and every business system investment and information technology services outsourcing program. In this regard, we have continued to identify program-specific weaknesses as summarized below. With respect to taking an architecture-centric and portfolio-based approach to investing in programs, for example, we recently reported that the Army’s approach for investing about $5 billion over the next several years in its General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program (LMP) did not include alignment with Army enterprise architecture or use of a portfolio-based business system investment review process. Moreover, we reported that the Army did not have reliable processes, such as an independent verification and validation function, or analyses, such as economic analyses, to support its management of these programs. We concluded that until the Army adopts a business system investment management approach that provides for reviewing groups of systems and making enterprise decisions on how these groups will collectively interoperate to provide a desired capability, it runs the risk of investing significant resources in business systems that do not provide the desired functionality and efficiency. With respect to providing DOD oversight organizations with reliable program performance and progress information, we recently reported that the Navy’s approach for investing in both system and information technology services, such as the Naval Tactical Command Support System (NTCSS) and Navy Marine Corps Intranet (NMCI), had not always met this goal. 
For NTCSS, we reported, for example, that earned value management, a means of measuring and disclosing actual performance against budget and schedule estimates and of revising those estimates based on performance to date, had not been implemented effectively. We also reported that complete and current reporting of NTCSS progress and problems in meeting cost, schedule, and performance goals had not occurred, leaving oversight entities without the information needed to mitigate risks, address problems, and take corrective action. We concluded that without this information, the Navy could not determine whether NTCSS, as it was defined and was being developed, was the right solution to meet its strategic business and technological needs. For NMCI, we reported that performance management practices had not been adequate; these include measuring progress against strategic program goals and reporting to key decision makers on that progress and on other important program aspects, such as service-level agreement satisfaction examined from multiple vantage points and customer satisfaction. We concluded that without a full and accurate picture of program performance, the risk of inadequately informing important NMCI investment management decisions was increased. Given the program-specific weaknesses that our work has revealed and continues to reveal, it is important for DOD leadership and the Congress to have clear visibility into the performance and progress of the department's major business system investments. Accordingly, we support the provisions in section 816 of the John Warner National Defense Authorization Act for Fiscal Year 2007 that provide for greater disclosure of business system investment performance to both department and congressional oversight entities, and thus increased accountability for results.
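The earned value management technique discussed above reduces to a few standard comparisons of planned, earned, and actual cost. The figures below are hypothetical, not NTCSS program data.

```python
# Standard earned value management (EVM) indicators. The dollar
# figures are hypothetical and illustrate the calculation only.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Compute the basic EVM variances and indices.

    planned_value: budgeted cost of work scheduled (BCWS)
    earned_value:  budgeted cost of work performed (BCWP)
    actual_cost:   actual cost of work performed (ACWP)
    """
    cost_variance = earned_value - actual_cost        # negative: over budget
    schedule_variance = earned_value - planned_value  # negative: behind schedule
    cpi = earned_value / actual_cost                  # cost performance index
    spi = earned_value / planned_value                # schedule performance index
    return cost_variance, schedule_variance, cpi, spi

# Hypothetical program: $8M of work planned, $6M earned, $9M spent
cv, sv, cpi, spi = evm_metrics(8.0, 6.0, 9.0)
print(cv, sv, round(cpi, 2), round(spi, 2))  # -3.0 -2.0 0.67 0.75
```

A CPI below 1.0 signals a cost overrun and an SPI below 1.0 signals schedule slippage; indicators like these feed the kind of cost and schedule disclosure that oversight entities need.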
More specifically, section 816 establishes certain reporting and oversight requirements for the acquisition of major automated information systems (MAIS) that fail to meet cost, schedule, or performance criteria. In general, a MAIS is a major DOD IT program that is not embedded in a weapon system (e.g., a business system investment). Going forward, the challenge facing the department will be to ensure that these legislative provisions are effectively implemented. To the extent that they are, DOD business systems modernization transparency, oversight, accountability, and results should improve. We currently have ongoing work for this subcommittee examining the military departments' implementation of a broad range of acquisition management controls, such as architectural alignment, economic justification, and requirements management, on selected business systems at the Departments of the Air Force and Navy. DOD has taken steps toward developing and implementing a framework for addressing the department's long-standing financial management weaknesses and improving its capability to provide timely, reliable, and relevant financial information for analysis, decisionmaking, and reporting, a key defense transformation priority. Specifically, this framework, which is discussed in both the department's ETP and the FIAR Plan, is intended to define and put into practice a standard DOD-wide financial management data structure as well as enterprise-level capabilities to facilitate reporting and comparison of financial data across the department. While these efforts should improve the consistency and comparability of DOD's financial reports, a great deal of work remains before the financial management capabilities of DOD and its components are transformed and the department achieves financial visibility.
Examples of work remaining that must be completed as part of DOD component efforts to support the FIAR Plan and ETP include data cleansing; improvements in current policies, processes, procedures, and controls; and implementation of integrated systems. We also note that DOD has other financial management initiatives underway, including efforts to move toward performance-based budgeting and to continually improve the reliability of Global War on Terrorism cost reporting. In 2007, DOD introduced refinements to its approach for achieving financial statement auditability. While these refinements reflect a clearer understanding of the importance of sustaining financial management improvements and of the department's reliance on the successful completion of component (including military services and defense agencies) and subordinate initiatives, they are not without risk. Given the department's dependency on the efforts of its components to address DOD's financial management weaknesses, it is imperative that DOD ensure the sufficiency and reliability of (1) corrective actions taken by DOD components to support management attestations as to the reliability of reported financial information; (2) activities taken by DOD components and other initiatives to ensure that corrective actions are directed at supporting improved financial visibility capabilities, beyond providing information primarily for financial statement reporting, and are sustained until a financial statement audit can be performed; and (3) accomplishments and progress reported by DOD components and initiatives. Successful transformation of DOD's financial operations will require a multifaceted, cross-organizational approach that addresses the contribution and alignment of key elements, including strategic plans, people, processes, and technology. DOD uses two key plans, the DOD ETP and the FIAR Plan, to guide transformation of its financial management operations.
The ETP focuses on delivering improved capabilities, including financial management, through the deployment of system solutions that comply with DOD and component enterprise architectures. The FIAR Plan focuses on implementing audit-ready financial processes and practices through ongoing and planned efforts to address policy issues, modify financial and business processes, strengthen internal controls, and ensure that new system solutions support the preparation and reporting of auditable financial statements. Both plans recognize that while successful enterprise resource planning system implementations are catalysts for changing organizational structures, improving workflow through business process reengineering, strengthening internal controls, and resolving material weaknesses, improvements can only be achieved through the involvement of business process owners, including financial managers, in defining and articulating their operational needs and requirements and incorporating them, as appropriate, into DOD and component business enterprise architectures. DOD officials have acknowledged that integration between the two initiatives is a continually evolving process. For example, the June 2006 FIAR Plan update stated that some of the department’s initial subordinate plans included only limited integration with Business Transformation Agency initiatives and solutions. According to DOD officials, the use of end-to-end business processes (as provided by its segment approach) to identify and address financial management deficiencies will lead to further integration between the FIAR Plan and ETP. Two key transformation efforts that reflect an integrated approach toward improving DOD’s financial management capabilities are the Standard Financial Information Structure (SFIS) and the Business Enterprise Information System (BEIS), both of which are discussed in DOD’s ETP and FIAR Plan. SFIS. 
Key limitations in the department’s ability to consistently provide timely, reliable, accurate, and relevant information for analysis, decisionmaking, and reporting are (1) its lack of a standard financial management data structure and (2) a reliance on numerous nonautomated data transfers (manual data calls) to accumulate and report financial transactions. In fiscal year 2006, DOD took an important first step toward addressing these weaknesses through publication of its SFIS Phase I data elements and their subsequent incorporation into the DOD BEA. In March 2007, the department issued a checklist for use by DOD components in evaluating their systems for SFIS compliance. SFIS is intended to provide uniformity throughout DOD in reporting on the results of operations, allowing for greater comparability of information. While the first phase of SFIS was focused on financial statement generation, subsequent SFIS phases are intended to provide a standardized financial information structure to facilitate improved cost accounting, analysis, and reporting. According to DOD officials, the department has adopted a two-tiered approach to implement the SFIS data structure. Furthermore, they stated that SFIS is a mandatory data structure that will be embedded into every new financial management system, including enterprise resource planning systems, such as the Army’s General Fund Enterprise Business System and the Air Force’s Defense Enterprise Accounting and Management System (DEAMS). Further, recognizing that many of the current accounting systems will be replaced in the future, the department will utilize a common crosswalk to standardize the data reported by the legacy systems. BEIS. 
A second important step that the department took toward improving its capability to provide consistent and reliable financial information for decisionmaking and reporting was to initiate efforts to develop a DOD-level suite of services providing financial reporting, cash reporting, and reconciliation services. As an interim solution, financial information obtained from legacy component systems will be cross-walked from a component's data structure into the SFIS format within BEIS. Newer or target systems, such as DEAMS, will have SFIS embedded so that the data provided to BEIS will already be in the SFIS format. According to DOD's September 2007 FIAR Plan update, the department prepared financial statement reports using SFIS data standards for the Marine Corps general and working capital funds, the Air Force general and working capital funds, and the Navy working capital funds. The department plans to implement SFIS-compliant reporting for the Army working capital funds, the Navy general funds, and its defense agencies in fiscal year 2008. The development and implementation of SFIS and BEIS are positive steps toward standardizing the department's data structure and expanding its capability to access and utilize data for analysis, management decisionmaking, and reporting, including special reports related to the Global War on Terrorism. However, it is important to keep in mind that a great deal of work remains. In particular, data cleansing; improvements in policies, processes, procedures, and controls; and successful enterprise resource planning system implementations are needed before DOD components and the department fully achieve financial visibility. Our previous reviews of DOD system development efforts have identified instances in which the department faced difficulty in implementing systems on time, within budget, and with the intended capability.
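The interim crosswalk described above is, conceptually, a field-level mapping from each legacy data structure into the standard one. The sketch below is hypothetical: the legacy field names and standard element names are invented for illustration and are not actual SFIS data elements.

```python
# Hypothetical crosswalk from a legacy component data structure into
# SFIS-style standard elements. All field names are illustrative,
# not actual SFIS data elements.

LEGACY_TO_SFIS = {
    "acct_cd": "ussgl_account",      # legacy account code -> standard ledger account
    "org_id":  "organization_id",    # component organization -> standard identifier
    "fy":      "fiscal_year",
    "amt":     "transaction_amount",
}

def to_sfis(legacy_record: dict) -> dict:
    """Re-key a legacy transaction record into the standard structure,
    dropping fields that have no standard equivalent."""
    return {
        LEGACY_TO_SFIS[k]: v
        for k, v in legacy_record.items()
        if k in LEGACY_TO_SFIS
    }

record = {"acct_cd": "1010", "org_id": "AF-02", "fy": 2008,
          "amt": 1250.00, "local_note": "x"}
print(to_sfis(record))
# {'ussgl_account': '1010', 'organization_id': 'AF-02',
#  'fiscal_year': 2008, 'transaction_amount': 1250.0}
```

The mapping itself is straightforward; the department's difficulty has been less in defining such mappings than in cleansing legacy data and implementing the systems that apply them on time, within budget, and with the intended capability.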
For example, as previously noted, the Army continues to struggle in its efforts to ensure that LMP will provide its intended capabilities. In particular, we reported that LMP would not provide the intended capabilities and benefits because of inadequate requirements management and system testing. Further, we found that the Army had not put into place an effective management process to help ensure that the problems with the system were resolved. Until the Army has completed action on our recommendations, it will continue to risk investing billions of dollars in business systems that do not provide the desired functionality or efficiency. In fiscal year 2007, DOD introduced key refinements to its strategy for achieving financial statement auditability. These refinements include the following:

- Requesting audits of entire financial statements rather than attempting to build upon audits of individual financial statement line items.

- Focusing on improvements in end-to-end business processes, or segments, that underlie the amounts reported on the financial statements.

- Using audit readiness validations and annual verification reviews of segment improvements, rather than financial statement line item audits, to ensure sustainability of corrective actions and improvements.

- Forming a working group to begin auditability risk assessments of new financial and mixed systems, such as enterprise resource planning systems, at key decision points in their development and deployment life cycle to ensure that the systems include the processes and internal controls necessary to support repeatable production of auditable financial statements.

To begin implementing its refined strategy for achieving financial statement auditability, DOD modified its business rules for achieving audit readiness to reflect the new approach.
Recognizing that a period of time may pass before an entity's financial statements are ready for audit, the revised business rules provide for an independent validation of improvements, with an emphasis on sustaining improvements made through corrective actions. Sustainability of improvements will be verified by DOD components through annual internal control reviews, using OMB Circular No. A-123, Appendix A, as guidance. The department's move to a segment approach provides greater flexibility in assessing its business processes and in taking corrective actions, if necessary, within defined areas or end-to-end business processes that individually or collectively support financial accounting and reporting. However, DOD officials recognize that additional guidance is needed in several key areas. For example, DOD has acknowledged that it needs to establish a process to ensure the sufficiency of segment work in providing, individually or collectively, a basis for asserting the reliability of reported financial statement information. DOD officials indicated that they intend to provide additional guidance in this area by March 2008. Additionally, DOD officials acknowledged that a process is needed to ensure that DOD's annual internal control reviews, including its OMB Circular No. A-123, Appendix A, reviews, are properly identifying and reporting on issues, and that appropriate corrective actions are taken when issues are identified during these reviews. To its credit, the department initiated the Check It Campaign in July 2006 to raise awareness throughout the department of the importance of effective internal controls. Ultimately, DOD's success in addressing its financial management deficiencies, resolving the long-standing weaknesses that have kept it on GAO's high-risk list for financial management, and finally achieving financial visibility will depend largely on how well its transformation efforts are integrated throughout the department.
Both the ETP and FIAR Plan recognize that successful transformation of DOD’s business operations, including financial management, largely depends on successful implementation of enterprise resource planning systems and processes and other improvements occurring within DOD components. Such dependency, however, is not without risk. To its credit, DOD recently established a working group to begin auditability risk assessments of new financial and mixed systems, such as enterprise resource planning systems. The purpose of these planned assessments is to identify auditability risks that, if not mitigated during the development of the system, may impede the component’s ability to achieve clean audit opinions on its financial statements. Furthermore, the department has implemented and continually expands its use of a Web-based tool, referred to as the FIAR Planning Tool, to facilitate management, oversight, and reporting of departmental and component efforts. According to DOD officials, the tool is used to monitor progress toward achieving critical milestones identified for each focus area in component initiatives, such as financial improvement plans or accountability improvement plans, or departmentwide initiatives. Given that the FIAR Planning Tool is used to report results to OMB through quarterly update reports to the President’s Management Agenda and to update accomplishments in the FIAR Plan, it is critical that the FIAR Directorate ensure the reliability of reported progress. During a recent meeting with DOD officials, we discussed several areas where FIAR Plan reporting appeared incomplete. Our observations included the following. FIAR Plan updates, including the 2007 update, do not mention or include the results of audit reports and studies that may have occurred within an update period and how, if at all, any issues identified were addressed. 
For example, the DOD Inspector General has issued reports in recent years raising concerns about the reliability of the military equipment valuation methodology and the usefulness of the valuation results for purposes beyond financial statement reporting. In 2007, the Air Force Audit Agency also issued reports expressing concerns about the reliability of the military equipment values reported by the Air Force. These audit reports, and any actions taken in response to them, have not been mentioned to date in updates to the FIAR Plan. Further, although both the June and September 2006 FIAR Plan updates report that an internal verification and validation (IV&V) study was completed to test the military equipment valuation methodology, including the completeness and existence of military equipment assets, neither of these reports disclosed the results of the review or any corrective actions taken. The absence of relevant audit reports or study results may mislead a reader into believing that no issues have been identified that, if not addressed, may adversely affect the results of a particular effort, such as the department's military equipment valuation initiative. For example, the IV&V study identified several improvements that were needed, in varying degrees, at all the military services and the Special Operations Command in the following areas: (1) documentation of waivers; (2) documentation of support for authorization, receipt, and payment; (3) estimated useful life; and (4) existence of the asset. In its conclusion, the IV&V study reported that if the weaknesses identified by the review are pervasive throughout DOD, the department will face a significant challenge in establishing control over its resources and properly recording its military equipment assets for a financial statement audit.
Recognition of audits and other reviews in the FIAR and subordinate plans would add integrity to reported accomplishments and further demonstrate the department's commitment to transforming its financial management capabilities and achieving financial visibility. While the FIAR Plan clearly identifies its dependency on component efforts to achieve financial management improvements and clean financial statement audit opinions, it does not provide a clear understanding of further links or dependencies among its subordinate plans (such as the financial improvement plans and accountability improvement plans) and departmentwide initiatives, such as the military equipment valuation effort. For example, while the 2007 FIAR Plan updates indicate that the Army, Navy, and Air Force developed accountability improvement plans that detail the steps required for asserting audit readiness on military equipment, they do not clearly articulate the relationship of these plans to other plans, such as component financial improvement plans or the department's plan to value military equipment. Clearly linking individual plans and initiatives is important to ensuring that efforts occurring at all levels within the department are directed at achieving improved financial visibility in the most efficient and effective manner. While we are encouraged by DOD's efforts to implement capabilities that improve the comparability of reported financial information, a significant amount of work remains before the department or its components have the capability to provide timely, reliable, and relevant information for all management operations and reporting. We caution the department that, going forward, it will be important to ensure that its financial management modernization efforts do not become compliance-driven activities that result in little to no benefit to DOD managers.
It is critical that the department ensure that its oversight, management, implementation, and reporting of transformation efforts and accomplishments are focused on the implementation of sustained improvements in DOD's capability to provide immediate access to accurate and reliable financial information (planning, programming, budgeting, accounting, and cost information) in support of financial accountability and efficient and effective decision making throughout the department.

Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be happy to answer any questions you may have at this time. For questions regarding this testimony, please contact Sharon L. Pickup at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.

Organizational Transformation: Implementing Chief Operating Officer/Chief Management Officer Positions in Federal Agencies. GAO-08-322T. Washington, D.C.: December 13, 2007.

Business Systems Modernization: Air Force Needs to Fully Define Policies and Procedures for Institutionally Managing Investments. GAO-08-52. Washington, D.C.: October 31, 2007.

Business Systems Modernization: Department of the Navy Needs to Establish Management Structure and Fully Define Policies and Procedures for Institutionally Managing Investments. GAO-08-53. Washington, D.C.: October 31, 2007.

Defense Business Transformation: A Full-time Chief Management Officer with a Term Appointment Is Needed at DOD to Maintain Continuity of Effort and Achieve Sustainable Success. GAO-08-132T. Washington, D.C.: October 16, 2007.

Defense Business Transformation: Achieving Success Requires a Chief Management Officer to Provide Focus and Sustained Leadership. GAO-07-1072. Washington, D.C.: September 5, 2007.

DOD Business Transformation: Lack of an Integrated Strategy Puts the Army's Asset Visibility System Investments at Risk. GAO-07-860. Washington, D.C.: July 27, 2007.

DOD Business Systems Modernization: Progress Continues to Be Made in Establishing Corporate Management Controls, but Further Steps Are Needed. GAO-07-733. Washington, D.C.: May 14, 2007.

Business Systems Modernization: DOD Needs to Fully Define Policies and Procedures for Institutionally Managing Investments. GAO-07-538. Washington, D.C.: May 11, 2007.

High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.

Information Technology: DOD Needs to Ensure That Navy Marine Corps Intranet Program Is Meeting Goals and Satisfying Customers. GAO-07-51. Washington, D.C.: December 8, 2006.

Defense Business Transformation: A Comprehensive Plan, Integrated Efforts, and Sustained Leadership Are Needed to Assure Success. GAO-07-229T. Washington, D.C.: November 16, 2006.

Enterprise Architecture: Leadership Remains Key to Establishing and Leveraging Architectures for Organizational Transformation. GAO-06-831. Washington, D.C.: August 14, 2006.

Department of Defense: Sustained Leadership Is Critical to Effective Financial and Business Management Transformation. GAO-06-1006T. Washington, D.C.: August 3, 2006.

Business Systems Modernization: DOD Continues to Improve Institutional Approach, but Further Steps Needed. GAO-06-658. Washington, D.C.: May 15, 2006.

DOD Systems Modernization: Planned Investment in the Navy Tactical Command Support System Needs to Be Reassessed. GAO-06-215. Washington, D.C.: December 5, 2005.

DOD Business Systems Modernization: Important Progress Made in Establishing Foundational Architecture Products and Investment Management Practices, but Much Work Remains. GAO-06-219. Washington, D.C.: November 23, 2005.

Defense Management: Additional Actions Needed to Enhance DOD's Risk-Based Approach for Making Resource Decisions. GAO-06-13. Washington, D.C.: November 15, 2005.

Defense Management: Foundational Steps Being Taken to Manage DOD Business Systems Modernization, but Much Remains to Be Accomplished to Effect True Business Transformation. GAO-06-234T. Washington, D.C.: November 9, 2005.

21st Century Challenges: Transforming Government to Meet Current and Emerging Challenges. GAO-05-830T. Washington, D.C.: July 13, 2005.

DOD Business Transformation: Sustained Leadership Needed to Address Long-standing Financial and Business Management Problems. GAO-05-723T. Washington, D.C.: June 8, 2005.

Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005.

Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. GAO-04-394G. Washington, D.C.: March 2004.

Information Technology: A Framework for Assessing and Improving Enterprise Architecture Management (Version 1.1). GAO-03-584G. Washington, D.C.: April 2003.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) has stewardship over an unprecedented amount of taxpayer money--with about $546 billion in discretionary budget authority provided thus far in fiscal year 2008, and total reported obligations of about $492 billion to support ongoing operations and activities related to the Global War on Terrorism from September 11, 2001, through September 2007. Meanwhile, DOD is solely responsible for 8 high-risk areas identified by GAO and shares responsibility for another 7 high-risk areas. GAO designated DOD's approach to business transformation as high risk in 2005. DOD's business systems modernization and financial management have appeared on the list since 1995. Deficiencies in these areas adversely affect DOD's ability, among other things, to assess resource requirements; control costs; ensure accountability; measure performance; prevent waste, fraud, and abuse; and address pressing management issues. Based on previously issued GAO reports and testimonies, this testimony focuses on the progress DOD has made and the challenges that remain with respect to overall business transformation, business systems modernization, and financial management capabilities improvements. GAO has made recommendations to improve DOD's business transformation efforts and DOD's institutional and program-specific management controls. DOD has largely agreed with these recommendations. DOD's senior leadership has shown a commitment to transforming DOD's business operations and taken steps that have yielded progress in many respects, especially during the past two years. To sustain its efforts, DOD still needs (1) a strategic planning process and a comprehensive, integrated, and enterprisewide plan or set of plans to guide transformation and (2) a full-time, term-based, senior management official to provide focused and sustained leadership. 
Congress has clearly recognized the need for executive-level attention and, through the National Defense Authorization Act for fiscal year 2008, has designated the Deputy Secretary of Defense as DOD's Chief Management Officer (CMO), created a Deputy CMO position, and designated a CMO for each military department. Among other things, DOD will need to clearly define roles and responsibilities, accountability, and performance expectations. However, DOD still faces the challenge of ensuring that its CMO can give the position full-time focus and continuity of leadership. In that respect, GAO continues to believe the CMO should be codified in statute as a separate position with a term of office long enough to span administrations. To comply with legislative requirements aimed at improving business systems modernization, DOD continues to update its business enterprise architecture and has established and begun to implement corporate investment review structures and processes. However, DOD has not achieved the full intent of the legislative requirements. The business enterprise architecture updates are not complete enough to effectively and efficiently guide and constrain business system investments across all levels of DOD. Although DOD issued a strategy for "federating," or extending, its architecture to the DOD components, the components' architecture programs are not yet mature enough to support this federation. With respect to investment review structures and processes, DOD lacks policies and procedures for aligning investment selection decisions with relevant corporate- and component-level guidance. For example, DOD's business systems investment policies and procedures do not link investment selection decisions with investment funding decisions. Meanwhile, DOD components continue to invest billions of dollars in thousands of new and existing business system programs.
DOD has taken steps towards developing and implementing a framework for improving its capability to provide timely, reliable, and relevant financial information for analysis, decision making, and reporting. Specifically, DOD is defining and implementing a standard DOD-wide financial management data structure and enterprise-level capabilities to facilitate reporting and comparison of financial data across DOD. In 2007, DOD refined its strategy for achieving auditable financial statements, emphasizing verification and validation of sustained improvements and assessments of new systems to identify risks that, if not mitigated, may impede the achievement of clean financial statement audit opinions. While these efforts may improve the consistency and comparability of DOD's financial reports, a great deal of work remains to ensure the reliability of the underlying data before financial management transformation is achieved.
As table 1 shows, at the end of fiscal year 2012, USPS had $96.1 billion in unfunded benefits and other liabilities: $63.5 billion in unfunded liabilities for retiree health and pension benefits, as well as $15.0 billion in outstanding debt—the statutory limit—and $17.6 billion in workers’ compensation liabilities. These liabilities have become a large and growing financial burden, increasing from 83 percent of USPS revenues in fiscal year 2007 to 147 percent of revenues in fiscal year 2012. USPS’s dire financial condition makes paying for these liabilities highly challenging. In the short term, USPS is in a financial crisis, has reached its $15 billion statutory borrowing limit, and lacks liquidity to fund needed capital investment. USPS has not made legally required payments of $11.1 billion to prefund retiree health benefits and does not expect to make its required prefunding payment of $5.6 billion due at the end of this month. In the long term, USPS will be challenged to pay for its liabilities on a smaller base of profitable First-Class Mail. First-Class Mail volume has declined 33 percent since it peaked in fiscal year 2001, and USPS projects this volume will continue declining through fiscal year 2020. The Postal Accountability and Enhancement Act (PAEA) established the Postal Service Retiree Health Benefits Fund (PSRHBF) and required USPS to begin prefunding health benefits for its current and future postal retirees, with annual payments of $5.4 billion to $5.8 billion from fiscal years 2007 through 2016. Subsequent USPS payments are required to be based on an actuarial approach to funding through fiscal year 2056 and beyond. USPS has made a total of $17.9 billion in prefunding payments since the prefunding schedule began in fiscal year 2007, the most recent being a $5.5 billion payment in fiscal year 2010; a total of $33.9 billion in required prefunding payments remains from the 10 years of fixed payments. 
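The component amounts above can be reconciled arithmetically. The short sketch below is an illustrative bookkeeping check, not an official computation; every figure in it is taken from this statement and table 1:

```python
# Reconciling liability and prefunding figures cited in this statement
# (all amounts in billions of dollars; every figure is from the testimony).

retiree_benefits_unfunded = 63.5   # unfunded retiree health and pension liabilities
outstanding_debt = 15.0            # debt at the statutory borrowing limit
workers_compensation = 17.6        # workers' compensation liabilities

total = retiree_benefits_unfunded + outstanding_debt + workers_compensation
print(round(total, 1))  # 96.1 -- the total unfunded benefits and other liabilities

# Missed prefunding payments: the FY2011 payment (5.5, deferred to Aug. 2012)
# plus the FY2012 payment (5.6)
missed = 5.5 + 5.6
print(round(missed, 1))  # 11.1 -- the legally required payments USPS has not made

# Fixed-payment schedule, FY2007-FY2016: amounts paid plus amounts remaining
schedule_total = 17.9 + 33.9
print(round(schedule_total, 1))  # 51.8
```

As the last line shows, the 10 years of fixed payments total $51.8 billion, of which roughly a third has been paid.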
USPS’s $5.5 billion retiree health benefit payment requirement that was originally due at the end of fiscal year 2011 was delayed until August 1, 2012. USPS missed that payment, as well as the $5.6 billion payment that was due by September 30, 2012. USPS has reported that due to a low level of available cash, it will be unable to make its $5.6 billion payment due by September 30, 2013. The required prefunding payments that USPS does not make are reported as outstanding liabilities in USPS’s financial statements. Looking forward, USPS has reported that low levels of liquidity will continue to exist, absent legislative actions by Congress. We have previously reported that Congress needs to modify USPS’s retiree health benefit payments in a fiscally responsible manner. We also stated that USPS should prefund any unfunded retiree health benefit liability to the maximum extent that its finances permit. Deferring funding for postal retiree health benefits could increase costs for future ratepayers and increase the risk that USPS may be unable to pay for these costs. Key considerations for Congress regarding the requirements for funding postal retiree health benefits include the following: Trade-offs regarding USPS’s current financial condition and long-term prospects: One of the rationales for prefunding retirement benefits is to protect the future viability of the organization by paying for retirement benefits as they are being earned, rather than after employees have already retired. However, USPS currently lacks liquidity and postal costs would need to decrease, or postal revenues to increase, or both, to fund required payments for prefunding retiree health benefits. To the extent prefunding payments are postponed, larger payments will be required later, when they likely would be supported by less First-Class Mail volume. No prefunding approach will be viable unless USPS can make the required payments. 
Fixed versus actuarially determined payments: The retiree health benefits payment schedule established under PAEA was significantly frontloaded, with total payment requirements through fiscal year 2016 that were significantly in excess of what actuarially determined amounts would be. Possible consequences to USPS employees and retirees: Funded benefits protect against an inability to make payments later, make promised benefits less vulnerable to cuts, and reduce the risk that employee pay and benefits may not be sustainable and could be reduced. Allocating costs between current and future ratepayers: Deferring payments until later can have the effect of passing costs from current to future postal ratepayers. One of the rationales for prefunding is for current ratepayers to pay for retiree health benefits being earned by current employees. The appropriate allocation of costs among different generations of postal ratepayers is complicated by what might be called the “legacy” unfunded liability that was not paid for in prior years. Funding targets: We have expressed concern about a proposed 80 percent funding target for postal retiree health benefits that would have the effect of carrying a permanent unfunded liability equal to roughly 20 percent of USPS’s liability, which could be a significant amount. If an 80 percent funding target were implemented because of concerns about USPS’s ability to achieve a 100 percent target level within a particular time frame, an additional policy option to consider could include a schedule to achieve 100 percent funding in a subsequent time period after the 80 percent level is achieved. We recently reported on USPS’s proposal to create a postal health care plan outside of FEHBP. 
This proposal would offer health care coverage to postal employees and retirees (about 1 million of whom currently participate in FEHBP). Under this proposal, initial benefits and postal employees’ and retirees’ share of premiums would first be established by USPS and subsequently, for union-covered employees, in collective bargaining. USPS’s proposed health care plan is designed to increase postal retirees’ enrollment in Medicare as well as take advantage of Medicare subsidies for employer-based prescription drug plans. Specifically, it is designed so that all Medicare-eligible retirees would enroll in Medicare Parts A and B, with Medicare acting as the primary insurer for those enrollees and the USPS plan paying costs above what Medicare would cover for current retirees. USPS’s proposed plan would include prescription drug benefits that qualify it to receive a federal subsidy and drug discounts under Medicare Part D. Under one USPS option for funding this plan, USPS proposes that it would be authorized to invest plan assets without the approval of the Secretary of the Treasury in non-Treasury securities, such as stocks and bonds, as well as commodities, foreign currency, and real property. (See app. I for more information on the USPS proposed health plan.) We have reported that USPS would likely realize large financial gains from its proposed health care plan. According to USPS’s estimates, USPS would reduce its health benefits expenses and eliminate its unfunded retiree health benefit liability, primarily by increasing the use of Medicare by postal retirees once they are eligible for Medicare, generally beginning at age 65. USPS estimated that its plan would reduce its retiree health benefit liability by $54.6 billion, with the increased use of Medicare accounting for most of this reduction. 
USPS also estimated that the plan would reduce its total annual required health care payments by $7.8 billion in the first year of implementation and by $33.2 billion over the first 5 years of implementation. In addition, some of the elements of the proposal—notably an option to allow USPS to invest health plan funds outside of Treasury securities, such as in stocks, commodities, and foreign currency—would add uncertainties that could reduce funds available for its employees’ and retirees’ future health care. We have reported that under USPS’s health care plan as designed for year one, postal employees and retirees would have coverage for a similar package of services as under selected FEHBP plans, and the level of coverage would be similar for many services, with some exceptions (e.g., services received outside of USPS’s approved network). We estimated that, had the proposed USPS plan been implemented in 2013, most employees and retirees would have had similar or lower premiums compared to selected FEHBP plans; but total costs—premiums and costs for the use of care—could be higher for some. We have also reported that withdrawing the approximately 1 million USPS employees and retirees from FEHBP, which would be required under USPS’s proposal, would reduce FEHBP’s enrollment by an estimated 25 percent. Despite the significant change in enrollment, most nonpostal enrollees would likely not be affected by a USPS withdrawal beyond what selected FEHBP plan representatives expect to be small increases or decreases in premiums. However, USPS’s withdrawal could lead the small number of FEHBP plans with primarily postal enrollment to withdraw from the program. For example, if the 4 plans with 70 percent or more postal enrollment discontinue participation in FEHBP under a USPS withdrawal, an estimated 1 percent (about 29,000) of the approximately 3 million nonpostal enrollees in FEHBP would need to select a new health plan. 
As Congress considers proposals for a USPS health care plan, it should weigh the impact on Medicare as well as other issues, including establishing safeguards for plan assets and ensuring FEHBP-comparable protections for plan participants. The primary policy decision for Congress to make with respect to such proposals is whether to increase eligible postal retirees’ use of Medicare. USPS projects that its plan would increase Medicare spending by an average of $1.3 billion per year over the first 5 years of its plan—about 0.2 percent of Medicare’s annual costs of more than $550 billion. Our report also noted that Medicare is on a fiscally unsustainable path over the long term. A USPS health plan that would add costs to Medicare would have to be weighed alongside the fiscal pressure already faced by Medicare. If Congress decides to move forward with USPS’s proposed health plan as part of a broader reform package, we have identified other important policy issues that also should be addressed. Specifically, Congress should consider: safeguards for USPS health plan fund assets by placing appropriate constraints on their asset allocations (for example, limiting investments to Treasury securities, including inflation-indexed Treasury securities; or, to the extent that more risky assets are permitted, using a conservative approach to setting the prefunding discount rate); standards for the disposition of any surplus health plan assets that reduce the risk of a new unfunded liability emerging in the future (for example, standards for amortizing any surplus to mirror the amortization of any unfunded liability); designation of an independent entity responsible for selecting actuarial assumptions used to determine the health plan’s funded status; and protections for postal employees and retirees that are comparable to those under FEHBP, including a formula for retirees’ contribution to health costs. 
USPS had an estimated FERS surplus of $3.0 billion at the end of fiscal year 2012, according to the Office of Personnel Management (OPM), and OPM will be calculating an updated estimate for the end of fiscal year 2013. USPS has reported an estimate that its FERS surplus would have been substantially larger if its FERS liability had been estimated using postal-specific demographic and pay increase assumptions. USPS has stated that it believes, as a matter of equity, its FERS liability should be estimated using USPS-specific pay increase and demographic assumptions instead of government-wide pay increase and demographic assumptions. As we have reported, legislation would be needed to return to USPS any FERS surplus. As we also have reported, we would support a remedy to the asymmetric treatment of FERS surpluses and deficits under current law. One conservative approach to permit USPS to access any FERS surplus would be to use it to reduce USPS’s annual FERS contribution by amortizing the surplus over 30 years (which would mirror the legally required treatment of deficits). A second approach would be to reduce USPS’s annual FERS contribution by offsetting it against the full amount of surplus each year until the surplus is used up; this would be comparable to what occurs for private-sector pension plans. We have previously suggested that any return of the entire surplus all at once should be done with care. A one-time-only return of the entire surplus should be considered as a one-time exigent action only as part of a larger package of reforms and restructurings. Otherwise, returning surpluses whenever they develop would likely eventually result in an unfunded liability. Key issues for Congress to consider in connection with the potential USPS FERS surplus include: Fluctuations in estimated liabilities: Estimates of liabilities for retirement benefits contain a significant degree of uncertainty and can change over time. 
Whether to use USPS-specific assumptions to measure USPS’s FERS liability: We support using the most accurate numbers possible. If USPS-specific assumptions are used in estimating USPS’s FERS liability, we suggest that they also be used in estimating USPS’s CSRS and retiree health liabilities. We suggest that if USPS-specific assumptions are to be used, the assumptions be recommended by an independent body (such as OPM’s Board of Actuaries). (Under current law, USPS must fund toward any FERS deficit but does not benefit from any FERS surplus. See GAO-12-146, p. 37.) Other liabilities: While USPS had an estimated FERS surplus of $3.0 billion at the end of fiscal year 2012, it had an estimated CSRS deficit of $18.7 billion at the same time. Both estimates could change if USPS-specific assumptions are used. In addition, as noted earlier in table 1, USPS also has an unfunded liability for retiree health benefits and liabilities for workers’ compensation and its debt to Treasury, for a total of $96.1 billion of liabilities and unfunded benefit liabilities. __________________________________________________________ In closing, we continue to believe that a comprehensive package of legislative actions is needed so that USPS can achieve financial viability and assure adequate benefits funding for more than 1 million postal employees and retirees. Chairman Carper, Ranking Member Coburn, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this statement, please contact Frank Todisco, Chief Actuary, FSA, MAAA, EA, Applied Research and Methods, at (202) 512-2834 or [email protected]; or John E. 
Dicken, Director, Health Care, at (202) 512-7114 or [email protected]. Mr. Todisco meets the qualification standards of the American Academy of Actuaries to render the actuarial opinions contained in this testimony regarding the measurement issues, funding issues, and risks associated with pension and retiree health care obligations. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contacts named above, Lorelei St. James, Director, Physical Infrastructure Issues; Teresa Anderson; Susan Barnidge; Kenneth John; Jaclyn Nelson; Kristi Peterson; Steve Robblee; Friendly Vang-Johnson; Betsey Ward-Jenks; and Crystal Wesco made important contributions to this statement.
Appendix I: Key Elements of the Proposed U.S. Postal Service Health Care Plan
The U.S. Postal Service’s (USPS) plan is designed so that all Medicare-eligible retirees would enroll in Medicare Parts A and B. Those who do not enroll would have their benefits reduced by the amount Medicare would have paid had the individual enrolled. As is the case under the Federal Employees Health Benefits Program (FEHBP), Medicare would serve as the primary payer of health care costs, and for existing retirees, the USPS plan would cover 100 percent of any costs above the amounts covered by Medicare Parts A and B. USPS would design the prescription drug benefits under its plan to qualify as an Employer Group Waiver Plan (EGWP) under Medicare Part D. This would qualify USPS to receive—for those eligible for Medicare—a federal subsidy from the Medicare program for drug costs, a discount from pharmaceutical companies for brand name drugs, and federal coverage for drug costs above a catastrophic limit.
Introduction of two new tiers of coverage: The proposed plan would include four coverage tiers: self only, self and spouse, self and children, and self and family. 
Premiums for each tier would be set according to the claims experience of the group. USPS’s proposed addition of two tiers (self and spouse and self and children) to the options available under FEHBP would be to reflect the various stages of family status that participants may experience over the course of their career and retirement. USPS would contract with a single vendor to administer its plan; that vendor would negotiate with providers for payment rates, process claims, and provide wellness and disease management programs. For employees retiring a year or more after the USPS plan becomes effective (i.e., “future retirees”), Medicare would be the primary insurer once the enrollee becomes eligible, and the USPS plan would cover costs above what Medicare pays. However, future retirees would be required to meet the USPS plan deductible and pay coinsurance. Under one option of USPS’s proposal (which USPS refers to as “Scenario 2”), its health plan would be financed by a newly created fund called the Health Benefits Fund (HBF) that would include: (1) the entire Postal Service Retiree Health Benefits Fund, which would be abolished upon its transfer to the HBF; (2) the balance of the fund that finances FEHBP allocable to contributions by USPS, postal employees, and retirees, including any reserve fund amounts; (3) USPS contributions under its health plan; (4) contributions of postal employees and retirees under the USPS health plan; (5) any interest on HBF investments; (6) any other USPS receipts allocable to the USPS health benefits plan; and (7) appropriations based on the service of officers and employees of the former U.S. Post Office Department. Under its proposal, USPS would be authorized to invest HBF assets without the approval of the Secretary of the Treasury in non-Treasury securities, such as stocks and bonds, as well as commodities, foreign currency, and real property. 
Also, USPS indicates that HBF assets could not be used for (1) loans to USPS, such as to enable it to remain solvent; (2) USPS payments to the federal government, such as for pensions and workers’ compensation; or (3) financing USPS investments or USPS nonpostal initiatives to generate revenue. In any year when the amount of HBF assets exceeds USPS’s estimated actuarial liability for retiree health benefits, USPS could authorize the surplus amount to be transferred to the Postal Service Fund that finances most USPS expenses, provided that such authorization is made pursuant to a recommendation by a majority of the Committee overseeing the HBF. USPS would continue to prefund retiree health benefits, but under a purely actuarial approach, without the fixed (non-actuarial) payments required under current law through 2016. Payments to the HBF would be based on the “normal cost” and amortization of any unfunded liability or surplus of its retiree health care liabilities over an amortization period ending at the later of 40 years after implementation or 15 years from the then-current year. USPS has also proposed another option for financing its health plan (which USPS refers to as “Scenario 1”) under which retiree health benefits would continue to be financed from the Postal Service Retiree Health Benefits Fund (PSRHBF).
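The amortization-period rule in the proposal ("the later of 40 years after implementation or 15 years from the then-current year") reduces to a simple maximum. In the sketch below, the year values are hypothetical inputs chosen for illustration, not dates from USPS's proposal:

```python
def amortization_end_year(implementation_year: int, current_year: int) -> int:
    """End of the amortization period under the proposed rule: the later of
    40 years after implementation or 15 years from the then-current year."""
    return max(implementation_year + 40, current_year + 15)

# Illustrative only: assuming a hypothetical implementation year of 2014.
# Early in the plan's life, the fixed 40-year horizon governs...
print(amortization_end_year(2014, 2020))  # 2054

# ...while far enough into the future, the rolling 15-year horizon takes over.
print(amortization_end_year(2014, 2045))  # 2060
```

The effect of the rule is that the amortization period never shrinks below 15 years, so late-emerging surpluses or unfunded liabilities are always spread over at least that long.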
USPS continues to be in a serious financial crisis, with little liquidity in the short term and a challenging financial outlook in the long term as profitable First-Class Mail volume continues to decline. Critical decisions by Congress are needed on postal reform legislation that has been proposed in both the U.S. Senate and the House of Representatives. Various proposals would restructure the financing of postal retiree health benefits, including required payments to prefund these benefits; enable USPS to introduce a new health plan for postal employees and retirees; and restructure the funding of postal pensions, including addressing a potential surplus in funding postal pensions under FERS. GAO has previously testified that a comprehensive package of legislative actions is needed so that USPS can achieve financial viability and assure adequate benefits funding for more than 1 million postal employees and retirees. GAO has also previously reported on various approaches Congress could consider to restructure the funding of USPS retiree health benefits and pensions. This testimony discusses (1) funding USPS retiree health benefits; (2) USPS's proposal to withdraw its employees and retirees from FEHBP and establish its own health plan; and (3) a potential surplus in funding postal pensions under FERS. This testimony is based primarily on GAO's past work. GAO has reported that Congress needs to modify the U.S. Postal Service's (USPS) retiree health benefit payments in a fiscally responsible manner. GAO also has reported that USPS should prefund any unfunded retiree health benefit liability to the maximum extent that its finances permit. Deferring funding for postal retiree health benefits could increase costs for future ratepayers and increase the risk that USPS may not be able to pay for these costs. 
Key considerations for funding postal retiree health benefits include: Trade-offs regarding USPS's current and long-term financial condition: One rationale for prefunding is to protect USPS's future viability by paying for retirement benefits as they are being earned. However, USPS currently lacks liquidity to fund required payments for prefunding retiree health benefits. To the extent prefunding is postponed, larger payments will be required later, when they likely would be supported by less First-Class Mail volume. No prefunding approach will be viable unless USPS can make the payments. Possible consequences to USPS employees and retirees: Fully funded benefits protect against an inability to make payments later and make promised benefits less vulnerable to cuts. Allocating costs between current and future ratepayers: Deferring payments can pass costs from current to future postal ratepayers. Allocating costs among different generations of ratepayers is complicated by the unfunded liability that was not paid for in prior years. Funding targets: An 80 percent funding target for postal retiree health benefits would effectively lead to a permanent unfunded liability of roughly 20 percent. An option could be to build in a schedule to achieve 100 percent funding in a later time period after the 80 percent level is achieved. GAO has reported that USPS would likely realize large financial gains from its proposal to withdraw its employees and retirees from the Federal Employees Health Benefits Program (FEHBP) and establish its own health plan. According to USPS's estimates, these financial gains would significantly reduce its health benefits expenses and eliminate its unfunded retiree health benefit liability--with increased use of Medicare by retirees comprising most of the projected liability reduction. USPS also has projected that its proposal will increase Medicare spending. 
As Congress considers proposals for a USPS health care plan, it should weigh the impact on Medicare, which also faces fiscal pressure, and other issues, including establishing safeguards for assets of the USPS health plan and ensuring protections for plan participants are comparable to those in FEHBP. GAO has also reported on key considerations regarding the release of any Federal Employees Retirement System (FERS) surplus to USPS. First, estimates of retirement benefits liabilities contain a significant degree of uncertainty and can change over time. Second, returning surpluses whenever they develop would likely result in an eventual unfunded liability. Alternative options to address funding surpluses include reducing USPS's annual FERS contribution either by amortizing the surplus over 30 years (which would mirror the treatment of deficits) or by offsetting the contribution against the full surplus each year until the surplus is used up.
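The two options for releasing a FERS surplus differ mainly in timing, which a standard level-payment amortization formula can illustrate. In the sketch below, the $3.0 billion surplus figure is from this statement, but the 5 percent discount rate and the resulting payment are purely illustrative assumptions, not OPM valuation figures:

```python
def level_amortization_payment(principal, rate, years):
    """Level annual payment that amortizes `principal` over `years`
    at annual discount rate `rate` (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

surplus = 3.0   # estimated FERS surplus at end of FY2012, in billions
rate = 0.05     # assumed discount rate -- illustrative only

# 30-year amortization (mirroring the statutory treatment of deficits):
annual_credit = level_amortization_payment(surplus, rate, 30)
print(round(annual_credit, 2))  # 0.2 -- roughly $0.2 billion per year

# Full-offset approach: the annual FERS contribution is instead reduced
# dollar-for-dollar by the surplus each year until it is exhausted,
# front-loading the relief into the first years.
```

Under these illustrative assumptions, the 30-year amortization would spread roughly $0.2 billion of contribution relief per year, while the full-offset approach would concentrate the relief up front, comparable to private-sector pension practice.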
With nearly 28,000 potentially contaminated sites, the Department of Defense manages one of the world’s largest environmental cleanup programs. Under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, contractors and other private parties may share liability for the cleanup costs at these sites. Two major types of sites that may involve such liability are government-owned, contractor-operated facilities (whose operators may be liable) and formerly used defense sites (whose current and past owners and operators may be liable). The Defense Environmental Restoration Program Annual Report to the Congress is the primary reporting vehicle for the status of cleanup at the many sites for which DOD is either solely or partly responsible for the contamination. The report contains information on the status of cleanup at the sites, such as the amounts spent to date, future costs, and the stage of completion, among other data. In 1992, 1994, and 1997, we reported that the Department had inconsistent policies and practices for reimbursing cleanup costs to, and recovering cleanup costs from, non-DOD parties responsible for contamination. We recommended that the Secretary of Defense provide guidance to resolve the inconsistencies. The guidance issued by the Department requires the components to pursue the recovery of cleanup costs of $50,000 or more and to include in the annual report to the Congress each site’s name and location, the recovery status, the amount recovered, and the cost of pursuing the recovery. Under the guidance, if a component determines that it is not in the best interests of the government to pursue a cost recovery, it must inform the Deputy Under Secretary of Defense for Environmental Security (now, the Deputy Under Secretary for Installations and Environment), who is responsible for compiling the annual report to Congress. 
The guidance does not define “cost recovery” or “cost sharing,” and does not address (1) how the costs of pursuing recovery should be determined; (2) whether data on cost recoveries should be reported by fiscal year, cumulatively, or both; and (3) what the procedures are for ensuring that the data are accurate, consistent, and complete. Because the Department’s management guidance is silent or unclear on key aspects of reporting necessary to collect, verify, and report data on cleanup cost recoveries, its report to Congress for fiscal year 1999 does not provide accurate, consistent, or complete data. Sound management practices require that organizations have clear and specific guidance regarding what data are to be collected and how they are to be reported, and the controls to ensure the accuracy and completeness of the reports. The reports should be useful to managers for controlling operations and to auditors and others for analyzing operations. While we note that the data reported in fiscal year 1999 were more extensive than those reported in 1998, the guidance issued by DOD does not provide sufficient detail to ensure the effective collection, verification, and reporting of data on cost recoveries. From fiscal year 1998 through fiscal year 1999, DOD reported that cost recoveries increased from $125.3 million to $421.5 million. (See table 1.) The reported increase in recoveries is incorrect because $250.4 million, over half of the $421.5 million reported as cost recoveries in 1999, was not the amount DOD recovered but the amount it spent on environmental cleanups conducted by other parties. For example, the Army Corps of Engineers reported that at Weldon Spring, Missouri, it had recovered $180.6 million. Supporting records, however, show the amount as the Corps’ share of costs for cleanup the Department of Energy is performing at the site. 
Corps officials told us they reported only the Corps’ share of cleanup costs at these sites because the guidance did not define “cost sharing.” In addition, these officials said they did not know what others spend on cleanup at the sites. (This is further discussed in the section on the data’s completeness.) The Corps of Engineers also incorrectly reported recoveries totaling about $70 million at other sites that were also its share of cleanup costs rather than recovered amounts. Additionally, there were other reporting inaccuracies. For example, two sites with ongoing recoveries—the Rocky Mountain Arsenal and the Massachusetts Military Reservation—that should have been reported by the Army in the fiscal year 1998 report were not reported until the following year. The reported recoveries at these two sites were $17.3 million and $28.2 million, respectively, and were not reported because the Army did not report cost sharing arrangements in fiscal year 1998. DOD’s guidance did not specify how to calculate the costs of pursuing recovery or whether components should report fiscal year data, cumulative data, or both. Consequently, the components’ reported data for both cost recoveries and the costs of pursuing recoveries were not consistent. Calculating the costs of pursuing recoveries has been particularly problematic. For example, although some costs, such as certain legal costs, are obviously related to efforts to recover costs, other legal costs, such as those incurred in defense against charges brought by states or counties, are not. Reported costs to pursue recovery for fiscal years 1998 and 1999 were $6.2 million and $37.3 million, respectively. In the absence of sufficient guidance, Defense components have varied in their reporting of cost recoveries and the costs to pursue recoveries: The Air Force estimated the costs of pursuing recoveries at one site and applied these same costs to other sites. 
It was also the only component that reported cost sharing arrangements with other federal agencies. The Navy said it did not keep records to allow it to capture the costs of pursuing recoveries in fiscal year 1998 and reported “unknown” or “to be determined” in fiscal year 1999. The Defense Logistics Agency reported $3.6 million in costs to pursue recoveries and $1.1 million in recovered amounts. Officials later determined that some of the reported costs, such as contract costs for investigating and cleaning up the site, should not have been included. Reporting entities have also been inconsistent in reporting data by fiscal year and cumulatively. For example, in the 1998 report, the Army used fiscal year data for cost recoveries and cumulative data for costs to pursue recoveries. The following year, it used fiscal year data for both. The Air Force and Defense Logistics Agency used cumulative data for recoveries and costs to pursue recoveries. The Navy used cumulative data for recoveries. Each of the methods for presenting data—cumulatively or by fiscal year—has certain drawbacks. Presenting data cumulatively shows the long-term progress that DOD has made in recovering costs, but it can also obscure instances in which no recoveries occurred in a given fiscal year. Conversely, data for the fiscal year do not show total recoveries at a given site. The environmental cleanup cost recovery data reported to Congress for fiscal year 1999 were more extensive than those reported in the previous fiscal year’s report primarily because the Corps of Engineers reported on cost sharing arrangements at 86 sites that it did not report in fiscal year 1998. The Army also reported on two additional sites in the report for fiscal year 1999. The Navy reported on one additional site, and the Air Force added one site but eliminated another. Despite the improvement, the Department still did not report all cost recoveries in the cost recovery appendix.
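The relationship between the two presentation methods discussed above is mechanical: fiscal-year amounts are the year-over-year differences of a cumulative series. The sketch below uses hypothetical recovery figures (not taken from the report) to show how a cumulative presentation can mask a year in which no recoveries occurred.

```python
# Hypothetical cumulative recoveries at one site, by fiscal year,
# in millions of dollars; these numbers are illustrative only.
cumulative = {1996: 5.0, 1997: 12.0, 1998: 12.0, 1999: 20.0}

# Fiscal-year amounts are the differences between consecutive
# cumulative totals (the first year's amount equals its cumulative total).
by_fiscal_year = {}
prev = 0.0
for year in sorted(cumulative):
    by_fiscal_year[year] = cumulative[year] - prev
    prev = cumulative[year]

print(by_fiscal_year)  # {1996: 5.0, 1997: 7.0, 1998: 0.0, 1999: 8.0}
# The cumulative series grows or holds steady every year, but the
# fiscal-year view reveals that no recoveries occurred in 1998.
```

Reporting both views, as recommended later in this report, avoids the drawback of either one alone.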
In the absence of sufficient guidance, the Defense components have not reported all cost recoveries or costs to pursue recoveries: The body of the Department’s report includes a field for additional program information pertaining to each site. This field includes information such as progress in conducting investigations and contracts awarded for cleanup. Comments in the additional information field and other sections of the report indicated that cost recovery activities were occurring at sites that were not included in the cost recovery appendix. We identified 138 sites where cleanup costs exceeded the Department’s threshold for pursuing recoveries, and where there were indications that either cost recovery was being considered or that non-DOD parties were involved in cleanup. None of these sites were reported in the cost recovery appendix. Fifty-five of these sites were from the fiscal years 1998 and 1999 reports. For example, the groundwater cleanup at Bethpage Naval Weapons Industrial Reserve Plant, New York, involved Northrop/Grumman and the Occidental Chemical Company. Also, comments listed under the Army Tarheel Missile Plant, North Carolina, indicated that cost recovery would be requested from Lucent Technologies, a caretaker contractor at the installation. Neither, however, was included in the report’s cost recovery appendix. Failure to include these and other sites at which components may be recovering costs requires decisionmakers and others to search through over 800 pages of reported cleanup data to obtain a complete picture of cost recovery activities. The Defense components are required to report both the costs shared with non-DOD parties at the time of cleanup and the costs that they recovered from non-DOD parties after cleanup. However, the components did not report the amounts for some recoveries because they did not know how much money the non-DOD parties had contributed to cleanups resulting from cost sharing arrangements. 
The Department’s guidance does not include directions for obtaining, calculating, or estimating these amounts; and the components do not have adequate procedures to gather this information. As a result, for 88 sites listed in the fiscal year 1999 report, the amounts spent by non-DOD parties under cost sharing arrangements were not shown. (See table 1.) Although it is required, none of the DOD components provided the reasons for deciding not to pursue cost recoveries. According to DOD officials, some reasons for not pursuing recoveries include circumstances where there is insufficient evidence that non-DOD parties caused the problems at the site, where the other responsible party is no longer in business, or where pursuit of the recovery would cost more than the expected amounts recovered. The pursuit of recovery actions is a complex and lengthy process, and decisions to pursue cost recovery at some locations may take a long time. The cost recovery data in the Department’s annual environmental cleanup report for fiscal year 1999 are not useful to the Congress or the Department for management or oversight because they are inaccurate, inconsistent, and incomplete. The lack of sufficient guidance resulted in the Department’s overstating reported cost recoveries by $250 million, inconsistent reporting among the Defense components, and the failure to include all recoveries in the cost recovery appendix of the report. These problems limit the ability of the Congress and the Department to determine the extent to which recoveries may offset environmental cleanup costs.
To ensure that the Congress and the Department of Defense have accurate, consistent, and complete information on cost recovery efforts, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Installations and Environment to modify existing guidance in areas where it is silent or unclear and provide specific guidance for (1) defining the types of cost sharing arrangements that should be reported, (2) calculating the costs of pursuing recovery, (3) reporting both cumulative and fiscal year data, and (4) capturing and reporting amounts spent by non-DOD parties under cost sharing arrangements. The guidance should include control procedures for ensuring that the data reported by the Department’s components are accurate, consistent, and complete; identify all responsible parties; and include reasons for not pursuing recoveries. In official oral comments on a draft of this report from the Office of the Deputy Under Secretary of Defense (Installations and Environment), the Department concurred with our recommendations and plans to develop more accurate, consistent, and complete information on cost recovery data. In September 2001, after our report was submitted to the Department for comments, DOD issued revised management guidance that cited a number of actions that address our recommendations. If effectively implemented, the guidance should improve overall reporting of cost recovery data. The Department also noted that it was unable to verify the numbers in our report because we had obtained data that were not included in the fiscal year 1999 annual report. As noted in our report, we visited or obtained data directly from selected sites in order to validate the annual report data and found the data to be inaccurate, inconsistent, and incomplete. Accordingly, the noted discrepancies are part of the basis for our recommendations. 
Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Director of the Defense Logistics Agency; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me on (202) 512-4412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. To determine whether the Department of Defense’s reporting of cost sharing and recovery data was accurate, consistent, and complete, we examined the relevant sections of the Department’s annual reports to Congress for fiscal years 1998 (Appendix F) and 1999 (Appendix E) and documentation on the Department’s and components’ reporting criteria and other policies. We compared reported data with data from other sources, including, for example, comments in other sections of DOD’s annual reports, supporting documents from selected locations, and our previous reports. We selectively reviewed supporting information for 100 of the 130 sites listed in DOD’s cost recovery report for fiscal year 1999. We selected the sites because reported recoveries exceeded $1 million, because we had identified cost recovery at those sites during earlier work and/or because our prior work revealed potential problems with data for these sites. We discussed the data with headquarters officials at the Departments of Defense, the Army, the Navy, and the Air Force and with the Defense Logistics Agency. In addition, we visited and/or obtained information directly from the following 12 cleanup sites: Rocky Mountain Arsenal, Colorado. Twin Cities Army Ammunition Plant, Arden Hills, Minnesota. Former Weldon Spring Ordnance Works, Weldon Spring, Missouri. Former Fort Devens, Massachusetts. 
Air Force Materiel Command and Wright-Patterson Air Force Base, Ohio. Naval Air Station, Whidbey Island, Washington. Navy Facilities Engineering Command, Poulsbo, Washington. Defense Supply Centers, Richmond, Virginia, and Philadelphia, Pennsylvania. Army Corps of Engineers, Kansas City, Missouri, and Omaha, Nebraska, Districts. To identify indications of possible responsible parties or cost recovery agreements, we reviewed the “additional program information” columns in printed annual reports for several fiscal years, including fiscal years 1998 and 1999. We used the latest available cost data from these reports to determine which sites had past and/or estimated costs of $50,000, the threshold level for DOD’s cost recovery requirements, and determined whether they had been reported in the cost recovery appendixes in fiscal years 1998 and 1999. There were 55 comments in other parts of the reports for fiscal years 1998 and 1999 that indicated the presence of potential responsible parties or that cost recovery was being considered or pursued. We conducted our review from August 2000 to August 2001 in accordance with generally accepted government auditing standards. In addition to those above, Robert Ackley, Arturo Holguin, and Tony Padilla made key contributions to this report.
The cleanup of contaminated Department of Defense (DOD) sites could cost billions of dollars. Private contractors or lessees that may have contributed to such contamination may also be responsible for cleanup costs. DOD and other responsible parties either agree to a cost sharing arrangement with the responsible parties conducting the cleanup or DOD conducts the cleanup and attempts to recover the other parties' share after the cleanup. On the basis of a GAO study, DOD issued guidance requiring its components to identify, investigate, and pursue cost recoveries and to report on them in the Defense Environmental Restoration Program Annual Report to Congress. The data on cost recoveries from non-Defense parties included in the Department's report for fiscal year 1999 were inaccurate, inconsistent, and incomplete. As a result, neither Congress nor DOD can determine the extent of progress made in recovering costs or the extent to which cost recoveries may offset cleanup costs. Data on cost recoveries included throughout the annual report were also missing from the appendix. Thus, DOD may not know whether all potential cost recoveries have been actively pursued and reported.
The Public Health Security and Bioterrorism Preparedness and Response Act of 2002 created the government’s Select Agent Regulations, dividing primary responsibility for regulatory control of select biological agents between HHS and USDA. While HHS is responsible for regulating select agents that can potentially pose a severe threat to public health and safety, USDA regulates select agents that can potentially pose a severe threat to animal and plant health or animal and plant products. A number of “overlap agents” can pose both a public health threat and a threat to animals; in these cases, labs must register with either agency, but are not required to register with both. As mentioned above, all five registered BSL-4 labs in the United States are registered with DSAT. When a lab registers with DSAT to handle a select agent, a site-specific risk assessment must be conducted. Regulations governing the assessment do not specify who must perform it, meaning that the assessment can be performed by officials for the lab itself. Further, labs registering with DSAT are required to develop and implement a written security plan based on the site-specific risk assessment. According to the regulations, the security plan must be sufficient to safeguard against unauthorized theft, loss, or release of select agents and meet all the requirements outlined in the Select Agent Regulations. DSAT authored and utilizes the Select Agents and Toxins Security Information Document to provide possible practices and procedures that entities may use to assist them in developing and implementing their written security plans. Additional requirements include a written biosafety or biocontainment plan that describes the safety and containment procedures, and an incident response plan that includes procedures for theft, loss, or release of an agent or toxin; inventory discrepancies; security breaches; natural disasters; violence; and other emergencies.
Prior to being issued a certificate of registration, an entity must comply with all security requirements and all other provisions of the Select Agent Regulations. A registration in the CDC’s Select Agent Program lasts for 3 years, after which it must be renewed if the entity chooses to retain possession of the select agents. In addition to the five registered and operational BSL-4 labs, there are more labs currently under construction or in the planning stages. While expansion is taking place within the federal sector as well—there are many new federal facilities currently under construction or planned that have one or more BSL-4 labs—there are also BSL-4 labs at universities, as part of state response efforts, and in the private sector. These new facilities have not completed the registration process and were not fully operational as BSL-4 labs at the time of our assessment. CDC regulations do not mandate that specific perimeter security controls be present at all BSL-4 labs, resulting in significant differences in perimeter security among the nation’s five labs. According to the regulations, each lab must implement a security plan that is sufficient to safeguard select agents against unauthorized access, theft, loss, or release. However, there are no specific perimeter security controls that must be in place at every BSL-4 lab. While three labs had all or nearly all of the key security controls we assessed, two labs demonstrated a significant lack of these controls. The results of our perimeter physical security assessment of the five registered BSL-4 labs are presented in table 1. The check marks in the table indicate the presence of specific security features at the labs we assessed, illustrating the varying levels of perimeter physical security controls present at the labs.
Although the presence of the security controls we assessed does not automatically ensure a secure perimeter, having most controls provides increased assurance that a strong perimeter security system is in place and reduces the likelihood of unauthorized intrusion. As discussed in appendix I, the strongest perimeter security systems use an active, integrated approach to security that takes advantage of multiple layers. For example, an active, integrated system links perimeter intrusion alarms to a CCTV network, allowing security officers to instantly view the location of an alarm. A discussion of each security assessment follows. Lab A: The physical security controls of Lab A presented a strong visible deterrent from the outside, with 14 of the 15 key security controls in place. Lab A was located in a complex of other buildings that was separated from an urban environment by a perimeter security fence reinforced with airline cable to further strengthen the fence and deter unauthorized access. A roving patrol of armed guards was visible inside and outside the perimeter fence, while other guards manned gated entry inspection points. The gates incorporated technical support for the guards to assist them with the inspection of both private and commercial vehicles. Guards conducted ID checks at the gates and searched vehicles that did not have the appropriate access decals. Further, all trucks were required to enter a single gate containing an X-ray screening device. Past this outer perimeter, a further man-made barrier existed around the building containing the BSL-4 lab. Although Lab A had most of the security controls we focused on during our assessment, it did not have an active intrusion detection system integrated with the CCTV network covering the facility. This reduced the possibility that security officers could detect and quickly identify an intruder entering the building perimeter. 
Lab B: Lab B was the only one of the five BSL-4 labs that had all 15 security controls. The lab was in an urban environment, but located in a complex of other buildings enclosed within an outer fenced perimeter. Roving patrols consisting of both armed security guards and local police walked on the exterior of the perimeter fence. The fence itself was reinforced with airline cable to further strengthen it along areas that bordered roads, serving to further protect against unauthorized intrusion from these public areas. There was a single gated inspection point to enter the complex manned by armed security guards. The inspection point incorporated technical support for the guards to assist them with the inspection of both private and commercial vehicles. Once inside the gate, man-made barriers and a natural (i.e., landscaped) barrier system stood between the gate and the lab itself. More armed guards conducted roving patrols inside the complex and guarded the entrance to the lab itself. Lab B also had a strong active integrated security system. According to lab officials, the system featured an integrated emergency management response whereby appropriate fire and rescue vehicles were automatically dispatched after an alarm. Lab C: Lab C utilized only 3 of the 15 key security controls we assessed. The lab was in an urban environment and publicly accessible, with only limited perimeter barriers. During our assessment, we saw a pedestrian access the building housing the lab through the unguarded loading dock entrance. In addition to lacking any perimeter barriers to prevent unauthorized individuals from approaching the lab, Lab C also lacked an active integrated security system. By not having a command and control center or an integrated security system with live camera monitoring, the possibility that security officers could detect an intruder entering the perimeter and respond to such an intrusion is greatly reduced. 
Lab D: Although Lab D did not have an armed guard presence outside the lab or vehicle screening, it presented strong physical security controls in all other respects, with 13 of the 15 key controls we assessed. Lab D was located within the interior of a complex of buildings, providing a natural system of layered perimeter barriers that included bollards for vehicle traffic. When combined with the presence of roving armed guard patrols, Lab D projected strong visible deterrents. It also utilized an active integrated security system so that if an alarm was activated, personnel within the command and control center could survey the alarm area through monitors and utilize pan/tilt/zoom cameras to further assess the alarm area. This permits security personnel to better coordinate and determine the appropriate response. Lab E: Lab E was one of the weakest labs we assessed, with 4 of the 15 key controls. It had only limited camera coverage of the outer perimeter of the facility, and the only vehicular barrier consisted of an arm gate that swung across the road. Although the guard houses controlling access to the facility were manned, they appeared antiquated. The security force charged with protecting the lab was unarmed. Of all the BSL-4 labs we assessed, this was the only lab with an exterior window that could provide direct access to the lab. In lieu of a command and control center, Lab E contracts with an outside company to monitor its alarms at an off-site facility. This potentially delays emergency response by adding a layer that would not exist with a command and control center. Since the contracted company is not physically present at the facility, it is not able to ascertain the nature of an alarm activation. Furthermore, there is no interfaced security system between alarms and cameras and no live monitoring of cameras.
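The control counts from the five assessments above can be tabulated to flag the labs falling short of the baseline. A minimal sketch using the counts reported in this assessment (the half-of-controls threshold is our illustrative choice, not a regulatory standard):

```python
# Number of the 15 key perimeter security controls present at each
# lab, as found in this assessment (lab letters as used in the report).
controls_present = {"A": 14, "B": 15, "C": 3, "D": 13, "E": 4}
TOTAL_CONTROLS = 15

# Flag labs with fewer than half of the key controls in place
# (an illustrative threshold, not part of the Select Agent Regulations).
weak_labs = sorted(lab for lab, count in controls_present.items()
                   if count < TOTAL_CONTROLS / 2)
print(weak_labs)  # ['C', 'E']

# Only Lab B had every key control in place.
complete = [lab for lab, count in controls_present.items()
            if count == TOTAL_CONTROLS]
print(complete)  # ['B']
```

The tally mirrors the finding in the text: Labs C and E, with 3 and 4 controls, are the two labs lacking most key perimeter security controls.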
DSAT approved the security plans for the two labs lacking most key security controls, and approved these labs to participate in the Select Agent Program as BSL-4 labs. Conversely, during our assessment, we noted that the three BSL-4 labs with all or nearly all of our 15 key controls were subject to additional federal security requirements outside the purview of the Select Agent Regulations. For example, the National Institutes of Health both funds research requiring high containment and provides guidance and requirements that are widely used to govern many of the activities in high-containment labs. Other examples of more stringent regulations for BSL-4 labs include those of military labs that also follow far stricter Department of Defense physical security requirements. For example, Lab B had several layers of security, including a perimeter security fence and roving patrol of armed guards, visible inside and outside the perimeter fence. Although these security controls are not necessary for BSL-4 labs registering with DSAT, Lab B utilized these security controls to comply with more stringent federal requirements imposed by the agency owning the facility and incorporated these controls into its security plan. Security officials at the two labs with fewer security controls (Labs C and E) told us that management and administration had little incentive to improve security because they already met DSAT requirements. Some security officials also suggested that budgetary restrictions limited attempts to make security improvements. Although numerous factors influence the security of a facility, two of the BSL-4 labs we assessed were lacking key perimeter security controls even though they met DSAT requirements. 
Our observation that the three labs with strong perimeter security all were subject to additional federal oversight outside of the DSAT program leads us to conclude that minimum specific perimeter security standards would provide assurance that all BSL-4 labs are held to the same security standard. Given that many new BSL-4 labs are under construction and will come online over the next few years, it is important for DSAT to ensure that there is no “weak link” in security among the nation’s BSL-4 labs. To further enhance physical perimeter security at BSL-4 labs regulated by DSAT, we are recommending that the Director, CDC, take action to implement specific perimeter security controls for all BSL-4 labs to provide assurance that each lab has a strong perimeter security system in place. The CDC should work with USDA to coordinate its efforts, given that both agencies have the authority to regulate select agents. We received written comments on a draft of this report from the Assistant Secretary for Legislation of HHS. HHS agreed that perimeter security is an important deterrent against theft of select agents. They indicated that the difference in perimeter security at the five labs was the result of risk-based planning; however, they did not comment on the specific vulnerabilities we identified (e.g., an unsecured loading dock at one building housing a BSL-4 lab) and whether these should be addressed. In regard to requiring specific perimeter controls at all BSL-4 labs, HHS stated that it would coordinate with APHIS to seek input from physical security experts and the scientific community; the regulated community; professional associations; State, local, and tribal officials; and the general public as to the need and advisability of requiring, by Federal regulation, specific perimeter controls at each registered entity having a BSL-4 lab. 
They explained that specific security controls are not in place because Select Agent Regulations are focused on performance objectives rather than specific methods of compliance. We are encouraged that HHS plans to study this matter further, and suggest that, as part of this study, HHS reconsider whether the lack of many specific perimeter security controls at two of the nation’s five BSL-4 labs is acceptable. HHS also requested that we provide references for the research that identified our 15 security controls as being appropriate for the assessment of the perimeter security of BSL-4 labs, identify the security experts that we consulted, and indicate whether these 15 security controls had been peer reviewed. We have notified HHS that we will work with them to understand the controls in more detail. As discussed in our report, we developed the 15 security controls based on our expertise in performing security assessments and our research of commonly accepted physical security principles. These principles are reflected in the security survey tool we used to evaluate each of the five BSL-4 labs. We have used this survey tool for similar security assessments in the past. Although we acknowledge that the 15 security controls we selected are not the only measures that can be in place to provide perimeter security, we determined that these controls (discussed in more detail in app. I) represent a baseline for BSL-4 lab perimeter physical security and contribute to a strong perimeter security system. Many of these controls—such as preventing direct access to a lab via windows, or ensuring visitors are screened prior to entering a building containing a BSL-4 lab—are common-sense security measures. HHS also provided us with technical comments, which we incorporated as appropriate. HHS’s comment letter is reprinted in appendix II. As agreed with your office, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its issue date.
At that time, we will send copies of this report to the Secretary of Health and Human Services, the Director of the CDC, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To perform our perimeter security assessment of biosafety level (BSL) 4 labs, we identified 15 key perimeter security controls, based on our expertise and research of commonly accepted physical security principles, that contribute to a strong perimeter security system. A strong perimeter security system utilizes layers of security to deter, detect, delay, and deny intruders. Deter. Physical security controls that deter an intruder are intended to reduce the intruder’s perception that an attack will be successful—an armed guard posted in front of a lab, for example. Detect. Controls that detect an intruder could include video cameras and alarm systems. They could also include roving guard patrols. Delay. Controls that delay an intruder increase the opportunity for a successful security response. These controls include barriers such as perimeter fences. Deny. Controls that can deny an intruder include visitor screening that only permits authorized individuals to access the building housing the lab. Furthermore, a lack of windows or other obvious means of accessing a lab is an effective denial mechanism. Some security controls serve multiple purposes. For example, a perimeter fence is a basic security feature that can deter, delay, and deny intruders. However, a perimeter fence on its own will not stop a determined intruder. 
This is why, in practice, layers of security must be integrated in order to provide the strongest protection. Thus, a perimeter fence should be combined with an intrusion detection system that would alert security officials if the perimeter has been breached. A strong system would then tie the intrusion detection alarm to the closed-circuit television (CCTV) network, allowing security officers to immediately identify intruders. A central command center is a key element for an integrated, active system. It allows security officers to monitor alarm and camera activity—and plan the security response—from a single location. Table 2 shows 15 physical security controls we focused on during our assessment work. In addition to the contact named above, the following individuals made contributions to this report: Andy O’Connell, Assistant Director; Verginie Amirkhanian; Randall Cole; John Cooney; Elizabeth Isom; Barbara Lewis; Jeffrey McDermott; and Andrew McIntosh.
Biosafety labs are regulated primarily under the Select Agent Regulations created by the U.S. Bioterrorism Act and must be registered with either the Centers for Disease Control and Prevention (CDC) or the U.S. Department of Agriculture (USDA). Currently, all operational biosafety level (BSL) 4 labs are registered with the CDC and thus are regulated by the CDC, not USDA. BSL-4 labs handle the world's most dangerous agents and diseases. In fact, of the four BSL designations, only BSL-4 labs can work with agents for which no cure or treatment exists. GAO was asked to perform a systematic security assessment of key perimeter security controls at the nation's five operational BSL-4 labs. To meet this objective, GAO performed a physical security assessment of the perimeter of each lab using a security survey it developed. GAO focused primarily on 15 physical security controls, based on GAO expertise and research of commonly accepted physical security principles. Select Agent Regulations do not mandate specific perimeter security controls that need to be in place at each BSL-4 lab, resulting in significant differences in perimeter security among the nation's five labs. While three labs had all or nearly all of the key security controls GAO assessed--features such as perimeter barriers, roving armed guard patrols, and magnetometers in use at lab entrances--two labs demonstrated a significant lack of these controls. Specifically, one lab had all 15 security controls in place, one had 14, and another had 13 of the key controls. However, the remaining two labs had only 4 and 3 key security controls, respectively. Although the presence of the security controls GAO assessed does not automatically ensure a secure perimeter, having most controls provides increased assurance that a strong perimeter security system is in place and reduces the likelihood of unauthorized intrusion.
For example, the two labs with fewer security controls lacked both visible deterrents and a means to respond to intrusion. One lab even had a window that looked directly into the room where BSL-4 agents were handled. In addition to creating the perception of vulnerability, the lack of key security controls at these labs means that security officials have fewer opportunities to stop an intruder or attacker. The two labs with fewer security controls were approved by the CDC to participate in the Select Agent Program despite their weaknesses. During the course of our review, GAO noted that the three labs with all or nearly all of the key security controls GAO assessed were subject to additional federal security requirements imposed on them by agencies that owned or controlled the labs, not because of the Select Agent Regulations.
This section discusses EPA’s human health risk assessment and risk management practices and its processes for soliciting nominations and selecting chemicals for new and updated IRIS toxicity assessments. EPA’s ability to effectively implement its mission of protecting public health and the environment is critically dependent on credible and timely assessments of the risks posed by chemicals. Such assessments are the cornerstone of scientifically sound environmental decisions, policies, and regulations under a variety of statutes, such as the Safe Drinking Water Act, the Toxic Substances Control Act, and the Clean Air Act. EPA assesses the human health risks of chemicals using a model from the National Academies. This model includes four components: (1) hazard identification, (2) dose-response assessment, (3) exposure assessment, and (4) risk characterization (see fig. 1). For some, but not all chemicals, EPA conducts the first two sequential analyses of a human health risk assessment—that is, the hazard identification and dose-response assessment—under its IRIS Program. Taken together, these two steps are commonly referred to as IRIS toxicity assessments. EPA’s IRIS Program—managed by EPA’s National Center for Environmental Assessment within the Office of Research and Development—develops new IRIS toxicity assessments and updates existing IRIS toxicity assessments if revisions are warranted on the basis of newly published peer-reviewed studies. EPA program offices and regions combine information from IRIS toxicity assessments with the results from chemical exposure assessments to characterize risk, which provides information on the probability that the adverse effects described in hazard identification will occur under the conditions described in the exposure assessment. 
These four steps— hazard identification, dose-response assessment, exposure assessment, and risk characterization—comprise human health risk assessments, which EPA offices use to make risk management decisions. Risk management, as opposed to risk assessment, involves integrating the risk assessment information with other information—such as economic information on the costs and benefits of mitigating a risk, technological information on the feasibility of managing the risk, and the concerns of various stakeholders—to determine whether the health risks identified in a chemical risk assessment warrant EPA taking regulatory or other risk management actions. A typical IRIS toxicity assessment contains a qualitative hazard identification and quantitative dose-response assessment. The qualitative hazard identification identifies noncancer and cancer health effects that may be caused by exposure to a given chemical. For cancer effects, EPA qualitatively describes the carcinogenic potential of a chemical in a narrative that includes selecting a weight-of-evidence descriptor, ranging from “carcinogenic to humans” to “not likely to be carcinogenic to humans.” Following hazard identification, a dose-response assessment is conducted for both noncancer and cancer effects assuming adequate data are available. A quantitative dose-response assessment characterizes the quantitative relationship between the exposure to a chemical and the resultant health effects. The quantitative dose-response assessment relies on experimental data, primarily from either animal (toxicity) or human (epidemiology) studies. 
The noncancer dose-response assessment may include the following: an oral reference dose—an estimate (with uncertainty spanning perhaps an order of magnitude) of the daily oral exposure to a chemical that is likely to be without an appreciable risk of deleterious effects during a person’s lifetime—expressed in terms of milligrams per kilogram per day, and an inhalation reference concentration—an estimate (with uncertainty spanning perhaps an order of magnitude) of the continuous inhalation exposure to a chemical that is likely to be without an appreciable risk of deleterious effects during a person’s lifetime—expressed in terms of milligrams per cubic meter. According to EPA officials, the quantitative cancer dose-response assessment typically includes estimates of a chemical’s carcinogenic potency by both the oral and inhalation routes of exposure. For oral exposures, the “oral slope factor” is an estimated 95 percent upper bound on the increased cancer risk per increased unit of exposure (in mg/kg-day) to a chemical over a lifetime. For inhalation exposures, the “inhalation unit risk” is an estimated 95 percent upper bound on the increased cancer risk per increased unit of exposure (in µg/m³ in air) to a chemical over a lifetime. The toxicity values derived in both noncancer and cancer dose-response assessments—that is, the oral reference dose, inhalation reference concentration, oral slope factor, and inhalation unit risk—are often referred to as IRIS values. IRIS toxicity assessments estimate the potential health effects of lifelong (chronic) exposure to chemicals. According to the Office of Management and Budget (OMB), the IRIS Program is the only federal program that provides qualitative and quantitative assessments of both cancer risks and noncancer effects of chemicals.
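The arithmetic behind these toxicity values is straightforward and can be sketched in a few lines of Python. The sketch below is purely illustrative—it is not EPA code, the function names are our own, and all numeric values are hypothetical rather than actual IRIS values—but it shows how a reference dose supports a noncancer hazard quotient and how a slope factor supports an upper-bound cancer risk estimate.

```python
# Illustrative sketch (not EPA code): how IRIS toxicity values are
# combined with an exposure estimate during risk characterization.
# All numeric values below are hypothetical, not real IRIS values.

def hazard_quotient(daily_dose_mg_per_kg_day: float, oral_rfd: float) -> float:
    """Noncancer screening: estimated daily dose divided by the oral
    reference dose (RfD). A quotient above 1 suggests exposure exceeds
    the level likely to be without appreciable risk of deleterious effects."""
    return daily_dose_mg_per_kg_day / oral_rfd

def excess_cancer_risk(daily_dose_mg_per_kg_day: float, oral_slope_factor: float) -> float:
    """Cancer screening: lifetime average daily dose multiplied by the
    oral slope factor (risk per mg/kg-day) yields an upper-bound estimate
    of increased lifetime cancer risk."""
    return daily_dose_mg_per_kg_day * oral_slope_factor

# Hypothetical chemical: RfD = 0.005 mg/kg-day; slope factor = 0.02 per mg/kg-day.
dose = 0.001  # mg/kg-day, hypothetical exposure estimate
hq = hazard_quotient(dose, oral_rfd=0.005)
risk = excess_cancer_risk(dose, oral_slope_factor=0.02)
print(f"hazard quotient: {hq:.1f}")               # prints 0.2 (below 1)
print(f"excess lifetime cancer risk: {risk:.0e}")
```

An inhalation screening calculation would have the same shape, substituting an air concentration (µg/m³) and the inhalation reference concentration or unit risk for the oral values.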
In addition, according to EPA’s Human Health Risk Assessment Strategic Research Action Plan, no other federal health assessment program has (1) a similar mission and scope or (2) internal and external peer review processes that are as rigorous. Specifically, the IRIS toxicity assessment process includes internal EPA review; two interagency reviews by other federal agencies and White House offices (e.g., OMB); public review and comment; and a rigorous, independent, external peer review. However, IRIS is not the only source of toxicity information available to EPA program offices and regions. For many chemicals, IRIS toxicity assessments are not available, applicable, or current; therefore, in some cases, EPA program offices and regions rely on toxicity information from other sources. Other sources include, but are not limited to, the following:

Toxicological Profiles and Minimal Risk Levels. The Agency for Toxic Substances and Disease Registry (ATSDR)—a federal public health agency of the U.S. Department of Health and Human Services—prepares Toxicological Profiles for hazardous substances in response to statutory requirements under the Comprehensive Environmental Response, Compensation, and Liability Act, commonly known as Superfund. ATSDR’s Toxicological Profiles typically evaluate three different exposure durations—acute (14 days or less), intermediate (15-364 days), and chronic (365 days or more). According to ATSDR’s website, during the development of toxicological profiles, if the agency determines that reliable and sufficient data exist to identify the specific health effects that result from exposure to a hazardous substance, the agency will derive Minimal Risk Levels. Minimal Risk Levels are an ATSDR estimate of daily human exposure to a hazardous substance at or below which that substance will likely not pose a measurable risk of adverse noncancerous effects—such as neurological, respiratory, and reproductive effects—over a specified time period. Minimal Risk Levels are substance-specific estimates, which according to ATSDR’s website are intended to serve as screening levels used by ATSDR health assessors and others to identify contaminants and potential health effects that may be of concern at hazardous waste sites. According to ATSDR’s website, for non-carcinogens, ATSDR adopted a practice similar to that of EPA’s oral reference dose and inhalation reference concentration. Unlike EPA’s IRIS Program, however, ATSDR does not develop quantitative cancer toxicity values.

Provisional Peer Reviewed Toxicity Values (PPRTV). PPRTVs are toxicity values that EPA’s National Center for Environmental Assessment ordinarily prepares on an ongoing basis to support cleanup decisions at Superfund sites. PPRTVs are derived for chronic and subchronic exposure durations in instances where IRIS toxicity assessments are not available, and are sometimes derived for subchronic exposure durations when an IRIS toxicity assessment on chronic exposure exists. Also, while PPRTVs receive internal review by EPA scientists and external peer review by independent scientific experts, they differ from IRIS values in that they do not undergo the same rigorous process of peer review and public participation.

California Environmental Protection Agency (Cal/EPA) Toxicity Assessments. Cal/EPA prepares toxicity assessments, which are peer-reviewed and provide quantitative values for both cancer and noncancer effects. According to Cal/EPA’s website, Cal/EPA’s Office of Environmental Health Hazard Assessment is responsible for developing and providing managers in state and local government agencies with toxicological and medical information relevant to managing risks and making decisions involving public health.
The office develops procedures and practices for performing health risk assessments for those involved in environmental health issues, including policymakers, businesspeople, members of community groups, news reporters, and others with an interest in the potential health effects of toxic chemicals. Specifically, the office publishes “A Guide to Health Risk Assessment,” which outlines in a generalized form the four risk assessment steps in the National Academies model described above. EPA’s IRIS Program invites IRIS users to submit nominations for chemicals to be considered for new or updated IRIS toxicity assessments. The IRIS Program solicits nominations from EPA program offices and regions and other federal agencies by issuing a memorandum and solicits nominations from the public by publishing a solicitation in the Federal Register. Generally, the IRIS Program includes a list of criteria that the agency plans to use to prioritize chemicals for selection as part of the solicitation. The IRIS Program included the following criteria in its most recent 2011 nomination solicitation: (1) potential public health impact; (2) EPA statutory, regulatory, or program-specific implementation needs; (3) availability of new scientific information or methodology that might significantly change the current IRIS information; (4) interest to other governmental agencies or the public; (5) availability of other scientific assessment documents that could serve as a basis for development of an IRIS toxicity assessment; and (6) other factors such as widespread exposure to the chemical. After receiving nominations, IRIS Program staff conducts a preliminary literature search to determine whether there is sufficient information to develop toxicity values for the chemicals nominated. 
According to IRIS Program officials, the purpose of the literature search is to determine if there is sufficient scientific data from health studies that could be used to develop new IRIS toxicity assessments or update existing assessments. Following the preliminary literature search, the IRIS Program separates the nominated chemicals into two groups: (1) those for which health studies are available and could be used to develop or update an IRIS toxicity assessment and (2) those for which there are not enough data to develop an assessment. Next, according to IRIS Program officials, they provide EPA program offices and regions with an annotated list of chemical nominations that specifies the degree to which health studies are available for each chemical and ask for feedback regarding which chemicals are the highest priorities. After considering such feedback, the IRIS Program selects chemicals for new or updated IRIS toxicity assessments. The IRIS chemical nomination and selection process culminates in the publication in the Federal Register of the IRIS agenda— which contains, among other things, a list of chemicals for which the IRIS Program intends to initiate IRIS toxicity assessments, as well as a list of ongoing IRIS toxicity assessments. For example, the most recent IRIS agenda, published in May 2012, lists 15 IRIS toxicity assessments—with planned start dates ranging from fiscal year 2012 to fiscal year 2014—as well as a list of the 52 IRIS toxicity assessments that were already under way. According to the IRIS agenda, EPA considers its own resources and the availability of guidance, guidelines, and policy decisions in deciding when to start assessments for the selected chemicals. EPA has not done a recent evaluation of demand for IRIS toxicity assessments with input from users inside and outside the agency. 
EPA conducted its most recent evaluation of demand for IRIS toxicity assessments in September 2003, which included input from users inside and outside EPA, but it has not performed a similar review since that time. Without a clear understanding of current demand for IRIS toxicity assessments, EPA cannot adequately measure the program’s performance; effectively determine the number of IRIS assessments required to meet the statutory, regulatory, and programmatic needs of IRIS users; or know the extent to which unmet demand exists. EPA conducted its last evaluation of demand in 2003 at the request of Congress. In September 2000, due to concerns that EPA and state regulators were relying on potentially outdated scientific information, the Senate Committee on Appropriations requested that EPA conduct a needs assessment with public input to determine the need for increasing the annual rate of new and updated IRIS toxicity assessments. In response to the Senate request, EPA conducted a needs assessment—the results of which are discussed in its September 2003 report, Needs Assessment for U.S. EPA’s Integrated Risk Information System. According to the report, EPA estimated that 50 new or updated IRIS toxicity assessments a year were needed to meet user needs. Specifically, EPA estimated that completing 50 assessments annually would allow the agency to routinely update existing toxicity assessments in the IRIS database, as well as respond to immediate user needs for new or updated assessments. This estimate, according to the report, was based on EPA’s past experience soliciting nominations from EPA program offices and regions and information obtained through a July 2001 query of EPA program offices and regions and the public that requested information on which chemicals they considered priorities for assessment. However, based on our review of the report, we did not find sufficient support for the estimate.
Specifically, the report did not describe how EPA’s past experience or its 2001 query were used to derive the report’s estimate that about 50 IRIS toxicity assessments per year were needed to meet demand. In addition, the report stated that because EPA received a small number of responses to the agency’s 2001 query, it is not clear if the responses received are necessarily representative of the broad range of IRIS users. Although EPA’s 2003 needs assessment is a decade old, EPA officials told us that the agency does not currently have plans to perform another evaluation of demand for the IRIS Program and that, due to changing conditions over the last 10 years, the 2003 evaluation was not applicable to current conditions. IRIS officials stated that the IRIS Program’s primary mechanism for monitoring the needs of EPA’s program and regional offices at present is outreach, such as holding quarterly meetings with representatives from each EPA program office and holding internal scoping meetings with representatives from EPA program offices and regions for certain chemicals. In response to our questions regarding current demand, IRIS Program officials told us that the annual need for IRIS toxicity assessments is likely in the hundreds, though officials did not describe how they derived this number. We have previously reported on the need for EPA to comprehensively analyze its workload and workforce to effectively carry out its strategic goals and objectives. Specifically, in July 2011, we reported that the agency did not have a workload analysis to help determine the optimal numbers and distribution of staff among its laboratory enterprise—which is responsible for providing the scientific research, technical support, and analytical services that underpin its policies and regulations.
In addition, we have previously reported that, in developing new initiatives, agencies can benefit from following leading practices for strategic planning. Congress enacted the Government Performance and Results Act (GPRA) in 1993 to improve efficiency and accountability of federal programs. We have reported that these requirements also can serve as leading practices for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. Of these leading practices, it is particularly important for agencies to define strategies that address management challenges that threaten their ability to meet long-term goals—including a description of the resources needed to meet established goals. Without an evaluation of current demand for IRIS toxicity assessments that takes into account resource constraints, the IRIS Program risks not being able to develop a plan that lays out realistic goals based on current conditions. The IRIS Program’s chemical nomination and selection process, which the agency uses to gauge interest in the IRIS Program from users inside and outside of EPA, may not accurately reflect current demand for IRIS toxicity assessments. Our analysis of IRIS Program data indicates that the IRIS Program received nominations for 75 chemicals from EPA and non-EPA IRIS users in response to its most recent 2011 nomination period. However, the 75 chemicals received for the 2011 nomination period may not accurately reflect current demand for IRIS toxicity assessments. As about 1,000 new chemicals are listed for commercial use each year, demand for IRIS toxicity assessments is potentially very high, but the number of chemicals nominated may either overstate or understate actual demand. For example, it is not clear how many chemicals IRIS users did not nominate due to concerns that the IRIS toxicity assessment would not be completed in a timely manner. 
Officials from EPA’s Office of Water told us that even though they may need an IRIS toxicity assessment, they sometimes develop their own chemical toxicity assessments to meet their urgent or time-critical needs, such as meeting statutory deadlines. Also, given the long-standing challenges the IRIS Program has had in routinely starting new assessments, some EPA IRIS users told us they chose not to nominate new chemicals for assessment and instead nominated chemicals that were already listed on the IRIS agenda as under way. For example, according to officials from EPA’s Office of Solid Waste and Emergency Response, due to the large number of chemicals already listed on the IRIS agenda and the IRIS Program’s limited resources, in some cases, they reiterated support for chemicals that were already listed on the agenda as under way rather than nominate new chemicals. IRIS Program officials told us that, although EPA program offices and regions and other IRIS users would like to see the IRIS Program produce more IRIS toxicity assessments each year, current resources constrain the speed at which the IRIS Program can complete them. For example, EPA issued 4 IRIS toxicity assessments in fiscal year 2012 (see fig. 2). EPA has issued from 2 to 11 IRIS toxicity assessments annually since fiscal year 2002. Because the agency has been unable to keep up with demand for IRIS toxicity assessments, it has had to prioritize its selection of chemicals for IRIS toxicity assessment. Furthermore, EPA has not clearly articulated under what circumstances IRIS toxicity assessments are not needed, and the IRIS Program’s process for prioritizing chemicals does not provide clarity regarding why specific chemicals are selected for assessment and others are not. According to IRIS Program officials, some chemicals may not need IRIS toxicity assessments.
While the IRIS Program has developed criteria that are used to prioritize its selection of chemicals for IRIS toxicity assessment, it is not clear how it applies these criteria—including how it determines the circumstances under which program offices and regions may or may not need an IRIS toxicity assessment. As discussed earlier, the IRIS Program published its chemical selection criteria when it solicited nominations for IRIS toxicity assessment for its 2011 nomination period. However, in announcing that it had selected 15 IRIS toxicity assessments in its 2012 IRIS agenda, the IRIS Program did not explain, and has not published information on, how the agency applied its selection criteria. OMB’s implementing guidance for internal control requirements for federal agencies emphasizes the need for agencies to develop policies that ensure the effectiveness and efficiency of their operations and, as part of that, emphasizes that information related to guidance should be communicated to relevant personnel at all levels within an organization and outside the agency in a relevant, reliable, and timely manner. In August 2012, IRIS Program officials told us that they were working to develop a better description of the nomination and selection process that would clarify how the agency applied the six criteria but, as of March 2013, had not done so. Consequently, for the chemicals that were nominated during the most recent 2011 nomination period but not selected, it is not clear how many, if any, were excluded from consideration because they did not meet the IRIS Program’s selection criteria or because the IRIS Program determined that an IRIS toxicity assessment was not needed—or, alternatively, whether they were not selected because of resource constraints or other reasons. EPA has not implemented an agencywide strategy for addressing the unmet needs of EPA program offices and regions when IRIS toxicity assessments are not available, applicable, or current.
Specifically, EPA does not have (1) a strategy for identifying and filling data gaps that would enable it to conduct IRIS toxicity assessments for nominated chemicals that were not selected for IRIS toxicity assessment due to insufficient data and (2) agencywide guidance for addressing unmet needs when IRIS toxicity assessments are not available, applicable, or current—which is consistent with findings reported recently by EPA’s Inspector General and Science Advisory Board. EPA does not have a strategy for identifying and filling data gaps that would enable it to conduct IRIS toxicity assessments for nominated chemicals that were not selected for IRIS toxicity assessment because of insufficient scientific data from health studies. As discussed earlier, as part of the IRIS chemical nomination and selection process, IRIS Program officials separate nominated chemicals into two groups: (1) those for which sufficient scientific data from health studies exist that could be used to develop or update an IRIS toxicity assessment and (2) those for which sufficient data do not exist for developing an assessment. For example, as a part of its most recent 2011 nomination period, the IRIS Program dropped 11 of the 75 chemicals nominated from consideration because sufficient scientific data from health studies were not available to develop an IRIS toxicity assessment. One of the chemicals dropped from consideration due to insufficient data was nominated in 2005, 2007, and 2011. The chemical—iso-octane, or 2,2,4-trimethylpentane, which is a constituent of motor fuels—was nominated, according to officials with EPA’s Office of Underground Storage Tanks, within the Office of Solid Waste and Emergency Response, so that the office can determine appropriate cleanup levels for leaking underground storage tank sites. 
Moreover, Section 1505 of the Energy Policy Act of 2005 directed the EPA Administrator to, among other things, conduct a study on the effects on public health (e.g., the effects on children, pregnant women, minority or low-income communities, and other sensitive populations) of increased use of iso-octane and six other fuel additives as substitutes for methyl tertiary butyl ether (MTBE). While the IRIS Program prepared and issued an IRIS toxicity assessment that contained the qualitative hazard identification description of iso-octane in 2007, it was unable to derive quantitative IRIS values due to insufficient data on the chemical’s health effects in humans. According to EPA’s 2007 IRIS assessment of iso-octane, the IRIS Program did not develop quantitative estimates of noncancer and cancer risks because the studies needed to support such estimates were not available. Consequently, EPA’s Office of Underground Storage Tanks nominated iso-octane again in 2011. In response, according to IRIS Program officials, the IRIS Program evaluated the literature published since the 2007 IRIS toxicity assessment was completed and determined that no new studies were available to support development of quantitative IRIS values. Therefore, iso-octane was not considered for an IRIS toxicity assessment in 2011. According to officials with the Office of Underground Storage Tanks, they meet with IRIS Program officials regularly, and the IRIS Program is aware of their need for IRIS toxicity assessments related to these chemicals. However, should officials with the Office of Underground Storage Tanks nominate iso-octane again, EPA cannot ensure that the data needed to prepare an IRIS toxicity assessment that includes quantitative IRIS values will be available and thus allow EPA to address this unmet need.
Without quantitative IRIS toxicity values for these chemicals, it is unclear how EPA will conduct the study of the effects of these chemicals on public health required by the Energy Policy Act of 2005. The National Toxicology Program—created in 1978 as a cooperative effort to coordinate toxicology testing programs within the federal government, strengthen the science base in toxicology, develop and validate improved testing methods, and provide information about potentially toxic chemicals to health, regulatory, and research agencies, scientific and medical communities, and the public—may also be a potential research source. Without such research—which is necessary to fill data gaps needed to develop IRIS toxicity assessments—the agency will be unable to ensure that it can respond to unmet EPA program offices’ and regions’ programmatic and public health needs in the future. EPA does not have agencywide guidance for addressing the needs of its program offices and regions when IRIS toxicity assessments are not available, applicable, or current. IRIS Program officials told us that, while there is no agencywide guidance, they work with staff from program offices and regions on a case-by-case basis to find alternatives to IRIS toxicity assessments. For example, IRIS Program officials told us that, in some cases, the Superfund Technical Support Center may be able to partially address the needs of the Office of Solid Waste and Emergency Response or regions by summarizing peer-reviewed studies. In other cases, they said that they may work with the Office of Solid Waste and Emergency Response, as well as other program offices and regions, to determine if a PPRTV would meet their needs.
In 2008, EPA’s Board of Scientific Counselors recommended that EPA consider using PPRTVs as an interim measure to meet its needs for some chemicals, if an IRIS toxicity assessment was not available, and recommended that well-developed PPRTVs be considered as a source of prioritization in the development of full IRIS documents. However, it is unclear how frequently program offices and regions use PPRTVs to support their statutory, regulatory, or programmatic needs—beyond their use in Superfund risk assessments—because EPA does not collect information on, or have agencywide guidance on, when a PPRTV, or other toxicity assessment, might be an appropriate alternative to an IRIS toxicity assessment. Without such guidance, EPA cannot ensure that it has a consistent approach for addressing the needs of program offices and regions when IRIS toxicity assessments are not available, applicable, or current. Under federal standards of internal control, agencies are to clearly document internal control in writing in management directives, administrative policies, or operating manuals and have it readily available for examination. Other non-EPA sources of toxicity information include ATSDR Minimal Risk Levels and Cal/EPA toxicity values. Although the Office of Solid Waste and Emergency Response developed the hierarchy of toxicity values specifically for the Superfund Program, officials stated that the hierarchy is generally used by all suboffices within the Office of Solid Waste and Emergency Response when IRIS toxicity assessments are not available or current. For example, regional officials used ATSDR Toxicological Profiles to address issues in their region because an IRIS toxicity assessment was not available. However, the officials noted that ATSDR does not develop cancer values. According to officials with the Office of Water—which is responsible for implementing, among other mandates, the Clean Water Act and Safe Drinking Water Act—they develop toxicity assessments for chemicals to meet statutory deadlines.
For example, under the 1996 amendments to the Safe Drinking Water Act, every 5 years, EPA is to determine for at least five unregulated contaminants, including chemicals, whether regulation is warranted, considering those that present the greatest public health concern. Because of the limited number of IRIS toxicity assessments the IRIS Program can select and develop at one time, the Office of Water created a prioritization scheme under which it nominates for IRIS toxicity assessment those chemicals that are the most controversial and high-profile, have a high economic impact, and will take more time and staff to complete. The Office of Water can then, according to officials from that office, develop its own assessments for chemicals that are less controversial and take less time and staff to complete in order to meet some of its programmatic needs. Officials from the Office of Water told us that the office develops its own assessments for some chemicals because the IRIS Program would not be able to complete most of the needed toxicity assessments in time to meet the office’s statutory deadlines. Similarly, the Office of Pollution Prevention and Toxics, within the Office of Chemical Safety and Pollution Prevention, has developed its own toxicity assessments. The Office of Pollution Prevention and Toxics is responsible for implementing the Toxic Substances Control Act, which provides EPA with the authority to obtain more information on chemicals and to regulate those chemicals that the agency determines pose unreasonable risks to human health or the environment. In February 2012, the office announced plans to develop risk assessments on 83 chemicals.
While the office has not nominated any chemicals for IRIS toxicity assessment over the past three nomination periods through the formal nomination process, according to EPA officials with the office, in developing its risk assessments, it plans to incorporate information from IRIS toxicity assessments to the extent such information is available, recent, and relevant. These officials told us that the risk assessments they are conducting in support of the Toxic Substances Control Act are often based on intermittent exposure of workers and consumers to chemicals contained in products. However, they also told us that, while the IRIS values contained in the database may not always be applicable, often other data available in the IRIS database are applicable, such as toxicity information for shorter-term exposure scenarios that have long-lasting or persistent effects (e.g., developmental toxicity). In these cases, they said that they have used the hazard and dose-response information described in an IRIS toxicity assessment for a particular chemical to develop their own toxicity assessment. IRIS Program officials said that they are working with the Office of Pollution Prevention and Toxics and other EPA offices to find other options for assessing toxicity, such as PPRTVs, when IRIS toxicity assessments are not available, applicable, or current. While IRIS toxicity assessments may not be applicable in all situations, EPA does not have agencywide guidance that outlines the circumstances under which program offices and regions may or may not need IRIS toxicity assessments, or that describes appropriate alternative sources to IRIS toxicity assessments. Our finding concerning the various approaches EPA program offices and regions use to address their need for toxicity assessments is consistent with findings reported recently by EPA’s Inspector General and Science Advisory Board.
EPA's Office of the Inspector General conducted a survey of 300 respondents from EPA program offices and regions in January 2013. The survey found that 34 percent of respondents indicated that they had experienced a situation in which they or their team researched a substance that was listed in IRIS but used toxicity values from another source instead of those available in IRIS. Of those respondents, 68 percent indicated that one of their top three reasons for doing so was that the alternative source was more up-to-date with current scientific practice or other information. Additionally, 28 percent of all survey respondents indicated that they had experienced a situation in which they or their team developed their own toxicity values. However, more than a third of respondents indicated that there were no standard operating procedures or other guidance regarding how to choose a source of toxicity values for their office's work. EPA's Science Advisory Board has also reported on differences across the agency regarding the use of scientific information for decision making. For example, in July 2012, the Science Advisory Board reported that available resources for developing toxicity assessments, the number of scientific staff engaged in the work, and the institutional and legal framework supporting these assessments differ across the agency. The report also noted that some EPA programs and regions do not have the infrastructure required to generate all assessments needed to support their own activities and that scientists in these offices work within statutory constraints, often on an extremely short timetable and with limited budgets. Within those constraints, according to the report, they either assess available scientific information themselves or rely on the Office of Research and Development, other parts of EPA, or other federal or state agencies for the science assessments needed to support decision making.
We have also reported on EPA's fragmented and largely uncoordinated science activities. Specifically, in July 2011, we reported that EPA had not fully addressed the findings and recommendations of five independent evaluations over the past 20 years regarding long-standing planning, coordination, or leadership issues that hamper the quality, effectiveness, and efficiency of EPA's science activities, including its laboratory operations. We recommended, among other things, that EPA establish a top-level science official with the authority and responsibility to coordinate, oversee, and make management decisions regarding major scientific activities throughout the agency, including the work of all program, regional, and Office of Research and Development laboratories. While EPA has taken steps in response to this recommendation, it has not fully implemented it. In particular, while EPA expanded the responsibilities of the agency's science advisor to coordinate, oversee, and make recommendations to EPA's Administrator regarding the agency's major scientific activities, as of March 2013, the agency had not given this official the authority to make management decisions regarding scientific activities across EPA as we recommended. In the absence of such authority, there is no agency mechanism for understanding and addressing the unmet needs for IRIS toxicity assessments. As a result, EPA may not be maximizing its limited resources or addressing the statutory, regulatory, and programmatic needs of EPA program offices and regions in a consistent manner. With tens of thousands of chemicals listed with EPA for commercial use in the United States and about 1,000 new chemicals listed for commercial use each year, demand for IRIS toxicity assessments is potentially very high. EPA's IRIS Program develops new toxicity assessments and, as needed, updates information on existing toxicity assessments contained in the IRIS database.
EPA has not evaluated demand for IRIS toxicity assessments with input from users inside and outside the agency since 2003, and although IRIS Program officials recognize that the 2003 estimate does not reflect current conditions, the agency does not plan to perform another evaluation of demand. Without a clear understanding of current demand for IRIS toxicity assessments, EPA cannot measure the program's performance; determine the number of IRIS assessments required to meet the statutory, regulatory, and programmatic needs of IRIS users; or know the extent of unmet demand. The IRIS Program's chemical nomination and selection process, which the agency uses to gauge interest in the IRIS Program from users inside and outside EPA, may not accurately reflect current demand for IRIS toxicity assessments. For example, it is not clear how many chemicals IRIS users did not nominate due to concerns that the IRIS toxicity assessment would not be completed in a timely manner. Furthermore, EPA has not clearly articulated how the IRIS Program applies the criteria it uses to prioritize the selection of chemicals for IRIS toxicity assessment—including how it determines the circumstances under which program offices and regions may or may not need an IRIS toxicity assessment. Consequently, for chemicals that are nominated but not selected for IRIS toxicity assessment, it is not clear whether they were excluded from consideration because they did not meet the IRIS Program's selection criteria, because the IRIS Program determined that an IRIS toxicity assessment was not needed, or because of resource constraints or other reasons. EPA has not implemented an agencywide strategy for addressing the unmet needs of EPA program offices and regions when IRIS toxicity assessments are not available, applicable, or current.
Specifically, EPA does not have a strategy for identifying and filling data gaps that would enable it to conduct IRIS toxicity assessments for nominated chemicals that were not selected for assessment due to insufficient data. Because EPA does not have a process in place for identifying and filling research gaps, it cannot ensure that it will be able to respond to the unmet programmatic and public health needs of EPA program offices and regions in the future. Also, EPA does not have guidance that outlines the circumstances under which program offices and regions may or may not need an IRIS toxicity assessment, or that describes appropriate alternative sources to IRIS toxicity assessments. Without guidance, EPA cannot ensure a consistent approach for addressing the needs of program offices and regions when IRIS toxicity assessments are not available, applicable, or current. We are making three recommendations to the EPA Administrator. To ensure that EPA can measure the IRIS Program's performance and determine the number of IRIS toxicity assessments required to meet the statutory, regulatory, and programmatic needs of IRIS users, we recommend that the EPA Administrator direct the Office of Research and Development to implement the following two actions without impeding the progress of ongoing assessments: Identify and evaluate demand for the IRIS Program to determine the number of IRIS toxicity assessments and resources required to meet users' needs. Document how EPA applies its IRIS toxicity assessment selection criteria, including the circumstances under which program offices and regions may or may not need an IRIS toxicity assessment.
To ensure that EPA maximizes its limited resources and addresses the statutory, regulatory, and programmatic needs of EPA program offices and regions when IRIS toxicity assessments are not available, we recommend that the EPA Administrator direct the Deputy Administrator, in coordination with EPA’s Science Advisor, to implement the following action: Once demand for the IRIS Program is determined, develop an agencywide strategy to address the unmet needs of EPA program offices and regions that includes, at a minimum: coordination across EPA offices and with other federal research agencies to help identify and fill data gaps that preclude the agency from conducting IRIS toxicity assessments, and guidance that describes alternative sources of toxicity information and when it would be appropriate to use them when IRIS values are not available, applicable, or current. We provided a draft of this report to EPA for its review and comment. EPA’s written comments and our detailed response to them are presented in appendix III. EPA also provided technical comments on our draft report, which we incorporated, as appropriate. In its written comments, EPA agreed with our findings and two of our recommendations and partially agreed with our third recommendation. Specifically, EPA agreed with our recommendations that the Office of Research and Development (1) identify and evaluate demand for the IRIS Program to determine the number of IRIS toxicity assessments and resources required to meet users’ needs and (2) document how EPA applies its IRIS toxicity assessment selection criteria, including the circumstances under which program offices and regions may or may not need an IRIS toxicity assessment. In its written comments, EPA stated that the Office of Research and Development this year will evaluate the potential future demand for IRIS toxicity assessments and the resources required to meet that demand. 
EPA also stated that it will better describe for internal and external stakeholders and the public the nomination and selection process for chemicals for IRIS toxicity assessments, including the rationale for not selecting nominated chemicals for IRIS assessment. With respect to our third recommendation, that EPA develop an agencywide strategy to address unmet need for IRIS toxicity assessments, in its written comments, EPA requested that we provide additional clarification and consider refining our recommendation. Specifically, EPA stated that it understood and supported the goal of developing an agencywide strategy to help identify and fill data gaps that preclude the agency from conducting IRIS toxicity assessments, but urged us to clarify more precisely the extent to which it must rely on others to conduct research to fill data gaps on IRIS chemicals. As we note in the report, IRIS Program officials told us that they do not have a process in place for filling research gaps and acknowledged that better coordination across EPA offices and with other federal research agencies, such as the Department of Health and Human Services’ National Toxicology Program, could help address this issue. We acknowledge that EPA has limited resources, which may preclude the agency from making substantial investments in research into how individual chemicals affect human health. As such, to ensure that the agency maximizes its limited resources, we have recommended that EPA develop a strategy to coordinate with other federal research agencies to help identify and fill data gaps. In this context, EPA acknowledged that it must look to other federal agencies, academic institutions, and chemical product producers to fund research into how chemicals affect human health, as we have recommended. 
EPA also stated, in its written comments, that the agency can and will do a more effective job to make data needs known to relevant federal agencies and nonfederal organizations that either fund or conduct chemical research. Also regarding our third recommendation that EPA develop an agencywide strategy to address unmet needs for IRIS toxicity assessments, based on technical comments provided by EPA officials prior to receiving the agency’s letter dated April 16, 2013, we refined the wording of our third recommendation. The original text recommended that EPA develop guidance that describes alternative sources of toxicity information and procedures for preparing toxicity assessments when IRIS values are not available, applicable, or current. We refined the wording of this recommendation to read: guidance that describes alternative sources of toxicity information and when it would be appropriate to use them when IRIS values are not available, applicable, or current. The revised language more accurately reflects the intent of our recommendation. In addition, in its written comments, EPA stated that it understood our interest in the agency developing guidance that describes alternative sources of toxicity information and agreed that such guidance might be helpful. However, EPA stated that the development of such guidance is best left to individual EPA programs. We disagree. In the absence of agencywide guidance that addresses unmet demand for IRIS assessments, EPA offices operate in much the same way they operated before the IRIS Program was formed to develop consensus opinions within the agency about the health effects from chronic exposure to chemicals. 
As we note in this report, we have previously reported on EPA’s fragmented and largely uncoordinated science activities and recommended, among other things, that EPA establish a top-level science official with the authority and responsibility to coordinate, oversee, and make management decisions regarding major scientific activities throughout the agency. Consistent with our prior report and recommendation, we believe that guidance regarding major scientific activities should also come from a top-level science official. However, as we note in our current report, EPA has not provided its Science Advisor with the authority to make management decisions regarding scientific activities across EPA as we previously recommended. Therefore, we believe that agencywide guidance should come from EPA’s Deputy Administrator in coordination with EPA’s Science Advisor. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Administrator of EPA, the appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which the Environmental Protection Agency (EPA) has evaluated demand for IRIS toxicity assessments, we reviewed EPA’s 2003 evaluation of demand for IRIS toxicity assessments. 
We also interviewed IRIS Program officials to determine whether they had conducted other evaluations of demand since 2003, how they derived the 2003 estimate, and whether that estimate reflects current conditions. Because the 2003 evaluation did not provide sufficient information on its methodology, we were unable to fully assess its estimate. We corroborated EPA officials' assertion that the 2003 assessment no longer reflects current conditions based on our understanding of the IRIS Program. As we discussed earlier, the importance of EPA's IRIS Program has increased over time as EPA program offices and regions have increasingly relied on IRIS toxicity assessments in making environmental protection and risk management decisions. In addition, because about 1,000 new chemicals are listed for commercial use each year, demand is likely to change over time. To determine the extent to which EPA's process for nominating and selecting chemicals for IRIS toxicity assessment accurately reflects current demand, we reviewed data provided by the IRIS Program and from the IRIS Program's website on the number of IRIS toxicity assessments it completed annually from fiscal years 2002 through 2012. In addition, we analyzed all chemical nomination forms submitted by EPA program offices and regions to the IRIS Program from 2005, 2007, and 2011—which were the last three times that EPA solicited nominations for new and updated IRIS toxicity assessments. For additional perspective on user needs, we reviewed non-EPA IRIS users' chemical nomination forms from 2011. To select and count the number of nominations, two analysts reviewed information EPA provided us to determine which documents to include in our analysis. We used the following inclusion and exclusion criteria: documents labeled as nominations were included, while documents labeled as another type of document were excluded.
For example, some documents were the IRIS Program's requests for nominations or Federal Register notices, which were not included in our analysis. In other instances, a nominating entity indicated that the chemicals were being prioritized rather than nominated (i.e., it clearly stated "this is a list of our priorities"), and these were not included in our analysis. We included individual nomination sheets in our analysis, but we did not include nomination cover sheets or other documents that IRIS Program officials sent us separately from the individual nomination sheets. We did not include a nomination form if the nominating entity indicated on the form that it was not a new nomination but a reiteration of support for a previous one. In some instances, EPA program offices and regions nominated two chemicals on one nomination form or listed two chemicals together. Chemicals were counted as a single nomination if they had the same Chemical Abstracts Service (CAS) registry number and as separate nominations if they had different CAS registry numbers. According to the CAS registry website, the CAS registry is the most authoritative collection of disclosed chemical substance information, containing more than 70 million organic and inorganic substances and 64 million sequences. In cases where it was not clear to both analysts whether to include a document in our analysis, IRIS Program officials provided confirmation. While we used this methodology to determine the number of nominations in the 2005, 2007, and 2011 nomination periods, using a different methodology might result in a different number of nominations. Because about 1,000 new chemicals are listed for commercial use each year, the chemicals nominated may either overstate or understate actual demand. In addition, we reviewed the IRIS Program's processes for soliciting nominations and selecting chemicals for IRIS toxicity assessments, including the agency's selection criteria.
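The counting rule described above can be expressed as a short sketch: nomination forms sharing a CAS registry number are counted once, and forms with different CAS registry numbers are counted separately. This is purely illustrative; the form records below are hypothetical examples (using real CAS numbers only as sample identifiers), not GAO's actual analysis data or tooling.

```python
# Illustrative sketch of the nomination-counting rule: forms that share
# a CAS registry number are counted as a single nomination; forms with
# different CAS numbers are counted separately. The form data below are
# hypothetical examples, not actual nomination records.

def count_nominations(forms):
    """Count unique chemical nominations by CAS registry number."""
    return len({form["cas_number"] for form in forms})

forms = [
    {"chemical": "Benzene", "cas_number": "71-43-2"},
    {"chemical": "Benzol", "cas_number": "71-43-2"},    # same CAS number: counted once
    {"chemical": "Toluene", "cas_number": "108-88-3"},  # different CAS number: counted separately
]
print(count_nominations(forms))  # → 2
```

As the methodology notes, a different counting rule (for example, counting each form rather than each unique CAS number) would yield a different total.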
We also reviewed agency guidance and interviewed IRIS Program officials to better understand the chemical nomination and selection process. To determine the extent to which EPA has implemented a strategy for addressing any unmet needs of EPA program offices and regions when IRIS toxicity assessments are not available, applicable, or current, we reviewed the IRIS Program’s efforts to analyze IRIS user chemical nominations. For context, we interviewed officials from EPA’s National Center for Environmental Assessment, which manages the IRIS Program and develops IRIS toxicity assessments. For additional perspective, we interviewed officials using a standard set of questions from a nonprobability sample of three EPA program offices and one region: the Office of the Administrator, the Office of Water, the Office of Solid Waste and Emergency Response, and EPA’s Region 2. We selected these program offices and region because they submitted 78 percent of the chemical nominations to the IRIS Program during the period we reviewed—2005, 2007, and 2011. These offices and region ranked the highest in terms of the number of chemical nominations submitted, and, in some cases, nominated chemicals more than once during different nomination years. Because this is a nonprobability sample, it is not generalizable to all EPA program offices and regions, but it can provide illustrative examples of the experience of those EPA program offices and one region that nominated 78 percent of chemicals for IRIS toxicity assessment during the period we reviewed. For example, we received information from officials from these offices about how EPA program offices and one region nominate chemicals for IRIS toxicity assessment, how the IRIS Program meets the needs of these offices and region over the course of nomination periods, and what alternative toxicity assessments these offices and region turn to when IRIS toxicity assessments are not available. 
For a summary of approaches used by selected EPA program offices and regions to address their IRIS toxicity assessment needs, see appendix II. Separately from our nonprobability sample, we also interviewed officials from the Office of Pollution Prevention and Toxics, within the Office of Chemical Safety and Pollution Prevention, because it did not nominate any chemicals for IRIS toxicity assessment for any of the last three nomination periods. We did not evaluate the scientific content or quality of IRIS toxicity assessments. We conducted this performance audit from April 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to officials at the Office of Solid Waste and Emergency Response, the office nominated 18 chemicals over the course of the 2005, 2007, and 2011 nomination periods. The Office of Solid Waste and Emergency Response provides policy, guidance, and direction for the agency’s emergency response and waste programs, including managing the Superfund Program, which responds to abandoned and active hazardous waste sites and accidental oil and chemical releases. IRIS toxicity assessments are used by the Office of Solid Waste and Emergency Response to, among other things, support mandated regulatory actions. 
For example, the Office of Underground Storage Tanks, within the Office of Solid Waste and Emergency Response, submitted chemicals during the 2011 nomination period to support the requirement under Section 1505 of the Energy Policy Act of 2005 that the EPA Administrator conduct a study on the effects on public health of increased use of iso-octane and six other fuel additives as substitutes for methyl tertiary butyl ether (MTBE). When IRIS toxicity assessments are not available or current, Office of Solid Waste and Emergency Response officials stated they rely on other toxicity values to meet their programmatic needs. For example, officials at the Office of Underground Storage Tanks stated that, in the absence of IRIS values, states must resort to other sources for toxicological information, and this can lead to inconsistencies state-to-state. Officials also stated that, when an IRIS toxicity assessment is not available, the office refers to a hierarchy of toxicity values to be used in performing human health risk assessments for Superfund sites. In 2003, the Office of Solid Waste and Emergency Response updated this hierarchy, which is intended to help risk assessors identify appropriate sources of toxicology information and lists the sources as: (1) IRIS toxicity values, (2) Provisional Peer Reviewed Toxicity Values (PPRTVs), and (3) other EPA and non-EPA sources of toxicity information—with priority given to those sources of information that are the most current, publicly available, and peer reviewed. Such values include the Agency for Toxic Substances and Disease Registry (ATSDR) Minimal Risk Levels and California Environmental Protection Agency (Cal/EPA) toxicity values. Although developed specifically for the Superfund Program, officials stated that this guidance is generally used by all suboffices within the Office of Solid Waste and Emergency Response. 
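The hierarchy described above amounts to a simple ordered fallback: consult IRIS first, then PPRTVs, then other current, publicly available, peer-reviewed sources such as ATSDR Minimal Risk Levels or Cal/EPA values. A minimal sketch under stated assumptions follows; the lookup tables, chemicals, and numeric values are invented placeholders for illustration, not an actual EPA data interface.

```python
# Minimal sketch of the three-tier toxicity-value hierarchy described
# above. Source tiers are consulted in order, and the first tier with a
# value for the chemical wins. Chemicals and values are hypothetical.

HIERARCHY = ["IRIS", "PPRTV", "Other (e.g., ATSDR, Cal/EPA)"]

def select_toxicity_value(chemical, sources):
    """Return (tier, value) from the first tier with a value, else (None, None)."""
    for tier in HIERARCHY:
        value = sources.get(tier, {}).get(chemical)
        if value is not None:
            return tier, value
    return None, None

# Hypothetical example data: one chemical has an IRIS value; another has
# only a PPRTV, so the lookup falls through to the second tier.
sources = {
    "IRIS": {"benzene": 0.004},
    "PPRTV": {"n-propylbenzene": 0.1},
}
print(select_toxicity_value("benzene", sources))          # → ('IRIS', 0.004)
print(select_toxicity_value("n-propylbenzene", sources))  # → ('PPRTV', 0.1)
```

A chemical absent from all three tiers returns no value, which corresponds to the situation the report describes: no applicable toxicity value is available and the office must look elsewhere.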
According to Region 2 officials, the region nominated 22 chemicals over the course of the 2005, 2007, and 2011 nomination periods. EPA's Region 2 serves New Jersey, New York, Puerto Rico, the U.S. Virgin Islands, and eight tribal nations. Region 2 nominates chemicals for IRIS assessment on behalf of risk assessors throughout the region—that is, EPA staff and other officials throughout the region, primarily at Superfund sites, who evaluate chemical risks. For example, Region 2 stated in its chemical nomination form for the 2011 nomination period that it needed IRIS toxicity assessments to support cleanup decisions for chemicals present in residential properties and in groundwater. Region 2 officials indicated that, when IRIS toxicity values are not available, they may rely on other toxicity values to meet their programmatic needs and follow the Office of Solid Waste and Emergency Response's hierarchy of values in consultation with the IRIS Program as appropriate. Region 2 officials stated that, in some cases, other organizations such as ATSDR or Cal/EPA may develop a quantitative value before the IRIS toxicity assessment is revised. In that case, the region would consider using the quantitative value based on discussions with the IRIS Program. According to Office of Water officials, the office nominated 23 chemicals over the course of the 2005, 2007, and 2011 nomination periods. EPA's Office of Water is responsible for drinking water safety, and it restores and maintains oceans, watersheds, and their aquatic ecosystems. The Office of Water is responsible for implementing, among other mandates, the Clean Water Act and the Safe Drinking Water Act. For example, in its chemical nomination form for the 2011 nomination period, the Office of Water stated that it needed IRIS toxicity assessments to develop regulations. The Office of Water develops assessments for chemicals it needs to meet statutory deadlines.
Because of the limited number of IRIS toxicity assessments the IRIS Program can select and develop at one time, the Office of Water created a scheme to prioritize those chemicals that are the most controversial and high-profile, have a high economic impact, and will take more staff and time to complete. The Office of Water can then develop its own assessments for chemicals that are less controversial and take less time and staff to complete in order to meet some of its programmatic needs. Officials stated that the office develops its own assessments for some chemicals because, in reality, the IRIS Program would not complete most of the needed toxicity assessments in time to meet the office's statutory deadlines. According to officials at the Office of the Administrator, the office nominated 26 chemicals over the course of the 2005, 2007, and 2011 nomination periods. EPA's Office of the Administrator provides executive and logistical support for the EPA Administrator. The office supports the leadership of EPA's programs and activities to protect human health and the environment. An official from the Office of Policy, within the Office of the Administrator, stated that rationales for nominating chemicals varied widely—for example, increasing or widespread exposure to a chemical or the availability of new data to develop a new IRIS toxicity assessment or update an existing one. The official noted that the Office of Policy's programmatic needs differ from other EPA offices' needs in that it does not develop regulations or risk assessments. Instead, the office provides assistance for other EPA offices' assessments and reviews assessments that other offices perform. In the absence of an IRIS toxicity assessment, the Office of Policy relies on the original literature, review articles, and assessments prepared by other agencies. Such values include the ATSDR Minimal Risk Levels and Cal/EPA toxicity values.
An official with the Office of Children's Health Protection, also within the Office of the Administrator, stated that most of its nominations were for chemicals that fell under the Toxic Substances Control Act or the Safe Drinking Water Act and were based on children's health concerns. The official stated that, in the absence of an IRIS toxicity assessment, the Office of Children's Health Protection goes directly to the literature or to work done by other government agencies or programs, such as the National Toxicology Program. The Office of Pollution Prevention and Toxics, which is responsible for implementing the Toxic Substances Control Act, a law that provides EPA with the authority to obtain more information on chemicals and to regulate those chemicals that the agency determines pose unreasonable risks to human health or the environment, announced in February 2012 its plans to develop risk assessments on 83 chemicals. While the office has not nominated any chemicals for IRIS toxicity assessment over the past three nomination periods through the formal nomination process, according to EPA officials with the Office of Pollution Prevention and Toxics, in developing its risk assessments, it plans to incorporate information from IRIS toxicity assessments to the extent such information is available, recent, and relevant. Officials at the Office of Pollution Prevention and Toxics and senior staff of the Office of Research and Development, which houses the IRIS Program, have compared the list of existing and ongoing IRIS toxicity assessments in order to share relevant literature and hazard reviews for upcoming risk assessments related to the Toxic Substances Control Act. These officials told us that the risk assessments they are conducting in support of the Toxic Substances Control Act are often based on intermittent exposure of workers and consumers to chemicals contained in products.
However, they also told us that, while the IRIS values contained in the database may not always be applicable, other data available in the IRIS database often are, such as toxicity information for shorter-term exposure scenarios that have long-lasting or persistent effects (e.g., developmental toxicity). In these cases, they said that they have used the hazard and dose-response information described in an IRIS toxicity assessment for a particular chemical to develop their own toxicity assessment. In addition, according to IRIS Program officials, they have compared the list of chemicals for which the Office of Pollution Prevention and Toxics plans to conduct risk assessments with the list of existing and ongoing toxicity assessments and shared relevant literature and hazard reviews. IRIS Program officials also said that the Office of Pollution Prevention and Toxics participates with other EPA offices in prioritizing their needs for ongoing IRIS toxicity assessments. IRIS toxicity values are generally used to estimate risks associated with continuous exposures to a pollutant in the air or water. In most cases, according to IRIS Program officials, the information used to develop the dose-response assessments is based on intermittent exposures of workers or animals in a controlled environment, and IRIS assessments include an adjustment to continuous exposure in the derivation of toxicity values. IRIS Program officials said that they are working with the Office of Pollution Prevention and Toxics and other EPA offices to find other options for assessing toxicity, such as PPRTVs, when IRIS toxicity assessments are not available, applicable, or current. The following are GAO's comments on the letter from the Environmental Protection Agency dated April 16, 2013. 1.
In this report, we do not discuss the challenges associated with suspending the development of an ongoing IRIS toxicity assessment to await new research and, therefore, our recommendations are not aimed at addressing this issue. Instead, our report is concerned with data gaps that preclude EPA from starting an IRIS assessment. However, we have addressed issues concerning suspending the development of an ongoing IRIS assessment in a prior report. Specifically, in our 2008 report on EPA’s IRIS Program, we note that, as a general rule, requiring that IRIS assessments be based on the best science available at the time of the assessment is a standard that would best support a goal of completing assessments within reasonable time periods and minimizes the need to conduct significant levels of rework. In our 2008 report, we recommended that EPA establish a policy that endorses conducting IRIS assessments on the basis of peer-reviewed scientific studies available at the time of the assessment and develop criteria for allowing assessments to be suspended to await the completion of scientific studies only under exceptional circumstances. As of the date of this report, EPA has not implemented our 2008 recommendation. 2. We have reported in the past that EPA has found many provisions of the Toxic Substances Control Act difficult to implement, and we have suggested that Congress consider making statutory changes to strengthen EPA’s authority to obtain toxicity information from the chemical industry. However, as we note in our March 2013 report on the Toxic Substances Control Act, EPA has not pursued all opportunities to obtain chemical data using its existing authorities under the law. We agree that robust collaboration between the IRIS and Toxic Substances Control Act Programs could improve EPA’s ability to develop chemical assessments in a timely manner. 3. 
We continue to believe that agencywide guidance is needed that describes alternative sources of toxicity information and when it would be appropriate to use them when IRIS values are not available, applicable or current. As we note in this report, we have previously reported on EPA’s fragmented and largely uncoordinated science activities and recommended, among other things, that EPA establish a top-level science official with the authority and responsibility to coordinate, oversee, and make management decisions regarding major scientific activities throughout the agency.recommendation, we believe that guidance regarding major scientific activities should also come from a top-level science official. However, as we note in our current report, EPA has not provided its Science Advisor with the authority to make management decisions regarding scientific activities across EPA as we previously recommended. Therefore, we believe that agencywide guidance should come from EPA’s Deputy Administrator in coordination with EPA’s Science Advisor. Consistent with our prior report and 4. We recognize that EPA is responsible for assessing and managing environmental risks based on many laws with different requirements and that program offices and regions may not always need IRIS toxicity assessments. However, as we note in our report, EPA has not clearly articulated under what circumstances IRIS toxicity assessments are not needed. Moreover, in cases where program offices and regions have indicated a need for an IRIS toxicity assessment, but an assessment is not available, applicable, or current, EPA does not have guidance that describes alternative sources of toxicity information and when it would be appropriate to use them. In addition to the individual named above, Diane G. LoFaro, Assistant Director; Summer Lingard-Smith; and Marie Webb made key contributions to this report. 
Important contributions were also made by Mark Braza, Janice Ceperich, Nirmal Chaudhary, Richard Johnson, Cynthia Norris, Aaron Shiffrin, and Kiki Theodoropoulos.
|
EPA created the IRIS database in 1985 to help develop consensus opinions within the agency about the health effects from chronic exposure to chemicals. The health effects information in IRIS--referred to as IRIS toxicity assessments--provides fundamental scientific information EPA needs to develop human health risk assessments. GAO was asked to review the effectiveness of EPA's implementation of its IRIS toxicity assessment process. This report determines the extent to which (1) EPA has evaluated demand for IRIS toxicity assessments from users inside and outside EPA; (2) EPA's process for nominating and selecting chemicals for IRIS toxicity assessment accurately reflects demand; and (3) EPA has implemented a strategy for addressing any unmet agency needs when IRIS toxicity assessments are not available, applicable, or current. To do this work, GAO reviewed and analyzed IRIS nomination data, among other things, and interviewed EPA officials. GAO did not evaluate the scientific content or quality of IRIS toxicity assessments. The Environmental Protection Agency (EPA) has not conducted a recent evaluation of demand for Integrated Risk Information System (IRIS) toxicity assessments with input from users inside and outside EPA. Specifically, EPA issued a needs assessment report in 2003, which estimated that 50 new or updated IRIS toxicity assessments were needed each year to meet users' needs. However, GAO did not find sufficient support for the estimate. In addition, IRIS Program officials recognize that the 2003 estimate does not reflect current conditions, but the agency does not plan to perform another evaluation of demand. Without a clear understanding of current demand for IRIS toxicity assessments, EPA cannot adequately measure the program's performance; effectively determine the number of IRIS toxicity assessments required to meet the needs of IRIS users; or know the extent of unmet demand. 
The IRIS Program's chemical nomination and selection process, which the agency uses to gauge interest in the IRIS Program from users inside and outside of EPA, may not accurately reflect current demand for IRIS toxicity assessments. The 75 chemicals that were nominated in response to EPA's most recent 2011 nomination period may not reflect demand for a number of reasons. For example, given the long-standing challenges the IRIS Program has had in routinely starting new assessments, according to some EPA IRIS users, they chose not to nominate new chemicals for assessment. Also, EPA has not clearly articulated how the IRIS Program applies the criteria it uses to prioritize the selection of chemicals for IRIS toxicity assessment--including how it determines the circumstances under which an IRIS toxicity assessment is or is not needed. Consequently, for chemicals that were nominated but not selected for assessment, it is not clear how many, if any, were excluded from consideration because they did not meet the IRIS Program's selection criteria--that is, because the IRIS Program determined that an IRIS toxicity assessment was not needed--or, alternatively, whether they were not selected due to resource constraints or other reasons. EPA has not implemented an agencywide strategy for addressing the unmet needs of EPA program offices and regions when IRIS toxicity assessments are not available, applicable, or current. Specifically, EPA does not have a strategy for identifying and filling data gaps that would enable it to conduct IRIS toxicity assessments for nominated chemicals that are not selected for assessment because sufficient data from health studies are not available. IRIS Program officials stated that no agencywide mechanism exists for EPA to ensure that chemicals without sufficient scientific data during one nomination period will have such information by the next nomination period or even the one after that.
These officials acknowledged that better coordination across EPA and with other federal agencies could help address the issue. EPA also does not have agencywide guidance for addressing unmet needs when IRIS toxicity assessments are not available, applicable, or current. In the absence of agencywide guidance, officials from select EPA offices stated that they used a variety of alternatives to IRIS toxicity assessments to meet their needs, including using toxicity information from other EPA offices or other federal agencies. GAO recommends that EPA evaluate demand for IRIS assessments; document how the agency applies its selection criteria, including the circumstances under which an IRIS toxicity assessment is or is not needed; and develop an agencywide strategy that includes, at a minimum, coordination across EPA offices and with other federal agencies to identify and fill data gaps, and guidance that describes alternative sources of toxicity information. EPA agreed with the first two recommendations and partially agreed with the third.
|
Historically, tribes have been granted federal recognition through treaties, by the Congress, or through administrative decisions within the executive branch—principally by the Department of the Interior. In a 1977 report to the Congress, the American Indian Policy Review Commission criticized the criteria used by the department to assess whether a group should be recognized as a tribe. Specifically, the report stated that the criteria were not very clear and concluded that a large part of the department’s tribal recognition policy depended on which official responded to the group’s inquiries. Until the 1960s, the limited number of requests by groups to be federally recognized gave the department the flexibility to assess a group’s status on a case-by-case basis without formal guidelines. However, in response to an increase in the number of requests for federal recognition, the department determined that it needed a uniform and objective approach to evaluate these requests. In 1978, it established a regulatory process for recognizing tribes whose relationship with the United States had either lapsed or never been established—although tribes may seek recognition through other avenues, such as legislation or Department of the Interior administrative decisions unconnected to the regulatory process. In addition, not all tribes are eligible for the regulatory process. For example, tribes whose political relationship with the United States has been terminated by Congress, or tribes whose members are officially part of an already recognized tribe, are ineligible to be recognized through the regulatory process and must seek recognition through other avenues. The regulations lay out seven criteria that a group must meet before it can become a federally recognized tribe.
Essentially, these criteria require the petitioner to show that it is a distinct community that has continuously existed as a political entity since a time when the federal government broadly acknowledged a political relationship with all Indian tribes. The burden of proof is on petitioners to provide documentation to satisfy the seven criteria. A technical staff within BIA, consisting of historians, anthropologists, and genealogists, reviews the submitted documentation and makes its recommendations on a proposed finding either for or against recognition. Staff recommendations are subject to review by the department’s Office of the Solicitor and senior officials within BIA. The Assistant Secretary-Indian Affairs makes the final decision regarding the proposed finding, which is then published in the Federal Register, and a period of public comment, document submission, and response is allowed. The technical staff reviews the comments, documentation, and responses and makes recommendations on a final determination that are subject to the same levels of review as a proposed finding. The process culminates in a final determination by the Assistant Secretary who, depending on the nature of further evidence submitted, may or may not rule the same as the proposed finding. Petitioners and others may file requests for reconsideration with the Interior Board of Indian Appeals. While we found general agreement on the seven criteria that groups must meet to be granted recognition, there is great potential for disagreement when the question before the BIA is whether the level of available evidence is high enough to demonstrate that a petitioner meets the criteria. The need for clearer guidance on criteria and evidence used in recognition decisions became evident in a number of recent cases when the previous Assistant Secretary approved either proposed or final decisions to recognize tribes when the staff had recommended against recognition.
Much of the current controversy surrounding the regulatory process stems from these cases. For example, concerns over what constitutes continuous existence have centered on the allowable gap in time during which there is limited or no evidence that a petitioner has met one or more of the criteria. In one case, the technical staff recommended that a petitioner not be recognized because there was a 70-year period for which there was no evidence that the petitioner satisfied the criteria for continuous existence as a distinct community exhibiting political authority. The technical staff concluded that a 70-year evidentiary gap was too long to support a finding of continuous existence. The staff based its conclusion on precedent established through previous decisions in which the absence of evidence for shorter periods of time had served as grounds for finding that petitioners did not meet these criteria. However, in this case, the previous Assistant Secretary determined that the gap was not critical and issued a proposed finding to recognize the petitioner, concluding that continuous existence could be presumed despite the lack of specific evidence for a 70-year period. The regulations state that lack of evidence is cause for denial but note that historical situations and inherent limitations in the availability of evidence must be considered. The regulations specifically decline to define a permissible interval during which a group could be presumed to have continued to exist if the group could demonstrate its existence before and after the interval. They further state that establishing a specific interval would be inappropriate because the significance of the interval must be considered in light of the character of the group, its history, and the nature of the available evidence.
Finally, the regulations also note that experience has shown that historical evidence of tribal existence is often not available in clear, unambiguous packets relating to particular points in time. The department grappled with the issue of how much evidence is enough when it updated the regulations in 1994 and intentionally left key aspects of the criteria open to interpretation to accommodate the unique characteristics of individual petitions. Leaving key aspects open to interpretation increases the risk that the criteria may be applied inconsistently to different petitioners. To mitigate this risk, BIA uses precedents established in past decisions to provide guidance in interpreting key aspects in the criteria. However, the regulations and accompanying guidelines are silent regarding the role of precedent in making decisions or the circumstances that may cause deviation from precedent. Thus, petitioners, third parties, and future decisionmakers, who may want to consider precedents in past decisions, have difficulty understanding the basis for some decisions. Ultimately, BIA and the Assistant Secretary will still have to make difficult decisions about petitions when it is unclear whether a precedent applies or even exists. Because these circumstances require judgment on the part of the decisionmaker, public confidence in the BIA and the Assistant Secretary as key decisionmakers is extremely important. A lack of clear and transparent explanations for their decisions could cast doubt on the objectivity of the decisionmakers, making it difficult for parties on all sides to understand and accept decisions, regardless of the merit or direction of the decisions reached. 
Accordingly, in our November report, we recommended that the Secretary of the Interior direct the BIA to provide a clearer understanding of the basis used in recognition decisions by developing and using transparent guidelines that help interpret key aspects of the criteria and supporting evidence used in federal recognition decisions. The department, in commenting on a draft of this report, generally agreed with this recommendation. Because of limited resources, a lack of time frames, and ineffective procedures for providing information to interested third parties, the length of time needed to rule on petitions is substantial. The workload of the BIA staff assigned to evaluate recognition decisions has increased while resources have declined. There was a large influx of completed petitions ready to be reviewed in the mid-1990s. Of the 55 completed petitions that BIA has received since the inception of the regulatory process in 1978, 23 (or 42 percent) were submitted between 1993 and 1997 (see fig. 1). The chief of the branch responsible for evaluating petitions told us that, based solely on the historic rate at which BIA has issued final determinations, it could take 15 years to resolve all the currently completed petitions. In contrast, the regulations outline a process for evaluating a completed petition that should take about 2 years. Compounding the backlog of petitions awaiting evaluation is the increased burden of related administrative responsibilities that reduce the time available for BIA’s technical staff to evaluate petitions. Although they could not provide precise data, members of the staff told us that this burden has increased substantially over the years and estimate that they now spend up to 40 percent of their time fulfilling administrative responsibilities. In particular, there are substantial numbers of Freedom of Information Act (FOIA) requests related to petitions.
Also, petitioners and third parties frequently file requests for reconsideration of recognition decisions that need to be reviewed by the Interior Board of Indian Appeals, requiring the staff to prepare the record and response to issues referred to the Board. Finally, the regulatory process has been subject to an increasing number of lawsuits from dissatisfied parties, filed by petitioners who have completed the process and been denied recognition, as well as current petitioners who are dissatisfied with the amount of time it is taking to process their petitions. Staff represents the vast majority of resources used by BIA to evaluate petitions and perform related administrative duties. Despite the increased workload faced by the BIA’s technical staff, the available staff resources to complete the workload have decreased. The number of BIA staff members assigned to evaluate petitions peaked in 1993 at 17. However, in the last 5 years, the number of staff members has averaged less than 11, a decrease of more than 35 percent. In addition to the resources not keeping pace with workload, the recognition process also lacks effective procedures for addressing the workload in a timely manner. Although the regulations establish timelines for processing petitions that, if met, would result in a final decision in approximately 2 years, these timelines are routinely extended, either because of BIA resource constraints or at the request of petitioners and third parties (upon showing good cause). As a result, only 12 of the 32 petitions that BIA has finished reviewing were completed within 2 years or less, and all but 2 of the 13 petitions currently under review have already been under review for more than 2 years. While BIA may extend timelines for many reasons, it has no mechanism that balances the need for a thorough review of a petition with the need to complete the decision process. 
The decision process lacks effective time frames that create a sense of urgency to offset the desire to consider all information from all interested parties in the process. BIA recently dropped one mechanism for creating a sense of urgency. In fiscal year 2000, BIA dropped its long-term goal of reducing the number of petitions actively being considered from its annual performance plan because the addition of new petitions would make this goal impossible to achieve. The BIA has not replaced it with another more realistic goal, such as reducing the number of petitions on ready status or reducing the average time needed to process a petition once it is placed on active status. As third parties become more active in the recognition process—for example, initiating inquiries and providing information—the procedures for responding to their increased interest have not kept pace. Third parties told us that they wanted more detailed information earlier in the process so they could fully understand a petition and effectively comment on its merits. However, there are no procedures for regularly providing third parties with more detailed information. For example, while third parties are allowed to comment on the merits of a petition prior to a proposed finding, there is no mechanism to provide any information to third parties prior to the proposed finding. In contrast, petitioners are provided an opportunity to respond to any substantive comment received prior to the proposed finding. As a result, third parties are making FOIA requests for information on petitions much earlier in the process and often more than once in an attempt to obtain the latest documentation submitted. Since BIA has no procedures for efficiently responding to FOIA requests, staff members hired as historians, genealogists, and anthropologists are pressed into service to copy the voluminous records needed to respond to FOIA requests. 
In light of these problems, we recommended in our November report that the Secretary of the Interior direct the BIA to develop a strategy that identifies how to improve the responsiveness of the process for federal recognition. Such a strategy should include a systematic assessment of the resources available and needed that leads to development of a budget commensurate with workload. The department also generally agreed with this recommendation. In conclusion, the BIA’s recognition process was never intended to be the only way groups could receive federal recognition. Nevertheless, it was intended to provide the Department of the Interior with an objective and uniform approach by establishing specific criteria and a process for evaluating groups seeking federal recognition. It is also the only avenue to federal recognition that has established criteria and a public process for determining whether groups meet the criteria. However, weaknesses in the process have created uncertainty about the basis for recognition decisions, calling into question the objectivity of the process. Additionally, the amount of time it takes to make those decisions continues to frustrate petitioners and third parties, who have a great deal at stake in resolving tribal recognition cases. Without improvements that focus on fixing these problems, parties involved in tribal recognition may look outside of the regulatory process to the Congress or courts to resolve recognition issues, preventing the process from achieving its potential to provide a more uniform approach to tribal recognition. The result could be that the resolution of tribal recognition cases will have less to do with the attributes and qualities of a group as an independent political entity deserving a government-to-government relationship with the United States, and more to do with the resources that petitioners and third parties can marshal to develop successful political and legal strategies.
Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information, please contact Barry Hill at (202) 512-3841. Individuals making key contributions to this testimony and the report on which it was based are Robert Crystal, Charles Egan, Mark Gaffigan, Jeffery Malcolm, and John Yakaitis.
|
In 1978, the Bureau of Indian Affairs (BIA) established a regulatory process for recognizing tribes. The process requires tribes that are petitioning for recognition to submit evidence that they have continuously existed as an Indian tribe since historic times. Recognition establishes a formal government-to-government relationship between the United States and a tribe. The quasi-sovereign status created by this relationship exempts some tribal lands from most state and local laws and regulations, including those that regulate gambling. GAO found that the basis for BIA's tribal recognition decisions is not always clear. Although petitioning tribes must meet set criteria to be granted recognition, no guidance exists to clearly explain how to interpret key aspects of the criteria. This lack of guidance creates controversy and uncertainty for all parties about the basis for decisions. The recognition process is also hampered by limited resources; a lack of time frames; and ineffective procedures for providing information to interested third parties, such as local municipalities and other Indian tribes. As a result, the number of completed petitions waiting to be considered is growing. BIA estimates that it may take up to 15 years before all currently completed petitions are resolved; the process for evaluating a petition was supposed to take about two years. This testimony summarizes a November report (GAO-02-49).
|
CAHs are an outgrowth of the seven-state Essential Access Community Hospital/Rural Primary Care Hospital (EACH/RPCH) program established in 1989. The BBA replaced the EACH/RPCH program with the state-administered Rural Hospital Flexibility Program (the “Flex” Program), which includes the CAH designation. The reimbursement component of the Flex Program is the responsibility of CMS. The Flex Program also includes a grant program that supports hospital participation in the program as well as state emergency medical services systems (EMS), and is the responsibility of the FORHP within the Health Resources and Services Administration (HRSA). The CAH program allows eligible rural hospitals to receive Medicare payments based on their reasonable costs rather than under a PPS. Under the Medicare inpatient PPS, hospitals are generally paid a fixed amount per patient discharge, providing an incentive for hospitals to control their costs to stay under this fixed amount because they can retain the difference between the PPS payment and their costs. Under cost-based reimbursement, hospitals are reimbursed for their reasonable costs, which does not provide the same incentive to control costs, but benefits hospitals whose Medicare costs exceed their PPS payments. In addition to receiving cost-based payment for inpatient services to Medicare beneficiaries, CAHs receive cost-based payment from Medicare for skilled nursing care provided in their swing beds and for outpatient care. To become a CAH, a hospital must meet certain criteria with respect to its location, size, patient census, and patient length of stay (see figure 1). CAHs are also subject to different health and safety regulations, known as “conditions of participation,” from other acute care hospitals. Growth in the number of CAHs has been steady (see figure 2). There is a large concentration of CAHs in the central states, although 45 states had at least one CAH as of September 2002 (see figure 3).
Since the inception of the CAH program, two factors have been important in increasing the number of hospitals qualifying for the designation. First, the length-of-stay criterion was changed. Until 1999, patient stays at CAHs were limited to 4 days, after which patients would have to be transferred to another health care facility or discharged. In 1999, the Congress relaxed the criterion to require that CAHs keep their annual average length of stay to no more than 4 days. Second, states have widely utilized their authority to designate hospitals as “necessary providers,” thereby exempting such hospitals from the otherwise applicable CAH criterion that they be more than 35 miles from the nearest hospital. According to the Rural Hospital Flexibility Tracking Project (RHFTP), a little more than half of all CAHs had qualified for the CAH program through state designation rather than by meeting the mileage and location requirements, as of September 2002. Hospitals considering CAH conversion weigh numerous factors in their decision, including the impact on hospital finances and community reaction. Financial impact studies are commonly used to estimate how a hospital’s reimbursement for services would change under CAH status. The financial impact may change as Medicare reimbursements to hospitals change. For example, Medicare payment for hospital outpatient services shifted in 2000 from cost-based payment to a new PPS for outpatient services. Because CAHs are exempt from this PPS and continue to receive cost-based payment for outpatient services, potential CAHs may factor into their decision the impact of being paid reasonable costs, rather than a fixed PPS payment, for outpatient services. They may also consider the possible reaction from the community and from other health care providers to CAH conversion. Some communities have been reluctant to support a hospital’s conversion because they perceive it as the last step before closure.
In other cases, hospital officials reported that their physicians expressed concern that if a hospital became a CAH, they would occasionally be unable to admit patients to it because this would bring the CAH over the patient limit. Clinical research has indicated better outcomes for patients who are appropriately treated in inpatient psychiatric or rehabilitation facilities, such as DPUs, rather than in general acute or post-acute care settings. For example, one study concluded that elderly depressed patients who were treated in specialty psychiatric DPUs may have received better treatment for their depression than similar patients who were treated in general medical wards. Another study found better outcomes among stroke patients treated in rehabilitation facilities, such as DPUs, than those treated in nursing homes. As separate sections of hospitals, psychiatric and rehabilitation DPUs are subject to specific Medicare regulations regarding the types of patients they admit and the qualifications of their staff. Psychiatric DPUs may admit only patients whose condition requires inpatient hospital care and is described by a psychiatric principal diagnosis. Rehabilitation DPUs may treat only patients likely to benefit significantly from intensive therapy services, such as physical therapy, occupational therapy, or speech therapy. Both types of DPUs must provide a specified range of services and employ clinical staff with specialized training. The Congress has required that CMS develop PPSs for both inpatient rehabilitation and inpatient psychiatric providers, including DPUs, to replace the payment methodology established by the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA). Under TEFRA, providers that had been exempted from the inpatient PPS, including inpatient rehabilitation and psychiatric hospitals and DPUs, receive the lesser of either their average cost per discharge or a provider-specific target amount.
In 2002, a PPS was implemented for inpatient rehabilitation. Because a PPS for inpatient psychiatric providers has yet to be implemented, psychiatric DPUs continue to be paid under TEFRA. The financial incentives associated with TEFRA payments differ from those associated with cost-based payment. Under TEFRA, Medicare payments are capped by a provider’s target amount, giving hospitals an incentive to restrain costs. By contrast, hospitals such as CAHs, which are paid their reasonable costs, have less incentive to restrain costs because their payments can increase as their costs increase. Most existing CAHs prior to their conversion had more beds in fiscal year 1999 than CAHs are allowed. Most were likely able to reduce their bedsize to 15 (or 25 with swing beds) to become CAHs without adjusting their patient volume because their average patient census of 4.8 was generally well below the CAH limit of 15 (see table 1). Likewise, potential CAHs, on average, exceeded CAH bedsize limits in fiscal year 1999 and had a patient census well below 15. To meet the CAH limit, existing CAHs, on average, had to reduce their bedsize by less than potential CAHs would have had to if they had sought CAH status. Most existing CAHs prior to their conversion and potential CAHs were below the CAH length-of-stay limit. In fiscal year 1999, existing CAHs prior to their conversion generally experienced greater losses on their inpatient and outpatient Medicare services than did potential CAHs (see table 2), and therefore had greater financial incentive to seek conversion. A small majority, 55 percent, of existing CAHs experienced losses on inpatient Medicare services, while more than 60 percent of potential CAHs experienced gains. Nearly all hospitals in both groups experienced losses on their Medicare outpatient services. Across all revenue sources, existing CAHs prior to their conversion experienced a 0.3 percent median loss, while potential CAHs experienced a 1.8 percent median gain. 
The effective ban on CAHs operating DPUs may have contributed to the disparity between urban and rural areas in the availability of inpatient psychiatric and rehabilitation services in fiscal year 1999. Twenty-five existing CAHs had to close their DPU as part of becoming CAHs. Of the 93 potential CAHs that operated a DPU (one-seventh of all potential CAHs), about half lost money on their Medicare inpatient and outpatient services, giving them a financial incentive to convert. If, however, the other financial benefits associated with the DPU exceeded their combined losses on inpatient and outpatient services, these potential CAHs would have had a countervailing incentive to stay under the PPS, rather than close their DPU and convert. Some rural hospital administrators told us that, even when it was financially advantageous to seek CAH status, they were reluctant to close their DPU because it was needed to maintain access to psychiatric or rehabilitation services in the community they serve. While allowing hospitals to convert to CAH status and retain their DPU would alleviate this concern, extending cost-based reimbursement to DPUs operated by CAHs diminishes the incentives for efficiency that are inherent in PPS payments. If DPU patient stays and beds were counted against current CAH limits without any adjustment, nearly all potential CAHs with DPUs would have exceeded either the bedsize or length of stay limit in fiscal year 1999. The closure of 25 DPUs by hospitals that needed to relinquish their DPU as part of becoming a CAH may have contributed to the lower availability of inpatient psychiatric and rehabilitation services in rural areas. Inpatient psychiatric and rehabilitation providers are concentrated in urban areas, and DPUs are least common among smaller rural hospitals. 
Only 8 percent of rehabilitation beds and 17 percent of psychiatric beds were located in rural areas in fiscal year 1999, while about 25 percent of Medicare beneficiaries live in rural areas. In fiscal year 1999, 14 percent (93) of potential CAHs operated a DPU. By comparison, 37 percent of larger rural hospitals operated a DPU, and 53 percent of urban hospitals operated a DPU. DPUs may be less common in rural areas due to the challenge of finding the resources needed to open a DPU. Hospital representatives and officials from rural health organizations said the difficulty in finding the specialized staff required to operate a DPU likely prevents many small rural hospitals from opening a DPU. In fiscal year 1999, nearly half the potential CAHs with a DPU experienced net gains on their combined inpatient and outpatient payments for Medicare services (see table 3). These potential CAHs had a financial incentive to continue under the PPS because this allowed them to continue receiving Medicare payments that were higher than their costs, rather than being paid only their reasonable costs as a CAH. The 47 potential CAHs with DPUs that experienced losses on their combined inpatient and outpatient Medicare payments would more likely have a financial incentive to seek CAH status. Potential CAHs with DPUs can compare the financial benefits of CAH conversion to the benefits of keeping their DPUs. Some that suffered losses on their inpatient and outpatient Medicare payments may lack a financial incentive to become a CAH because DPU revenues help offset those losses. If the projected increase in revenue under cost-based payment that a hospital would receive as a CAH is lower than the loss of revenue from having to close its DPU, the hospital may choose not to convert to CAH status. Just over half of the DPUs operated by potential CAHs had net gains on their Medicare payments (see table 4). 
A DPU may also provide a financial benefit to the hospital because it enables the hospital to spread its fixed costs over more services. Several administrators of potential CAHs with a DPU whom we interviewed stated that their DPU had contributed positively to the hospital’s financial situation, providing a revenue source they would be reluctant to relinquish to gain CAH status. While hospitals report that the projected financial impact is generally a key factor in the decision about whether to become a CAH, some potential CAHs with DPUs also consider how local access to services would be affected if the DPU were closed. Some rural hospital administrators told us that, even when it was financially advantageous to seek CAH status, they were reluctant to close their DPU because they believed it was needed to maintain access to psychiatric or rehabilitation services in their community. Several hospital administrators and state health officials emphasized the need for patients to be near their family during treatment and the difficulty that some families would have if they had to travel outside their community to visit family members receiving treatment. Other administrators said that if their DPU closed, alternative sources for these services could be as much as 165 miles away. We were also told of difficulties in several states with referring psychiatric patients to hospitals because of a lack of available beds or because referral hospitals prefer not to take patients with significant behavioral issues or believe that psychiatric services should be provided in smaller community-based facilities. If potential CAHs were allowed to convert to CAH status while retaining their DPU, the payment methodology applied to the DPUs could remain unchanged or could be shifted to cost-based payment along with the acute care hospital services. 
Hospitals that have been able to keep their DPU costs below their Medicare payments under the current methodologies (rehabilitation PPS for rehabilitation DPUs or TEFRA payment for psychiatric DPUs) would likely prefer no change because they can continue to keep their net gains; hospitals that have DPU costs exceeding their current Medicare payments would likely prefer cost-based payment. If CAHs were allowed to have DPUs and the DPUs were shifted to cost-based payment, diminished incentives for efficiency could result in higher costs per case. Under cost-based reimbursement, a hospital can receive higher payments if its costs increase. Under the rehabilitation PPS or TEFRA methodologies currently applied to DPUs, their payments cannot exceed a predetermined amount, creating pressure on them to operate efficiently. If CAHs were allowed to operate DPUs and the DPU beds and patients’ length of stay were counted against the CAH limits, only one of the 93 potential CAHs with DPUs would have met both limits in fiscal year 1999. Among these 93 potential CAHs, the median bedsize of psychiatric DPUs was 11 and the median bedsize of rehabilitation DPUs was 13. If their DPU beds, acute care beds, and swing beds were added together, 88 would have exceeded the CAH bedsize limit. Similarly, psychiatric inpatient stays at these potential CAHs averaged 11.8 days, and rehabilitation DPU inpatient stays averaged 13.7 days, both significantly longer than the CAH limit of an annual average of 4 days. About 80 percent of the potential CAHs with DPUs exceeded the CAH length-of-stay limit when the DPU length of stay and acute care length of stay were counted together. Hospitals we studied commonly experienced at least a small seasonal increase in their patient census, most often during winter. 
Such increases can be an obstacle for some hospitals considering CAH conversion if they cause the hospital to exceed the CAH patient census limit of no more than 15 patients at any time, or the length-of-stay limit of an average of 4 days. We found 129 potential CAHs that likely would have been able to meet the patient census limit of 15 in 1999 if not for the seasonal increase in their patient census. About 40 percent of these 129 potential CAHs, however, had positive Medicare margins, meaning they would have little financial incentive to switch from the PPS to CAH cost-based payment. In contrast to the CAH patient census limit, the patient length of stay limit is an annual average, and gives CAHs the flexibility to occasionally keep some acute care patients longer than 4 days as long as the average remains below 4. Among hospitals we studied, seasonal fluctuations in patient volume were common. In 1999, over 80 percent of potential CAHs had an increase in their patient census averaging at least one additional patient per day during a 3-month period. To assess whether this finding is consistent with small and medium-size hospitals in general, we analyzed Medicare patient claims for 2,139 hospitals with an average census of no more than 50 patients and found that about 90 percent had an increase in their patient census averaging at least one additional patient per day during a 3-month period of 1999. For nearly three-quarters of potential CAHs, the patient volume increase in 1999 occurred during the winter. This pattern was consistent with reports from hospital officials that their patient census often increased during the winter due to a higher incidence of flu and pneumonia. The seasonal increase in patient census was greater for larger potential CAHs. 
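The 3-month-window comparison used above can be sketched in code. This is a minimal illustration only, not GAO's actual analysis (which worked from Medicare claims files); the function name and the monthly average-census input are assumptions for the example.

```python
def has_seasonal_increase(monthly_census, threshold=1.0):
    """Return True if some 3 consecutive months average at least
    `threshold` more patients per day than the other 9 months.

    monthly_census: 12 average daily census values (Jan-Dec).
    Months are treated as circular so a Dec-Feb winter peak is caught.
    """
    n = len(monthly_census)
    total = sum(monthly_census)
    for start in range(n):
        window = [monthly_census[(start + i) % n] for i in range(3)]
        window_avg = sum(window) / 3
        rest_avg = (total - sum(window)) / (n - 3)
        if window_avg - rest_avg >= threshold:
            return True
    return False
```

For instance, a hospital averaging 13 patients for nine months and 16 during a three-month winter peak would be flagged, while a hospital with a flat census would not.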
For example, potential CAHs with 41 to 60 beds averaged 2.8 patients more per day during their peak 3-month period, while potential CAHs with no more than 15 beds averaged 1.3 patients more per day during this period (see table 5). There were 129 potential CAHs that had at least a slight seasonal increase in 1999 that pushed them over the CAH limit of 15 acute care patients per day for some portion of the year. These 129 potential CAHs had an average daily patient census of about 13.2, with none having an annual average above 15. But these potential CAHs had an estimated average acute care patient census of 16.9 during their peak season (see table 6), nearly two patients per day higher than the CAH limit. About 40 percent of the 129 potential CAHs with seasonal increases that pushed them over the CAH patient census limit had net gains on combined inpatient and outpatient payments for Medicare services (see table 7). These potential CAHs would have a financial incentive to remain under the PPS, where they can keep the difference between payments and their costs, rather than convert to CAH status, where they would be paid only their reasonable costs. Seasonal fluctuations in patient length of stay were also common among hospitals we studied. Among the 2,139 hospitals with a patient census of no more than 50, about three-fourths had a seasonal increase in their Medicare length of stay of at least one-third of a day. Sixty-five potential CAHs had an average Medicare patient length of stay below 4 days (3.8 days) for 9 months of fiscal year 1999, but their average length of stay during the other 3 months was high enough (4.8 days) to push their Medicare annual average over the 4-day CAH limit, to 4.2 (see table 8). Among the 620 existing CAHs, 60 had an annual average length of stay greater than 4.2 days before they converted. 
These existing CAHs have been subject to the 4-day limit since they became CAHs, suggesting that potential CAHs with an annual average of 4.2 days would be able to remain under the limit if they converted. The relaxation of the CAH length-of-stay limit in 1999 from an absolute limit of 4 days to an annual average of 4 days has made it easier to meet because hospitals are able to keep some patients for a longer period, as long as the hospital’s annual average remains below the limit. Examples of how a hospital can manage its length of stay during the course of a year include discharging longer-stay patients to skilled nursing care in the hospital’s swing beds or transferring them to referral facilities. Administrative staff of one rural hospital considering CAH conversion reported that its average length of stay dropped over 3 years from 5.3 to 3.7 days. The decline, in their opinion, was due to factors such as utilization review, emphasis on community-based services, increased use of post-acute care, and education of staff. The ineligibility of hospitals with DPUs or with seasonal increases in patient stays that push them over a CAH limit impedes CAH conversion for some hospitals that might otherwise be able to become CAHs. The ineligibility of hospitals with DPUs may result in the loss of some rural DPU services if potential CAHs close their DPU as part of becoming a CAH. Hospitals seeking CAH status may occasionally need to transfer patients to stay under the CAH limit of 15 acute care patients if they otherwise periodically exceed 15 due to seasonal increases. Since inpatient rehabilitation and psychiatric services are less prevalent in rural areas, enabling rural DPUs to continue operating can help preserve the availability of services. In fiscal year 1999, 25 hospitals ceased operation of their DPU as part of becoming a CAH, and beneficiaries in the affected communities have lost a local provider of these services. 
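The annual-average form of the length-of-stay limit discussed above can be illustrated with a short sketch. The function and the mix of stays are hypothetical, not drawn from the report's data; the point is only that, under an annual average, a few long stays can be offset by many short ones while the hospital remains under the 4-day limit.

```python
def annual_average_los(stays):
    """Annual average length of stay: total inpatient days divided by
    discharges.

    stays: list of each discharged patient's length of stay in days.
    Because the CAH limit is an annual average rather than a
    per-patient cap, occasional long stays need not breach the limit.
    """
    return sum(stays) / len(stays)

# Hypothetical mix: forty 3-day stays and five 9-day stays.
stays = [3] * 40 + [9] * 5
avg = annual_average_los(stays)   # (120 + 45) / 45, about 3.7 days
meets_cah_limit = avg <= 4        # True despite the five 9-day stays
```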
Any of the 93 potential CAHs with a DPU may also relinquish it to convert to CAH status if hospital officials conclude that shifting to CAHs’ cost-based payment is the best way to maximize revenue and preserve the other services they offer. Among these 93 potential CAHs, 47 had net losses on Medicare services in fiscal year 1999, indicating they might benefit from CAH conversion. Because it is generally difficult for rural hospitals to staff and maintain a DPU, it is unlikely that allowing CAHs to operate DPUs would result in many existing CAHs opening new DPUs, as long as the DPUs continue to be paid under PPS and TEFRA. If DPUs operated by CAHs were paid their reasonable costs, however, DPUs would have less financial incentive to operate efficiently. The experience of rural DPUs under the new rehabilitation PPS or the forthcoming psychiatric PPS may provide information about whether Medicare payments under these PPSs will be appropriate for rural DPUs. If CAHs were allowed to operate DPUs, they would generally not be able to stay under the limits on bedsize, length of stay, and patient census if the DPU beds and patient stays were counted against current limits. Relaxing the limits for CAHs with DPUs or not counting the DPU beds or patient stays for purposes of determining whether the CAH meets the limits would enable some or all potential CAHs with DPUs to convert to CAH status. Relaxing the CAH census limit to an annual average of 15 acute care patients rather than an absolute limit of 15 would accommodate the 129 potential CAHs that exceeded the current limit due to a seasonal increase as they all had an annual average census below 15. Such a change would provide CAHs greater flexibility in their management of patient census, just as the relaxation of the length of stay limit in 1999 to an annual average of 4 days provided CAHs greater flexibility in their management of patients’ length of stay. 
CAHs would then not be required to transfer patients whenever they would otherwise exceed the limit, as long as they manage their census so that their annual average is below the limit. It would not be necessary to increase the number of acute care beds CAHs are allowed to maintain in order to implement this relaxation of the patient census limit. More than three-quarters of existing CAHs and potential CAHs have swing beds, which they could use to accommodate additional acute care patients beyond 15, since the limit is 25 beds for CAHs with acute and swing beds. Among the 129 potential CAHs, about 60 percent had net losses on Medicare services in fiscal year 1999, indicating they might benefit from CAH conversion, while the 40 percent with net gains would less likely have the financial incentive to convert. Many potential CAHs that decide to seek CAH status would need to adjust their bedsize or length of stay to become CAHs, just as about 60 percent of existing CAHs needed to reduce their bedsize and 14 percent needed to reduce their length of stay in fiscal year 1999. CAH status and the cost-based reimbursement that goes with it have proven to be attractive enough that hospitals have been willing to make the necessary adjustments. We suggest that the Congress may wish to consider allowing hospitals with DPUs to convert to CAH status while making allowances for DPU beds, patients, and lengths of stay when determining CAH eligibility, and that CAH-affiliated DPUs be paid under the same formulas as other inpatient psychiatric or rehabilitation providers. We also suggest that the Congress may wish to consider changing the CAH limit on acute care patient census from an absolute limit of 15 acute care patients to an annual average of 15 to give CAHs greater flexibility in the management of their patient census. 
In commenting on a draft of this report, the Department of Health and Human Services said that these modifications to CAH eligibility criteria would provide the needed flexibility for some additional facilities to consider conversion to CAH status. It stated that the key is to provide the proper incentives for facilities to convert when they meet the statutory requirements and when it is the right thing to do for a particular community. HHS suggested that we further emphasize several issues regarding CAH eligibility and payment. (See app. II for the full text of HHS’s written comments.) HHS pointed out that it is important to consider that the financial incentives for efficiency under TEFRA payments to psychiatric DPUs or rehabilitation PPS payments to rehabilitation DPUs would not be preserved if CAHs were able to claim cost-based reimbursement for their DPUs, and therefore HHS said such DPUs should continue to be paid separately from the CAH. The department also emphasized that CAHs are required to meet more limited health and safety standards compared to other acute care hospitals and raised concerns that any DPUs operated by CAHs would likewise be subject to more limited health and safety standards unless the Congress acted to maintain standards currently in place for DPUs. Furthermore, HHS suggested that we analyze the extent to which inpatient rehabilitation and psychiatric services are available to rural residents beyond their local hospitals in order to determine whether such services are more or less accessible to rural residents than other specialty services. The department expressed concern that non-CAH hospitals that are within close proximity to CAHs may perceive unfair treatment if such CAHs are allowed to operate DPUs. Finally, in commenting on the relaxation of the CAH acute care patient census limit to an annual average of 15, HHS proposed that we consider suggesting corresponding changes to the CAH bedsize limit. 
As we noted in the draft report, incentives for efficiency that exist under the current payment systems for inpatient psychiatric and rehabilitation services would not be preserved under cost-based reimbursement. We revised the matters for congressional consideration to specifically suggest that CAH-affiliated DPUs be paid under the same formulas as other inpatient psychiatric or rehabilitation providers. We also agree with HHS that there are differences in conditions of participation between hospitals and CAHs and that appropriate health and safety standards should be maintained for CAH-affiliated DPUs, and we modified the report accordingly. However, determining what health and safety standards should be applied to the DPUs of CAHs was beyond the scope of this report. While we noted differences in the availability of inpatient rehabilitation and psychiatric services between rural and urban areas in the draft report, measuring in detail the level of access rural residents have to various specialty services was beyond the scope of this report. We believe that the close proximity of non-CAH hospitals to CAHs with DPUs would only present a fairness issue if such CAH-affiliated DPUs are paid cost-based reimbursement or if they are subject to less stringent regulations. If such DPUs operate under the same payment methodologies and regulations as other DPUs, this would not be an issue. A detailed examination of the levels of competition between CAH and non-CAH hospitals was beyond the scope of this report. We clarified in the report that we are not suggesting any changes to the CAH limits of 15 acute care beds or 25 total beds when swing beds are included, since most CAHs have swing beds that could be used when the acute care patient census exceeds 15. HHS also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and interested congressional committees. 
We will also make copies available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please call me at (202) 512-7119. Other major contributors are listed in appendix III. To identify potential Critical Access Hospitals (CAHs), we selected rural, non-CAH hospitals with an annual average patient census of 15 or fewer acute care patients, based on patient census figures reported in fiscal year 1999 Medicare cost reports. Any hospital that had converted to CAH status as of January 1, 2003, was excluded from the list of potential CAHs. We defined potential CAHs based on their annual average census, rather than by bedsize, because average census better represents the bed capacity a hospital would need to support its current demand for services. If potential CAHs have more beds than necessary to meet their patient demand, they can decertify beds in order to meet CAH eligibility criteria. Our inclusion of hospitals with an average census up to 15 is likely a high estimate of the number of potential CAHs. Hospitals with an annual average of 15 acute care patients per day may need more than 15 acute care beds to accommodate variations in their patient census that periodically cause them to exceed 15. From the resulting list of 683 potential CAHs, we identified hospitals operating rehabilitation or psychiatric distinct part units (DPUs), as well as those with seasonal variation in patient census or length of stay that caused them to exceed CAH limits. For our analysis of seasonal variation in patient census, we used the volume of Medicare patients as a proxy for total patient volume because national data on day-to-day variation in inpatient admissions were only available for Medicare patients. 
We calculated from hospital cost reports the Medicare share of each hospital’s total acute care patient volume, and for each hospital multiplied the CAH limit of 15 acute care patients by its Medicare share in order to define a comparable limit based on Medicare patient stays. For example, if a hospital’s Medicare share of patients was 67 percent in fiscal year 1999, then a Medicare census of about 10 acute care patients was considered to be equivalent to a total census of 15 acute care patients. Using Medicare inpatient claims data for 1999, we defined seasonal variation in daily census as having a period of 3 consecutive months with an average census greater than the estimated limit, with the remaining 9 months’ census averaging below the estimated limit. We identified 129 potential CAHs as having a seasonal increase that caused them to exceed the limit for a 3-month period, while staying under for the remaining 9 months. To estimate total patient census for these hospitals for each season, we multiplied their Medicare census by their ratio of total patients to Medicare patients. We defined seasonal variation in length of stay as having a period of 3 consecutive months with an average Medicare length of stay greater than 4 days with an average for the remaining 9 months of less than 4 days. In addition, we identified only those hospitals for which their seasonal increase in length of stay caused them to exceed the CAH limit of an average of 4 days. Because we used Medicare utilization to estimate hospitals’ total patient utilization for each season, the hospitals we identified as having seasonal variation that causes them to exceed CAH limits may not be precisely the same set of hospitals that would have been identified if claims data for all patients had been available. Rather, our analysis provides an estimate of the proportion of potential CAHs so affected. 
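The Medicare-share scaling just described can be sketched as follows. The function names and the monthly Medicare-census input are illustrative assumptions rather than GAO's actual code; the 67 percent share example comes from the text above.

```python
def medicare_equivalent_limit(medicare_share, cah_limit=15):
    """Scale the 15-acute-care-patient CAH limit by a hospital's
    Medicare share of patient volume, so Medicare-only claims data
    can stand in for total census."""
    return cah_limit * medicare_share

def exceeds_limit_seasonally(monthly_medicare_census, medicare_share):
    """True if some 3 consecutive months average above the
    Medicare-equivalent limit while the other 9 months average
    below it (months treated as circular)."""
    limit = medicare_equivalent_limit(medicare_share)
    n = len(monthly_medicare_census)
    for start in range(n):
        window = [monthly_medicare_census[(start + i) % n] for i in range(3)]
        rest = [monthly_medicare_census[(start + i) % n] for i in range(3, n)]
        if sum(window) / 3 > limit and sum(rest) / (n - 3) < limit:
            return True
    return False

# Worked example from the text: with a 67 percent Medicare share, a
# Medicare census of about 10 is equivalent to a total census of 15.
limit = medicare_equivalent_limit(0.67)
```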
By broadly defining seasonal variation, we captured all the hospitals that have census or length of stay fluctuations around the CAH limits, regardless of the magnitude of the fluctuation. We calculated Medicare margins and total facility margins using fiscal year 1999 Medicare hospital cost report data, using methods developed jointly by the Centers for Medicare & Medicaid Services (CMS) Office of the Actuary and the Medicare Payment Advisory Commission. The reported median margins are hospital-weighted, meaning that each hospital counts equally in the calculation of the median, regardless of differences in hospital size or total revenues. We interviewed officials at CMS, at the Federal Office of Rural Health Policy (FORHP), and state staff administering Flex Program grants in 11 states (table 9). To get a comprehensive perspective of how current and potential CAHs are affected by CAH eligibility criteria, we also conducted an e-mail survey of all state CAH coordinators, and received e-mail responses from or directly interviewed 42 of the 47. In addition, we interviewed researchers with the Rural Hospital Flexibility Tracking Project, an evaluation of the Flex Program funded by the FORHP. We interviewed administrators of 24 CAHs and potential CAHs across 10 states, and made site visits to 7 of these hospitals in 3 states. These 10 states were selected based on having significant CAH enrollment or potential enrollment, and representing different regions of the country. Jean Chung, Chris DeMars, Michael Rose, Margaret Smith, and Kara Sokol made key contributions to this report.
|
Critical Access Hospitals (CAHs) are small rural hospitals that receive payment for their reasonable costs of providing inpatient and outpatient services to Medicare beneficiaries, rather than being paid fixed amounts under Medicare's prospective payment systems. Between fiscal years 1997 and 2002, 681 hospitals became CAHs. In the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000, GAO was directed to examine requirements for CAH eligibility, including the ban on inpatient psychiatric or rehabilitation distinct part units (DPUs) and the limit on patient census, and to make recommendations on related program changes. Using fiscal year 1999 hospital cost report data, GAO identified 683 rural hospitals as "potential CAHs" based on their having an annual average of no more than 15 acute care patients per day. About 14 percent (93) of these potential CAHs operated an inpatient psychiatric or rehabilitation DPU, which they would have to close to convert to CAH status. Among existing CAHs, 25 previously operated a DPU but had to close it as part of becoming a CAH. Among the potential CAHs that operated a DPU, about half had a net loss on Medicare services, indicating they might benefit from CAH conversion. Officials in some hospitals expressed a reluctance to close their DPU, even if conversion would benefit the hospital financially, as they believe the DPU maintains the availability of services in their community. Because inpatient rehabilitation and psychiatric services are disproportionately located in urban areas, even a small number of rural DPU closures may exacerbate any disparities in the availability of these services. Using 1999 Medicare claims data, GAO found 129 potential CAHs that likely would have been able to meet the CAH census limit of no more than 15 acute care patients at any given time if not for a seasonal increase in their patient census. 
Seasonal increases in patient census were common among the hospitals GAO studied, generally occurring during the winter flu and pneumonia season. For most potential CAHs, their patient census was typically low enough that a small seasonal increase did not cause them to exceed CAH limits. The 129 potential CAHs that would have had difficulty staying under the CAH limit due to seasonal variation could have accommodated their patient volume, and had greater flexibility in managing their patient census, if the CAH census limit were changed from an absolute limit of 15 patients per day to an annual average of 15 patients.
|
As part of the Recovery Act’s State Fiscal Stabilization Fund (SFSF), Congress required Education to make grants to states that reform their education systems. Education subsequently created the RTT grant fund and gave states the opportunity to compete for grants based on reforms specified in the act: 1. recruiting, developing, rewarding, and retaining effective teachers and principals, especially where they are needed most; 2. turning around states’ lowest-achieving schools, which can include interventions such as replacing school staff, converting the school into a charter school, or closing the school; 3. building data systems that measure student growth and success and inform teachers and principals about how they can improve instruction; and 4. adopting standards and assessments that prepare students to succeed in college and the workplace and to compete in the global economy. Based mostly on these reform areas, Education identified 19 primary criteria—such as adopting common content standards or using performance data to improve teacher effectiveness—to guide the selection of states to receive the grants. Education divided the criteria into two groups: (1) “reform conditions criteria,” referring to the state’s history of and current status in implementing reforms and (2) “reform plan criteria,” referring to the state’s plans to implement new reforms. States were required to provide a narrative response for each criterion and provide performance measures and other information for selected criteria. The applications also had to include budgets and timelines for implementing certain proposed reform efforts. In short, states were to provide information not only on the extent of their experiences implementing reforms in these areas, but also on their plans for moving forward. 
In addition, states could demonstrate that a sufficient number of their school districts were committed to participating in their RTT reform plans by having a memorandum of understanding signed by district superintendents, school board presidents, and local union representatives. The Recovery Act requires that districts in each grantee state must receive at least 50 percent of the state’s total grant, and, according to Education, only participating districts receive these funds. States could also describe how they would work with participating districts to use RTT funds to improve student outcomes, such as increasing the rates at which students who graduate from high school are prepared for college and careers. See appendix III for more information on the criteria used to help select states for grant awards. Education conducted the RTT grant competition in two phases. Education issued proposed requirements for the RTT grant fund in July 2009, and in November 2009, the department issued final requirements and a notice inviting state governors to apply for Phase 1 of the grant. For a state to have been eligible to receive an RTT grant, Education must have previously approved the state’s applications in both rounds of SFSF grant awards. In addition, at the time they submitted their RTT applications, states could not have any legal, statutory, or regulatory barriers to linking data on student achievement or growth to teachers and principals for evaluation purposes. States had the option to apply in either phase of the competition but were only able to reapply in Phase 2 if they did not receive a grant in Phase 1. Forty-one states applied for RTT funds in Phase 1 of the competition, and all applications were reviewed and scored by external reviewers using Education’s grant award criteria. Sixteen states passed the initial review and were deemed “finalists” for the grants. 
In March 2010, Education announced that Delaware and Tennessee would receive grants of approximately $100 million and $500 million, respectively. Education posted all Phase 1 applications and reviewers’ scores and comments on its Web site. In April 2010, Education issued a notice inviting applications for Phase 2 of the RTT grant competition, and in August, Education announced that 10 states received Phase 2 RTT grants ranging from $75 million to $700 million. (Education was required to award all RTT grant funds by September 30, 2010.) The size of each state’s award was based in part on the size of the state, among other factors. Table 1 lists RTT grantees and their award amounts. As in Phase 1, all applications and reviewers’ scores and comments were posted on Education’s Web site. Following Education’s announcement of grant recipients, states were given access to 12.5 percent of their award. This amount is approximately equal to the state portion of the first year grant amount for state-level activities only. To receive the rest of their grant funds, states had to submit, and the department had to approve, documents known as scopes of work, which were more streamlined implementation plans that updated and aligned timelines and budgets in the states’ approved applications. Education also required states to submit scopes of work from each of their participating school districts 90 days after the grants were awarded. Education reviewed and approved the state scopes of work and also reviewed the extent to which district scopes of work aligned with their respective state’s plans. Education granted states access to grant funds on a rolling basis as they approved their key documents. (See fig. 1 for a timeline of key RTT grant activities to date.) Grantee states must meet additional requirements throughout the 4-year RTT grant period. 
Grantees must obligate all funds by the end of their 4-year grant period and must liquidate all obligations no later than 90 days after their grant term ends. Education, however, may grant extensions for states beyond the 90 days on a case-by-case basis. Any funds not obligated and liquidated by September 30, 2015, will revert to the U.S. Treasury, according to Education officials. Also, Education required RTT grantee states, school districts, and schools to identify and share promising practices—with the federal government and the public—that result from implementing RTT projects. This requirement includes making RTT data available to stakeholders and researchers and publicizing the results of any voluntary evaluations they conduct of their funded activities. Education’s policy is to monitor grantee states to ensure they meet their goals, timelines, budgets, and annual targets, and fulfill other applicable requirements. According to Education officials, the department’s monitoring plan for states emphasizes program outcomes and quality of implementation, while also ensuring compliance with RTT program requirements. They said the monitoring process for RTT grantees builds on the process that the department uses to monitor all discretionary grants. This process includes, among other things, (1) establishing working partnerships with grantees in order to effectively administer and monitor awards, (2) reviewing and approving administrative changes to grants, (3) monitoring projects for performance and financial compliance, (4) providing technical assistance and feedback to grantees on their progress, and (5) reviewing final outcomes and disseminating information about successful results. In addition, Education requires states to monitor how school districts use RTT funds. Education’s Institute of Education Sciences (IES) is conducting a series of national evaluations of RTT state grantees as part of its evaluation of programs funded under the Recovery Act. 
In September 2010, IES awarded two contracts to evaluate RTT implementation, outcomes, and impacts on student achievement. One evaluation will examine multiple Recovery Act programs, including RTT, and the other evaluation will focus on RTT and the School Improvement Grants program. Several briefs and reports are expected from these studies and, according to Education, the first one may be available in the summer of 2011. Officials in 6 of the states we interviewed—including 2 states that received an RTT grant and 4 states that did not receive one—reported making policy changes to reform their education systems in order to be more competitive for RTT. Those policy changes included new state legislation and formal decisions made by executive branch entities, such as the governor or state board of education (see table 2). For example, New York officials told us that their state enacted several new education reform laws to be competitive for RTT, including a law that allows school districts to partner with state-approved organizations to manage their lowest-achieving schools. California officials also told us that their state passed several laws to be competitive for RTT. California’s Governor called a special session of the legislature, during which it passed a variety of laws—such as adopting the Common Core State Standards and repealing an existing law that prohibited the use of student achievement data in decisions such as setting a teacher’s pay or deciding whether a teacher should be promoted. In contrast, officials in the other 14 states we interviewed said that their states made education policy changes during the RTT application period, but those changes were not made specifically to be competitive for an RTT grant. State officials explained that the changes their state made reflected the culmination of education reform efforts that began prior to the RTT competition. 
For example, Ohio enacted legislation in 2009 that required the state to set more challenging statewide academic standards, created new ways for teachers to earn their teaching licenses, and required college readiness examinations for high school students. Ohio officials said that the legislation was introduced before RTT was announced and was not an action that Ohio took to be competitive for the grant. However, they also told us that RTT’s alignment with existing state policies influenced their decision to apply for the grant. Arizona officials told us that their state enacted legislation in 2010 that required a variety of changes to its K-12 education system. These changes included developing a new teacher evaluation system based on growth in student achievement and establishing a commission to set guidelines for student data collection and reporting. Arizona officials said these legislative changes would have been made regardless of RTT. In addition to making policy changes, officials in all 20 states we interviewed said they conducted outreach to a variety of stakeholders—including school district officials, state legislators, and representatives from the business community—to build support for the state’s RTT application. To demonstrate a state’s ability to implement reforms statewide, the RTT application allowed states to submit signed memoranda from school districts that agreed with the state’s reform plans. Officials in 10 states—4 grantee states and 6 nongrantee states—told us they made significant efforts to secure the participation of their school districts. For example, officials in Ohio—a state with over 1,000 school districts (including more than 300 charter school districts)—said they met with district leadership, traveled to districts for in-person meetings, and attended teacher union meetings and training sessions on RTT to build consensus around the reforms. 
In addition, officials in all 20 states we interviewed told us they held meetings with education stakeholder groups, such as state legislators and members of the business community, to discuss the state’s education reform plan and stakeholder roles in it. States received letters of support from many organizations and state legislators for their applications. For example, Pennsylvania reported receiving over 270 letters of support for its Phase 2 RTT application from a wide variety of individuals and groups, including some elected officials, teacher unions, and businesses. Officials in the 20 states we interviewed also told us that applying for RTT required a significant amount of time and effort. Many officials we interviewed estimated spending thousands of hours to prepare the RTT application; however, they generally did not track the total costs associated with their efforts. One state official estimated that her state spent at least 4,000 hours preparing its RTT application. Also, all 20 states we interviewed received grants to hire consultants who helped prepare the RTT applications. For example, the Bill and Melinda Gates Foundation reported funding technical assistance providers who assisted 25 states in developing their RTT applications. Each of these 25 states, including 14 of the 20 we interviewed, received consulting services worth $250,000 with these funds. With grants such as these, states hired consultants who provided a range of services, including drafting material for the application and conducting background research and analysis. State officials told us that consulting firms received between $75,000 and $620,000 for their services. According to Education officials, states commonly receive external support to apply for federal grants, such as the Teacher Incentive Fund, in an effort to leverage their resources more effectively. 
However, Education officials also explained that the RTT competition was more comprehensive in scope than other federal discretionary grants, which may have prompted states to seek out a greater level of external support. Many state officials reported that high-level staff from multiple state offices helped prepare the application. For example, officials in North Carolina told us that the State Superintendent of Public Instruction and the Chairman of the State Board of Education led the team that wrote the state’s application and that the Governor presented part of the state’s application to a group of peer reviewers during the application review process. While state officials told us that they had to invest a significant amount of time and effort in applying for RTT, several officials in both grantee and nongrantee states also noted that their state benefited from the collaboration and comprehensive planning that the RTT application process required. Education awarded over $3.9 billion in RTT grants to states that implement reforms in four areas: (1) developing effective teachers and leaders, (2) improving the lowest-achieving schools, (3) expanding student data systems, and (4) enhancing standards and assessments. States collectively plan to use the largest share of their $2 billion in RTT funds—nearly one-third, or $654.1 million—to improve the effectiveness of teachers and leaders. States plan to use the next largest share—nearly one-quarter, or $478.5 million—to turn around their lowest-achieving schools. The remaining funds will be spent in multiple areas in their reform plans. Officials from several states said that RTT funds will allow them to implement reforms more quickly, to serve a greater number of students, or to leverage related federal grants, such as those awarded through the Statewide Longitudinal Data Systems Grant program, to implement their reforms. 
See figure 2 for the distribution of RTT funds between states and school districts and, for states, by primary reform area. Several states and selected school districts plan to implement one or more of three activities under the teachers and leaders reform area: (1) training teachers to use student performance data to improve their instruction, (2) developing systems to evaluate teacher and principal effectiveness, and (3) providing professional development to improve the skills of incoming and current teachers and school leaders. The following examples illustrate planned uses of RTT funds for these activities: Training teachers to use student performance data to improve instruction. Delaware plans to spend about $7 million to hire 29 data coaches to work with small groups of teachers to improve instruction using student performance data. These teachers will use technology-based tools called instructional improvement systems to guide them through this process. Under Delaware’s new academic assessment system, teachers will be able to make instructional changes with real-time data from student assessments that will be administered several times a year. Delaware state officials said that RTT will provide funds for data coaches in schools with limited numbers of high-need students and that they would not be able to provide these resources without the funds. (Prior to RTT, the state had been using data coaches in schools with the greatest number of high-need students.) According to Delaware state officials, the first five coaches were scheduled to start working with teachers as a pilot program in March 2011 in five districts, and by July 2011 each school in the state will have access to a data coach for two full school years. After 2 years, state officials expect that data coaches will have built enough capacity in each school district, so that district leaders can independently provide support to teachers in using the data. 
Developing systems to evaluate teacher and principal effectiveness. New York plans to spend approximately $2.6 million to develop and adopt a new value-added student growth model, which will measure annual changes in individual student academic performance and tie the performance to teacher evaluations. According to state officials and their RTT application, a new state law requires all classroom teachers and principals to be evaluated based in part on student data, which will include assessment results and other measures of achievement. The law also establishes annual teacher evaluations as a significant factor for employment decisions such as promotion and retention. Providing professional development to improve the skills of incoming and current teachers and school leaders. North Carolina plans to spend approximately $37 million on professional development. The state plans to work with contractors with expertise in professional development and information technology to develop, maintain, and support Web-based training on transitioning to the new standards, analyzing student data, and using an instructional improvement system. North Carolina officials plan to develop training in the coming months and complete it by October 2013. According to North Carolina state officials, Web-based training will eventually be available in every school district and will help ensure that professional development materials are consistent. These officials told us that without RTT funds, they would not have been able to provide this training in every district. In addition, the state plans to spend $18.6 million to create Regional Leadership Academies that, according to North Carolina state officials, are a major part of their professional development plan. These academies will recruit and prepare principals to serve in and improve the state’s lowest-achieving schools. 
Several states plan to use RTT funds to give the state more authority to turn around their lowest-achieving schools, provide additional resources to those schools, or both. In particular, officials we spoke with in Tennessee are creating a statewide school district (governed by the state), and officials in Delaware, Massachusetts, and New York are working with external partners to improve their lowest-achieving schools. The states plan to provide these districts with additional resources and more flexibility in how they operate. For example, Tennessee plans to use approximately $45.6 million to create a new entity known as the “Achievement School District” to improve the state’s persistently lowest-achieving schools. According to the state’s application, to be selected for the new state-run district, schools must (1) be persistently low-achieving, as defined by the state, and (2) have attempted to restructure for at least 1 year in accordance with the state’s accountability plan under ESEA. The state will remove selected schools from governance by their home districts and appoint a district superintendent to oversee the schools. Also, Tennessee will work with consultants to determine which one of the four intervention models outlined in the RTT application—turnaround, restart, closure, or transformation—will be applied to each school in the Achievement School District in the 2011-2012 school year and to help implement the selected models. One Tennessee state official said that although the state would have created the Achievement School District without RTT funding, RTT accelerated the implementation of this reform effort. 
The data system will allow the state to conduct analyses on topics such as K-12 educational readiness and remediation and to provide this information to policymakers. The data system will also allow Maryland state officials to study key research and policy issues, such as the effect of the prekindergarten through 12th grade curriculum in preparing students for higher education, and the effectiveness of higher education in preparing students for careers after college. Maryland state officials told us they are using a combination of contractors and additional staff to implement their data projects, as well as to ensure their long-term sustainability. Several states plan to implement activities under the standards and assessments reform area to support improvements in classroom instruction. The states will (1) train teachers on the Common Core State Standards and develop curricula that are aligned with these standards, (2) develop assessments to measure instructional improvement and evaluate student knowledge and skills throughout the year, or both. The following examples illustrate planned uses of RTT funds for these activities: Training on Common Core State Standards and developing related curricula. Rhode Island plans to spend $5 million to provide professional development to teachers and principals to ensure that they understand the newly adopted common standards and how standards, curriculum, and assessments align with one another. Specifically, during the summers of 2011 and 2012, state officials plan to train 85 percent of the core teachers in urban districts and selected teachers in nonurban districts. In addition, some teachers in selected school districts, especially those with diverse student populations, will learn to develop activities that align with the common standards and use them in their schools. State officials told us that teachers will be more likely to use the assessment activities if the teachers are involved in the activities’ design. 
Developing assessments to improve instruction and to evaluate student knowledge and skills throughout the year. Florida plans to spend approximately $81.5 million to develop and use assessments to guide improvements in reading and mathematics instruction and to evaluate student knowledge and skills throughout the year in multiple content areas. The goals of these assessments are to enhance student learning and support the transition to more rigorous K-12 standards that build toward college and career readiness. Florida state officials said this project may also help prepare the state and districts to use assessments being developed as part of the Partnership for Assessment of Readiness for College and Careers. In addition to our interviews with grantee states and review of their plans, we interviewed officials in 8 selected states that applied for—but did not receive—RTT grants to find out whether they plan to continue their reform efforts. Officials from the 8 nongrantee states we interviewed expect to implement some of their planned reforms, even though they did not receive RTT grants; however, they told us that implementation would be slower than if they had received an RTT award and would involve using other funds: Officials in 5 of the nongrantee states reported moving ahead with plans to implement teacher evaluation systems, but at a different scale or pace than stated in their RTT applications. For example, officials in California decided to allow districts to implement the new teacher evaluations on a discretionary basis rather than implementing the evaluations statewide. Officials in Illinois told us they are moving ahead with a requirement for districts to include student academic growth in teacher evaluations. However, they noted that if the state had received the RTT grant, they would have accelerated the implementation of that requirement by two to three school years. 
Officials in all 8 nongrantee states we interviewed reported having to scale back or delay plans to expand state data systems, particularly those designed to provide teachers with real-time assessment data on students. For example, officials in Maine reported they are developing assessments that teachers can use to improve instruction, but without RTT funds, the assessments will not be developed as quickly. Officials in the 8 nongrantee states we interviewed told us that they still plan to implement the Common Core State Standards, but officials in 6 nongrantee states mentioned having to scale back plans to offer professional development supporting this transition. State officials in the 8 nongrantee states said they planned to implement selected reforms indicated in their RTT applications, although with a combination of other federal, state, local, and private funds. For example, a Louisiana official said the state will seek private funds to help school districts recruit new teachers and principals, as well as retain and train effective teachers and principals, particularly in the lowest-achieving schools. Officials in 9 of the 12 grantee states reported facing a variety of challenges—such as difficulty identifying and hiring qualified staff and complying with state procedures for awarding contracts—that led to several implementation delays. State officials in Massachusetts, New York, North Carolina, and Ohio encountered difficulties hiring qualified personnel to administer RTT projects. For example, officials from Ohio said they had difficulty hiring qualified people for their state-level RTT positions. They explained that when Education approved their RTT grant application in September 2010, many of the most qualified candidates were already employed by school districts. 
Ohio officials added that many individuals with the skills and abilities to manage RTT activities and projects can earn higher salaries in some school districts than they can working for the state. In addition, officials in Florida, New York, and Ohio told us they encountered delays in awarding contracts. For example, New York is using $50 million of its RTT grant to develop a data system that will provide teachers with data on areas where their students may be struggling in order to help the teachers improve their instruction. The state planned to issue a Request for Proposals in December 2010 to help identify a contractor who could help develop part of the system. However, state officials told us they needed more time to develop the request because the project was complicated and required input from multiple stakeholders. State officials said they planned to issue the request by the spring of 2011, but at the time of our review, the proposal had not yet been issued. Officials in the states we visited—Delaware, New York, Ohio, and Tennessee—said they experienced other challenges that led to months-long delays in implementing 13 of 29 selected RTT projects. For example, Delaware adjusted its plan for hiring data coaches, individuals who assist teachers with understanding the results of student assessment data and help them modify their instruction. Initially, the state planned to hire 15 data coaches in January 2011 and an additional 20 beginning in September 2011, assuming the cost for each coach was $68,000. However, as they started the process of hiring coaches, state officials determined their cost estimate was insufficient to hire qualified personnel. Instead, they determined they needed about $80,000 per coach and lowered the number of total coaches to 29. Also, state officials determined it would be too disruptive to hire 15 coaches in the middle of a school year. 
The state decided to hire coaches between February and May 2011, with the goal of having all 29 coaches in place by September 2011. Improved planning on the part of the RTT grantees could have minimized the timeline delays that resulted from complicated state-level procurement processes or hiring challenges. Officials from three states acknowledged that at least some of their timelines were overly optimistic. Nonetheless, challenges such as these are not entirely unexpected given the amount of planning needed to assemble a comprehensive reform plan that involves numerous local entities and stakeholders. In addition to the challenges cited, Education’s review of state documentation has taken longer than anticipated, in part because of the department’s need to review changes to state plans. According to Education officials, when Phase 2 grantee states submitted their scopes of work in November 2010, they included changes to their original RTT budgets and timelines, which Education had to review and approve. For example, Education approved Massachusetts’s request to reschedule two activities in the teachers and leaders reform area from year 1 to year 2, due to hiring delays. For these reasons, Education has taken longer than it anticipated to approve state scopes of work. As of April 28, 2011, Education had approved scopes of work for 9 of the 12 RTT grantee states. Department officials said they continue to work with the remaining states to complete the approval process for their scopes of work. As a result of these challenges, states have been slow to draw down their RTT grant funds. As of June 3, 2011, states had drawn down about $96 million, or 12 percent, of the year 1 total RTT grant funds totaling almost $800 million (see table 3), although Delaware and Tennessee have had access to their funds for about a year, and the other grantees have had access to their funds for several months. 
Education officials told us that states have the full 4-year grant period to draw down their entire grant funds. They said states that anticipate not drawing down the full amount of their year-1 budgets have requested changes to their reform plans that would allow them to make additional expenditures in later years. For example, Florida officials plan to request that Education allow them to revise their budgets and allocate some year-1 funds in their budget for year 2. In addition, some states have spent less of their grant funds than originally anticipated, to ensure that sufficient internal controls and cash management procedures were in place before requesting reimbursement. For example, an official from the District of Columbia told us that the District can make drawdowns only after a payment has been made. This is due in part to the District’s status as a “high-risk” grantee, a designation applied by Education to grantees that, among other things, have experienced significant challenges administering their grants in the past. The official explained that, as of March 2011, the District had spent almost $13 million of its own funds for activities related to its RTT grant and that he expected the District to spend funds at a faster pace in the future. Education has provided support to states as they have begun to implement their reform plans. For example, Education assigned program officers to each state to help determine how the department could support the grantee states as they implement their RTT plans. According to Education and several state officials, program officers talk with state officials by telephone at least once a month and review the state’s monthly progress reports to determine if the state is on schedule and on budget and to provide assistance with any state-reported issues. Program officers identify and provide support or direct state officials to appropriate sources of support for any issues associated with implementing funded activities. 
Program officers also answer state officials’ questions and provide guidance and support on an as-needed basis, seeking assistance from department officials when necessary. For example, Education officials told us that, after Delaware approved its school districts’ scopes of work for year 1, the department approved Delaware’s request for an additional year to work with districts to update and improve their plans for years 2 through 4 of the grant period. Officials from most grantee states told us that Education generally provided helpful support after their initial grant awards. In addition to the support provided by program officers, Education created a process to allow states to make changes to their reform plans and issued and updated written guidance and other documents to help states implement RTT activities. For example, Education posted on its Web site a “frequently asked questions” document, as well as state scopes of work, award letters, final budget summaries, and amendment decision letters. Several state officials we spoke with said that having these materials on Education’s Web site is helpful. Education has also provided additional guidance on specific challenges. For example, the department helped Tennessee officials correct their indirect-cost calculations and submit a revised budget after the state was selected as a grantee in Phase 1. After working with Tennessee officials to make the needed changes, Education provided additional guidance on calculating indirect costs for Phase 2 applicants and made this information available for all applicants on the department’s Web site. Education has begun its process to monitor states’ progress in meeting program goals. Since the grants were awarded, the department has been tracking states’ activities and challenges by regularly communicating with states, reviewing their monthly progress reports, and reviewing other documentation, such as state scopes of work. 
Education’s monitoring protocol uses a common set of questions to oversee state progress and to address specific needs and challenges of each grantee. This protocol requires states to submit a progress update each month that provides information on activities selected in consultation with Education and based on their state scope of work and application. In addition, Education will hold discussions with states twice a year. Prior to these discussions, states are to provide additional information, such as any updates needed to their monthly progress reports and their assessment of the extent to which they are on track to reach their performance goals. In addition, Education plans to conduct annual, on-site reviews of RTT program operations and activities in each state and to require states to submit an annual performance report that documents their progress in achieving planned education reforms. The department plans to finalize these reporting requirements in the summer of 2011. According to Education officials, the agency plans to issue various reports based on RTT monitoring: (1) annual state-specific progress reports on RTT starting in late 2011 that will include information on implementation and performance; (2) an annual report on the progress of all 12 states collectively; and (3) a report to be issued at the end of the 4-year grant period on the overall experience, including lessons learned. In addition to federal monitoring, states will monitor school district implementation of grant activities. Education initially required state grantees to submit their school district monitoring plans within 6 months of their grants being announced. However, Education officials told us that state officials wanted to review the department’s monitoring plan before designing their own plans for school districts. In February 2011, Education informed states that their plans for monitoring districts would not be due until Education finalized its state monitoring plan. 
Education finalized its plan in April 2011, and all states subsequently submitted their plans. Education has taken steps to facilitate information sharing and collaboration among states. Specifically, Education is working with a contractor to provide technical assistance, such as developing a Web site through which RTT states can collaborate and hold meetings—known as communities of practice—on topics of common interest. Education officials said the secure Web site allows states to share ideas, documents, and other information. Communities of practice will address topics such as implementing new teacher evaluation systems. Education conducted two webinars in November 2010 on teacher evaluation, and in December 2010, Education convened officials from grantee states in Washington, D.C., to share guidance and challenges on the topic. Additional topics that have been covered include measuring academic growth in nontested subjects, such as music and art. Education officials said that in the future, the communities of practice will include a combination of in-person and online gatherings and will be flexible and responsive to state needs. Education is planning another meeting in the fall of 2011 for states to discuss strategies to turn around low-achieving schools. In addition, grantee states told us they contact each other to exchange information. Delaware education officials, for example, said they shared information with Rhode Island and other states about providing technical assistance to school districts to help them implement reforms at the district level. Tennessee officials told us they shared their state-level plans and their template for school district scopes of work with several Phase 2 grantees before Education published examples on its Web site. However, grantee states expressed interest in additional opportunities to share promising practices. 
North Carolina, Ohio, and Rhode Island plan to develop statewide data systems to improve instruction, which state officials expect will help teachers analyze their students’ performance data to better address academic material that students find difficult to understand. Officials from these states said they are interested in working with other states on developing and implementing these systems. In addition, Tennessee officials told us that once they begin implementing models to turn around low-achieving schools with their Achievement School District, they could share their experiences. They said doing so could be helpful since most states do not have experience with turning around low-achieving schools on the scale that Tennessee plans to attempt. Many nongrantee states continue to implement key reforms. However, officials from most (6 of the 8) nongrantee states we spoke with told us they were not able to access the Web site and were not aware of the Education-sponsored communities of practice. For example, an education official from Arizona said that he receives many e-mails from Education, but the department has not notified him of any plans to share practices or information about RTT. He added that he would appreciate having the opportunity to gain knowledge from grantees. Because Arizona has other federal grants, such as the School Improvement Grant for turning around low-achieving schools, he would like to know how RTT states and school districts are leveraging other federal funding sources to implement activities that align with the RTT reform areas. In addition to states’ interests in sharing information, Education has certain policies that support information sharing and collaboration. Education generally requires states and their subgrantees to make information about their RTT-funded projects and activities available to others by, for example, posting that information on a Web site identified or sponsored by Education. 
Education also requires all program officers responsible for administering discretionary grant programs to share with the public program results and information about significant achievements, including the best available research and practices that could inform other projects. As mentioned earlier, Education’s technical assistance network has provided grantees, but not other states, with opportunities to collaborate on topics, such as teacher evaluation. The RTT grant competition prompted a robust national dialogue about comprehensive education reform and the role of competitive grants to support these reforms. It led some states to undertake new initiatives and others to accelerate their existing and planned educational reform efforts. While it is too soon to know whether these initiatives will help close achievement gaps or significantly improve outcomes for K-12 students, the broader impact of RTT’s reform efforts may be more evident over time through, for example, Education’s impact evaluation study and other related studies. Whether the momentum around the reform initiatives and efforts to implement them can be sustained over time may depend on a number of factors, including the progress that states make as they begin to implement their reform initiatives. In addition, if state funding for K-12 education declines, states might face challenges sustaining RTT reform efforts once grant funds are no longer available. The overarching goal of RTT is to foster large-scale education reform. Sharing information with nongrantee states carrying out similar initiatives can accelerate the pace and scope of reform efforts and is a sound investment of resources. And if states are to get the greatest possible return on investment, efforts to facilitate sharing of information should begin soon. Information sharing among grantees is also important.
Without opportunities for grantees to share information and experiences, states may miss opportunities to learn from each other and leverage their experiences. Although Education provided support to grantees as they began implementing their initial activities, most grantees have faced challenges meeting some interim deadlines. While states might have done a better job of anticipating some of their challenges, they were tasked with developing comprehensive reform plans requiring extensive planning and coordination with a broad array of stakeholders. Missing interim deadlines has not yet derailed states from their original reform plans. However, short-term delays could eventually lead to longer-term delays, and grantees may risk falling short of their ultimate goals. While Education has begun monitoring grantee progress, it is important that Education ensure that states meet their required timelines and receive assistance to stay on track. It is also important that Education continue to gather information from states on their challenges and respond in a timely manner. To ensure that the lessons learned from RTT are shared with all states, and not only grantees, we recommend that the Secretary of Education take the following two actions:

1. Facilitate grantees’ sharing of promising practices on key topics of interest that the department has not yet addressed, such as the design and implementation of data systems to improve instruction.

2. Provide nongrantee states with information from the department’s existing mechanisms, including the secure grantee Web site and communities of practice.

We provided a draft of this report to the Secretary of Education for review and comment. Education’s comments are reproduced in appendix IV. Education agreed that it should facilitate information sharing among grantee states on topics that the department has not yet addressed, and the department said it will do so beginning in August 2011.
However, while the department agreed that sharing information with nongrantees is important, it did not agree that nongrantees should have access to the secure grantee Web site or the communities of practice. As noted in its response, the department believes grantees should have more time to work together on common problems before providing access to specific information-sharing mechanisms to other states. Education also noted that it plans to make the resources and lessons learned from grantee states available to all states at some point in the future. We maintain that nongrantee states that are implementing reforms similar to those funded by RTT could benefit from the discussions grantees have and related documents they may develop. However, we modified our recommendation to acknowledge that Education can provide information from the Web site and communities of practice to nongrantees without necessarily giving them direct access to those mechanisms. Education said that it does not believe that the rate at which states are drawing down their grant funds is a reliable indicator of progress. However, we continue to believe that the relatively low amount of funds drawn down at this point is a result of challenges states have experienced to date. We highlight this issue to acknowledge the implications of—and provide context for—some of the challenges faced by grantee states as they implement the largest competitive grant program that Education has administered. Education provided us with additional information about its program review process and clarified some information related to reasons that states may have delayed spending their first year grants. We modified our report to reflect these clarifications and incorporated the department’s technical comments, where appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Education. 
In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To address our first objective about actions states took to be competitive for Race to the Top (RTT) grants, we reviewed proposed and final requirements for the RTT grant competition, as well as documents from the U.S. Department of Education (Education), including the grant application template, scoring guidelines, and guidance materials. We reviewed RTT applications for 20 of the 47 states that applied for RTT grants, as listed in table 4. The 8 nongrantee states we selected varied in several respects, including the phase in which the state applied, the number of elementary and secondary education students in the state, and the geographic location of the state. We interviewed state education agency officials from the 20 states to review information in their RTT applications and to discuss state efforts to apply for the grant. We identified several policy decisions or legislative actions states took to be competitive for RTT grants in the four major reform areas—enhancing standards and assessments, expanding data systems, developing effective teachers and leaders, and improving states’ lowest-achieving schools. We also identified other actions states took to apply for RTT grants.
To determine whether a state changed a certain policy or law to be competitive for RTT, we used the following criteria: (1) the change in law or policy occurred within the RTT application period, (2) state officials attributed the change, or the effort behind it, to the state’s pursuit of the RTT grant, and (3) state officials reported that the change would not have happened without the RTT competition. To describe state laws or policy changes, we relied on interviews with state officials and documentation they provided, but did not independently analyze or otherwise review state laws or policies. To describe how grantee states planned to use their RTT grant funds, we reviewed states’ RTT applications, RTT grant budgets, and scopes of work. We reviewed narrative statements in the applications in each grantee state in each of the four reform areas. We analyzed RTT grant budgets by calculating the total planned expenditures for all projects by reform area, as well as total planned expenditures for different types of budget categories. Major budget categories included personnel expenses, contracts, or state allocations to school districts. We reviewed grant draw-down amounts provided by Education. We interviewed state education officials from all 12 grantee states, including telephone interviews with 8 grantee states and site visits to 4 grantee states—Delaware, New York, Ohio, and Tennessee. We selected site visit states to provide variation across several criteria, including the grant phase in which the state applied, the number of elementary and secondary education students in the state, the geographic location of the state, and the percentage of school districts participating in the RTT application. During our site visits, we interviewed state officials and officials from three to four school districts per state.
To provide a range of perspectives, we selected school districts that varied across several criteria, including the extent to which the district was mentioned in the state RTT application; whether the district was in an urban, suburban, or rural area; the percentage of high-minority schools in the district; and the percentage of high-poverty schools in the district. In total, we interviewed officials from 15 school districts, including three interviews by telephone. We interviewed officials in grantee states and districts about their planned uses for RTT grant funds, their perspectives on the benefits of their planned uses, challenges they have experienced in beginning to implement grant activities, and support provided by Education. To summarize the extent to which nongrantee states have chosen to implement reforms planned in their RTT applications, we reviewed the relevant RTT applications and interviewed state education officials from the 8 selected nongrantee states by telephone. We chose major policy actions outlined in their RTT applications and asked the nongrantees about the status of those actions. To summarize challenges that grantee states faced when implementing the RTT grants, we interviewed state education officials from all 12 grantee states, including the four site-visit states. Across the four site-visit states, we selected 29 projects for in-depth review. The projects were selected based on the amount of funding planned for the project and to ensure variation across the four reform areas. To assess how Education was responding to states’ challenges and otherwise providing support to states and planning to monitor states, we interviewed officials from the Office of Elementary and Secondary Education and the Implementation and Support Unit. We also interviewed officials in the Institute of Education Sciences about its RTT evaluation and officials in the Risk Management Service about their role in monitoring high-risk RTT states.
We also reviewed relevant federal laws, regulations, and Education guidance documents, including the notice inviting applications for RTT, the final rule for the competition, the RTT application template, an internal handbook for administering discretionary grants, a document describing Education’s process for making amendments to documentation related to Education’s RTT monitoring plans, a “frequently asked questions” document, and technical assistance presentation slides and meeting transcripts. We also reviewed selected states’ monthly reports submitted to Education. These documents helped us determine the extent to which Education provided support and guidance to states during the application process and as states began to implement their grant activities. We conducted this performance audit from April 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table provides a description of RTT, RTT Assessment Program, and the Investing in Innovation grant funds. The following table provides the criteria Education identified for application reviewers to use as part of the process to make RTT grant awards. Elizabeth Morrison, Assistant Director, and Jason Palmer, Analyst-in-Charge, managed this assignment and made significant contributions to all aspects of this report. Jaime Allentuck, Corissa Kiyan, and Rebecca Rose also made significant contributions. Additionally, James E. Bennett, Alexander G. Galuten, Bryon Gordon, Kirsten B. Lauber, Steven R. Putansu, Kathleen van Gelder, and Sarah Wood aided in this assignment.
Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011.

Grant Monitoring: Department of Education Could Improve Its Processes with Greater Focus on Assessing Risks, Acquiring Financial Skills, and Sharing Information. GAO-10-57. Washington, D.C.: November 19, 2009.

Student Achievement: Schools Use Multiple Strategies to Help Students Meet Academic Standards, Especially Schools with Higher Proportions of Low-Income and Minority Students. GAO-10-18. Washington, D.C.: November 16, 2009.

No Child Left Behind Act: Enhancements in the Department of Education’s Review Process Could Improve State Academic Assessments. GAO-09-911. Washington, D.C.: September 24, 2009.

Teacher Quality: Sustained Coordination among Key Federal Education Programs Could Enhance State Efforts to Improve Teacher Quality. GAO-09-593. Washington, D.C.: July 6, 2009.

No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.

Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States’ and Localities’ Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010.

Recovery Act: One Year Later, States’ and Localities’ Uses of Funds and Opportunities to Strengthen Accountability. GAO-10-437. Washington, D.C.: March 3, 2010.

Recovery Act: Status of States’ and Localities’ Use of Funds and Efforts to Ensure Accountability. GAO-10-231. Washington, D.C.: December 10, 2009.
In the American Recovery and Reinvestment Act of 2009, Congress required the U.S. Department of Education (Education) to make education reform grants to states. Education subsequently established the Race to the Top (RTT) grant fund and awarded almost $4 billion to 12 states related to developing effective teachers and leaders, improving the lowest-achieving schools, expanding student data systems, and enhancing standards and assessments. This report, prepared in response to a mandate in the act, addresses (1) actions states took to be competitive for RTT grants; (2) how grantees plan to use their grants and whether selected nongrantees have chosen to move forward with their reform plans; (3) what challenges, if any, have affected early implementation of states' reform efforts; and (4) Education's efforts to support and oversee states' use of RTT funds. GAO analyzed RTT applications for 20 states, interviewed state officials, visited 4 grantee states, analyzed states' planned uses of grant funds, and interviewed Education officials. State officials GAO interviewed said their states took a variety of actions to be competitive for RTT grants. Of the 20 states GAO interviewed, officials in 6 said their states undertook reforms, such as amending laws related to teacher evaluations, to be competitive for RTT. However, officials from 14 states said their reforms resulted from prior or ongoing efforts and were not made to be more competitive for RTT. While officials in all 20 states told GAO that applying for RTT took a significant amount of time and effort, several of them also said their state benefited from the planning that the application process required. Grantees plan to use RTT grant funds to implement reforms in four areas. The largest percentage of state-level RTT funds will be used to increase the effectiveness of teachers and leaders.
GAO interviewed officials in 8 nongrantee states who said they expect to continue implementing parts of their RTT plans, though at a slower pace than if they had received a grant. Most grantee states have faced a variety of challenges, such as difficulty hiring qualified personnel, that have delayed implementation. As a result, as of June 2011, about 12 percent of first-year grant funds were spent, and some projects were delayed several months. Some state officials said they expect to spend more funds soon and may seek Education's approval to reallocate some first-year grant funds into later years. Education has provided extensive support to grantee states and has begun monitoring. Education assigned a program officer to each state to assist with implementation and has developed ways for grantees to share information, such as hosting meetings on specific initiatives. Some officials from nongrantee states said they would find this information useful, but they were generally unaware of these resources or were unable to access them. GAO recommends that the Secretary of Education (1) facilitate information sharing among grantees on additional promising practices and (2) provide nongrantee states with related information. Education agreed with the first recommendation and partially agreed with the second; GAO modified that recommendation to clarify how Education can provide that information to nongrantee states.
PRWORA overhauled the nation’s welfare system by abolishing the previous welfare program, AFDC, and creating the TANF block grant. PRWORA established four broad goals for TANF, which included (1) providing assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; (2) ending dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) preventing and reducing the incidence of out-of-wedlock pregnancies; and (4) encouraging the formation and maintenance of two-parent families. Unlike the previous program, TANF gives states great flexibility to design programs that meet these goals. However, while states have flexibility, the programs they design must meet several federal requirements that emphasize the importance of work and the temporary nature of TANF. For example, PRWORA requires that parents receiving assistance engage in work, as defined by the state, after receiving assistance for 24 months, or earlier, at state option. In exercising their option, 28 states require immediate participation in work, and 9 other states require participation in work within 6 months of receiving cash assistance, resulting in great interstate variation in program provisions. Further, despite the programmatic flexibility authorized by TANF, states must meet federal data reporting requirements by submitting quarterly reports that include information from administrative records about those receiving welfare and those terminated from assistance, as well as an annual report, to HHS. The annual report contains information about program characteristics, such as states’ activities used to prevent out-of-wedlock pregnancy. In 1995, we reported that the block grants enacted as part of the Omnibus Budget Reconciliation Act of 1981 (OBRA) carried no uniform federal information requirements.
We found that the program information states collected was designed to meet individual states’ needs and that, as a result, it was difficult to aggregate states’ experiences and speak from a national perspective on the block grant activities or their effects. Without uniform information definitions and collection methodologies, it was difficult for the Congress to compare state efforts or draw meaningful conclusions about the relative effectiveness of different strategies. In a second examination of federal block grant programs, we reported that problems in information and reporting under many block grants—the Education Block Grant, the Community Services Block Grant, and the Alcohol, Drug Abuse, and Mental Health Services Block Grant—have limited the Congress’ ability to evaluate them. However, for the TANF Block Grant, the regulations require that states submit the quarterly TANF Data Report and the TANF Financial Report or be subject to statutory penalties. For these reports, HHS provides data reporting specifications including timing, format, and definitions for such data topics as family composition, employment status, and earned and unearned income. These specifications facilitate the use of HHS’ TANF administrative data for welfare reform research by improving the data’s comparability from state to state. Several national surveys and data collected for state and local studies of welfare reform also are potential sources of data for an assessment of TANF. A number of national surveys that collect information about welfare receipt have been used in the past by researchers to analyze welfare reform or have been developed to assess current welfare reform. Four surveys—the Survey of Income and Program Participation (SIPP), the Current Population Survey (CPS), the National Longitudinal Survey of Youth (NLSY), and the Panel Study of Income Dynamics (PSID)—have been used in past research on the AFDC program and the low-income population in general. 
Both the SIPP and the PSID have updated their questionnaires to include questions that pertain to welfare reform specifically, including questions about the work participation requirements and penalties for not complying with these and other program rules. Moreover, two national surveys are designed specifically to answer questions about welfare reform. The U.S. Census Bureau, at the direction of the Congress, is conducting a longitudinal survey of a nationally representative sample of families, with emphasis on eligibility for and participation in welfare programs, employment, earnings, the incidence of out-of-wedlock births, and adult and child well-being. This survey, the Survey of Program Dynamics (SPD), was designed to help researchers understand the impact of welfare reform on the well-being of low-income families and children. Similarly, the Urban Institute has been conducting a multiyear project monitoring program changes and fiscal developments, along with changes in the well-being of children and families. Part of this project includes a nationally representative survey of 50,000 people called the National Survey of America’s Families (NSAF) that is collecting information on the well-being of adults and children as welfare reform is implemented. With the change in the fundamental structure of the nation’s welfare program, there have been several efforts by private research organizations to document the policies states have adopted under TANF. The Center for Law and Social Policy and the Center on Budget and Policy Priorities, in collaboration, have created the State Policy Documentation Project to document policies in all 50 states and the District of Columbia. Available on the Web, the State Policy Documentation Project contains information about state policies contained in statutes, regulations, and caseworker manuals, but it does not describe state practices.
In addition, the Urban Institute has developed and made available to the public a database that documents changes in state program rules since 1996. Prior to and since TANF’s implementation, a considerable body of research about the low-income population has been conducted to examine the circumstances of families affected by welfare reform, the effectiveness of welfare reform initiatives, and the implementation of TANF at the state level. HHS has played a major role in laying the foundation for this welfare reform research. During the early 1990s, HHS granted waivers to states that allowed them to test various welfare reform provisions. In return, states were required to evaluate the effectiveness of the waiver provisions by randomly assigning welfare recipients to either participate in the waiver program or not. With the passage of TANF, states were given the option to continue their waiver evaluations as originally designed or modify the evaluation design. Several states opted to continue with their original random assignment design, while others modified their evaluation designs to focus on examining the implementation of the waivers or describe participants’ employment, earnings, and well-being. Because some elements of the waivers granted to states were incorporated into many TANF programs, the waiver evaluations provide useful insights into issues and designs for research about TANF. However, according to HHS, one aspect of waiver policies may mean that some waiver evaluations may not represent TANF requirements completely. TANF established work requirements for all adult recipients, but states could delay adhering to these requirements under their TANF program, in part or whole, if the requirements were inconsistent with state waiver policies. 
Under the Job Opportunities and Basic Skills Training (JOBS) program, work requirements were mandatory for a work-ready or able-bodied population, excluding a number of subgroups such as those caring for young children and the disabled. For the most part, states that continued the original random assignment design maintained some or all of the JOBS exemptions from work requirements and applied these exemptions in determining who was subject to time-limited assistance. Consequently, while these states’ waivers may incorporate other work policies prescribed under TANF, these policies would not be expected to affect the exempt population. In contrast, in states that do not claim JOBS exemptions from work requirements, all adults are subject to work requirements and time limits on assistance. Thus, while testing TANF-like policies, evaluations that continued the random assignment design may not fully reflect the experience, outcomes, or impacts of fully implemented TANF requirements. In addition to the waiver evaluations, HHS, as well as private foundations, has provided funding for demonstration programs across the country. The demonstration programs are pilot projects designed to measure the effects of a particular strategy, rather than an entire program, on welfare recipients or those eligible to receive welfare. Many of these demonstration programs were intended to increase employment, decrease out-of-wedlock pregnancy, or promote marriage. For example, in the late 1980s, several demonstration programs aimed at decreasing teen pregnancy among welfare recipients were developed. One program, the New Chance Demonstration, randomly assigned teen mothers receiving welfare to participate in a program that offered education or training classes and other support services and then compared the accomplishments of these teen mothers with those of teen mothers who did not participate in the program. 
Given states’ greater responsibility for welfare programs under PRWORA and the larger number of people leaving the welfare rolls, there has been general interest among program administrators and state and local policymakers about the condition of those who are no longer receiving TANF, otherwise known as “leavers.” In response to this concern, a growing body of research about leavers has been initiated at both the state and federal levels. Generally, researchers have found that once low-income families leave welfare, they become hard to keep track of. Moreover, we previously reported that studies of former TANF recipients’ status differ in important ways, including geographic scope, the time period covered, and the categories of families studied, which limits the comparability of the data across states. In order to facilitate cross-state study comparisons, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) within HHS has issued guidance to states and the research community on developing comparable measures for commonly reported outcomes and defined these outcomes. In fiscal year 1998, ASPE awarded approximately $2.9 million in grants to 10 states and three large counties to study leavers, followed by additional grants in fiscal years 1999 and 2000. ASPE also has encouraged the researchers to use comparable measures. Research is also being conducted to examine the effects of welfare reform in metropolitan areas or neighborhoods. This area of research is important because the caseload decline in urban areas has been substantially lower than in other areas of the country. Moreover, urban areas can have higher unemployment rates and a greater concentration of poverty than suburban or rural communities; thus, insights gathered from these studies will be useful in understanding the potential for the success of welfare reform in the event of an economic downturn. 
For example, one study—the Three City Study—will survey primarily low-income, single-mother families from poor and moderate-income areas in Boston, Chicago, and San Antonio, with half of those surveyed being TANF recipients. The survey will collect information on adult and family well-being, employment, and welfare receipt three times within 4 years. Finally, a body of welfare reform research examines the implementation of TANF at the state and local levels. Since PRWORA has not only granted states greater responsibility for providing cash assistance but also changed the nature of cash assistance, it is important to learn how states and localities are coping with these changes. Much of the research about program implementation focuses on challenges faced by state, and in some cases local, administrators in implementing TANF. Typically, in this research qualitative data are collected by visiting state or local TANF agencies; reviewing program records; and interviewing agency officials, caseworkers, and clients. For example, the State Capacity Study conducted by the State University of New York, Rockefeller Institute of Government, is collecting data in 20 states about the implementation of TANF at the state level, such as the structure of government services and information systems used to track clients. Because we expect much of the reauthorization debate to focus on TANF’s four legislative goals, the framework for our data assessment was based on those goals. To assess whether data exist to address the goals, we first created a list of “descriptive” and “effect” research questions relevant to each goal. 
Descriptive questions concern a low-income individual’s or family’s status or behavior, such as the receipt of TANF cash assistance or support services like transportation, housing, child care, or health services; an adult’s employment status and earnings; and a family’s reliance on non-TANF government benefits, such as Food Stamps, Medicaid, or the Earned Income Tax Credit. Effect questions concern the extent to which changes in an individual’s or family’s status or behavior, such as obtaining employment, earning income, avoiding out-of-wedlock births, or forming a two-parent family, are the result of the TANF program. These research questions represent the broad issues that the Congress will consider during TANF’s reauthorization. To summarize our findings, we identified data categories associated with TANF’s goals, some of which are more narrowly focused than the research questions. The data categories represent combinations of topics we found in the data, such as employment and earnings or family and child well-being, that were associated with the research questions. Figure 1 shows the relationships among TANF’s goals, the research questions, and the data categories, several of which are associated with more than one question. We then compared the data categories with the HHS administrative data, the data collected by national surveys, and the data derived from existing and planned studies. Our assessment of the data’s usefulness for determining TANF’s progress is based on the data’s strengths and weaknesses, the design of the survey or study for which the data were gathered, and the topics to which the data related. The criteria we used in assessing the strengths and weaknesses of survey data included survey sample size, the attrition rate of respondents from whom data were collected over time, and survey response rate. For administrative data, we examined the geographic scope and the comparability of the data among states.
The design features examined included the data collection method, whether the data were collected at one point in time or at different points in time, and whether the data were used for descriptive analysis of TANF or AFDC program recipients and their families or analysis of the program’s effects. Data that can be used for descriptive analysis are useful for research that addresses questions in the descriptive column of figure 1, and data that can be used for analyses of effect are useful for questions in the effect column of the figure. Together, national surveys, HHS administrative data, and data from state and local studies of welfare reform address TANF’s four legislative goals. The national data provide extensive information related to TANF’s goals of providing assistance to needy families and ending dependency on government benefits through job preparation, work, and marriage. State and local data not only address the same goals as the national data but in some cases also provide information related to the goals of preventing out-of-wedlock pregnancies and promoting family formation. National data provide detailed descriptive information related to two of TANF’s goals, but limited information related to TANF’s goals of preventing out-of-wedlock pregnancies and promoting family formation. HHS administrative data and the six national surveys we examined—the CPS, NLSY, NSAF, PSID, SIPP, and SPD—provide descriptive information related to TANF’s goal of providing assistance to needy families, including information about the change in size and composition of the TANF caseload and the use of noncash assistance by current and former TANF recipients (see fig. 2). National data also address TANF’s goal of ending dependence on government benefits by describing the circumstances of those receiving TANF and those who are no longer receiving TANF.
HHS administrative records and national surveys provide descriptive information about TANF recipients’ participation in work activities, employment status, earnings, and other family well-being measures. HHS administrative records contain information only about whether a recipient is working and how much income that individual earns, while national surveys collect more detailed employment and earnings data, such as the types of jobs held and the hourly wage. National data are also available about family well-being measures, which provide information about how TANF’s focus on work and marriage may be changing the lives of low-income families. For instance, national surveys have information about the amount of personal income spent on health and housing, whether recipients or former recipients rent or own housing, and the well-being of children of welfare recipients. Several of the national surveys provide information about children’s school attendance or developmental status, while SIPP and SPD also collect data about the number of births to teenagers. SIPP is the only national survey we examined that contains information about whether parents have had to terminate their parental rights or give a child up for adoption. National data related to the goals of preventing out-of-wedlock pregnancy and promoting family formation are limited. While all the national data sets include information about recipients’ and nonrecipients’ marital status, only HHS administrative records contain information about out-of-wedlock births among the TANF caseload. However, states did not begin reporting this information to HHS until fiscal year 2000. Aside from information about welfare reform in general, national surveys and HHS collect information about several different groups of individuals affected by TANF, including those who remain on assistance, those who no longer receive TANF, those who are diverted from TANF, and those who are eligible but choose not to participate. 
HHS administrative data and all six national surveys collect data about current and former TANF recipients, but the type of information collected about these individuals differs. As figure 3 shows, only the NSAF and SIPP have data about those diverted from TANF, while the NLSY, NSAF, PSID, SIPP, and SPD have data about individuals who are eligible to receive TANF but do not. The state and local data we reviewed can be classified into four categories that complement and, in some cases, fill in gaps not covered by the national data. Waiver data come from evaluations that tested the effects of programs implemented by states under waivers approved by HHS prior to TANF. Demonstration data come from studies that tested the effectiveness of particular strategies aimed at individuals either receiving welfare or eligible to receive welfare. Leavers data come from administrative records and surveys that describe the circumstances of those who left welfare. Finally, metropolitan and community-based data come from studies that, in general, describe the circumstances of low-income families and TANF participants in specific metropolitan areas, neighborhoods, or communities. Waiver data have been used to examine the effects of TANF-like provisions on welfare recipients’ employment status, birth rates, and marital status, as shown in figure 4. Several states have been evaluating the waiver provisions in their welfare programs by randomly assigning welfare recipients to either the waiver program or AFDC. Waiver programs require participants to follow provisions that later were required or permitted under TANF, such as being required to work or risk losing eligibility for benefits or being allowed to receive welfare for only a limited time. Most of the waiver program evaluations collected data used to analyze the effect of waivers on welfare receipt, employment, and income.
Data from several of the evaluations have also been used to analyze the effects of waivers on out-of-wedlock pregnancy or family formation. With the passage of PRWORA, several states incorporated their waiver provisions into their TANF program and have been collecting data about the experiences of participants in the program. Some of these states chose not to continue their evaluations as originally designed, instead conducting modified evaluations that typically involved studies that will provide information on the experience of implementing the program. For example, Montana is surveying TANF participants to collect data about the duration of their welfare receipt, the types of noncash assistance they use, and their employment. Demonstration data provide information on topics that are similar to those addressed by waiver data and have also been used to analyze the effects of programs on their participants, but demonstration data differ in two key ways. First, most demonstration data, including all data related to pregnancy prevention and family formation, were collected before PRWORA was enacted. Second, demonstration data were collected for studies focused on how a particular approach affected program participants. In fact, many of the demonstration data we examined were used entirely to assess the effects of various strategies on participants’ employment status and earnings, which helps to distinguish the effects of particular provisions included in a program like TANF. Leavers data provide descriptive information about those who have left welfare. This information includes the length of time an individual received TANF, reasons for leaving welfare, types of noncash assistance used, and employment and earnings information. In addition, some leavers data sets contain information about former recipients’ marital status, and a few have data about the number of pregnancies and births among former recipients. 
Metropolitan and community-based data cover some of the same issues as the other data categories, including information about TANF work requirements and time limits. Although the same issues are addressed, the data are collected in large cities or neighborhoods in order to examine the circumstances of welfare recipients in areas that may have high concentrations of poverty or limited access to jobs. In addition, metropolitan and community-based data provide information about groups other than TANF recipients and former recipients—including individuals diverted from TANF and those who are eligible to participate in TANF but do not. Although existing data provide rich information about the lives of families who are receiving or have received TANF, the strengths and weaknesses of these data affect their usefulness for understanding welfare under the TANF block grant. National data can be analyzed to gain a descriptive picture of what has happened under TANF for the nation as a whole. However, of the seven national data sets we reviewed, only two can be used to describe the well-being of families receiving TANF within individual states. Although waiver and demonstration data can be analyzed to gain information about TANF’s effects, these analyses can be done within only a limited number of states and disparate localities. We examined nearly 40 data sets that could be analyzed for information about the circumstances of former recipients. However, only a subgroup of these data sets met criteria that allowed the sample to be generalized statewide. These data sets represented 15 states. In some cases, the value of survey data collected from those who left welfare was limited because few former recipients actually responded to the surveys: some could not be located, and others chose not to answer the questions posed to them.
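The response-rate arithmetic behind such judgments is straightforward; a minimal sketch in Python, using the 70-percent generalizability standard discussed later in this report (all counts below are invented for illustration and are not taken from any study cited here):

```python
def response_rate(completed: int, sampled: int) -> float:
    """Share of those asked to respond who actually did."""
    return completed / sampled

# The 70-percent threshold commonly applied before survey results
# are generalized beyond the respondents themselves.
STANDARD = 0.70

# Hypothetical leavers survey: 2,000 former recipients sampled,
# 640 could not be located, 250 declined, the rest responded.
completed = 2000 - 640 - 250
rate = response_rate(completed, 2000)
print(f"{rate:.1%}", "generalizable" if rate >= STANDARD else "not generalizable")
```

In this invented example the rate falls well short of the standard, so the results could not be generalized to the state's leavers without first comparing respondents with nonrespondents, as several of the studies discussed below did.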
Metropolitan and community-based data can be analyzed to describe changes over time in the lives of welfare recipients in urban centers. Much of this data collection will continue beyond 2001. The strength of the national data is that they were collected from samples selected randomly from the nation’s population and include low-income families and TANF recipients in numbers sufficient to allow reliable estimates about these groups. In addition, most of the national data were collected for the same individuals over time, allowing changes in welfare recipients’ employment, earnings, and well-being to be tracked across programs implemented at different times. However, all the national surveys have participants who drop out of the survey sample over time, and this may limit how well the samples represent the nation’s welfare recipients. National data are collected from random samples that contain low-income families and TANF recipients. Because samples from national surveys are selected randomly, they are, at the time of selection, representative of the population at large, including the welfare population. In addition, all the national data sets we reviewed have sample sizes large enough to allow reliable estimates about the nation’s low-income and TANF populations—as sample size increases, the degree of precision of the estimates made using that sample also increases (see table 1). As shown in figure 5, two national data sources collect data on individuals at one point in time; others collect data on the same individuals across time. In both cases, the data can be used for comparisons between groups of individuals living under welfare provisions implemented at different time periods. Five national surveys—the CPS, NLSY, PSID, SIPP, and SPD—collect data from the same individuals over time. For the SIPP, the Census Bureau, after a specified period, changes the group of individuals from whom data are collected.
For example, the 1993 SIPP panel followed a group of individuals through 1996. In 1996, a new group was randomly selected and followed through 2000. Data collected over time could be analyzed to describe how people cycle on and off TANF, how their use of benefits changes over time, and how their family well-being changes. In addition, comparisons could be made between groups covered by different welfare provisions. For example, AFDC recipients included in the 1993-96 SIPP panel could be compared with TANF recipients who were part of the 1996-2000 SIPP panel. The NSAF, as well as HHS administrative records, has collected data from different samples of individuals in different years. For example, in 1997 one group of people completed the NSAF; another group completed the survey in 1999. In cases such as these, the samples from different years can be compared with each other to look for changes across time. For those national surveys that collect information about changes in welfare across time, the likelihood that survey participants will drop out over time increases, potentially affecting how well the data actually represent all members of the nation’s low-income and TANF populations. In general, the greater the attrition rate, the less likely a sample is to be representative of the larger population from which it was drawn. Those who have continued participating in the survey may be different from those who stopped or dropped out. As surveys that collect data over time, the NLSY, PSID, SIPP, and SPD all have experienced sample loss, as shown in table 2. Concerns about attrition are especially significant for the SPD, because it was designed specifically to track welfare recipients from AFDC through TANF. Census has tried mathematically adjusting available responses to compensate for the survey’s sample loss, but this adjustment has not sufficiently remedied the problem, according to a Census official. 
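The mathematical adjustment referred to above is, in broad terms, a nonresponse reweighting: respondents who remain in the panel are weighted up so that the retained sample mirrors the composition of the original one. A minimal sketch of the idea, with invented group labels and counts (this is an illustration of the general technique, not Census's actual procedure):

```python
def attrition_weights(original: dict, retained: dict) -> dict:
    """Weight each group by (original share) / (retained share), so the
    weighted retained sample reproduces the original sample's mix."""
    n_orig = sum(original.values())
    n_ret = sum(retained.values())
    return {
        group: (original[group] / n_orig) / (retained[group] / n_ret)
        for group in original
    }

# Hypothetical panel: welfare recipients drop out at a higher rate,
# so they are underrepresented among those who remain.
original = {"recipients": 400, "nonrecipients": 600}
retained = {"recipients": 200, "nonrecipients": 500}
weights = attrition_weights(original, retained)
print(weights)  # recipients weighted above 1, nonrecipients below 1
```

Reweighting of this kind restores the original mix only on the characteristics used to form the groups; it cannot compensate for ways in which dropouts differ from stayers within a group, which is one reason such an adjustment may not sufficiently remedy attrition.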
Census will take steps to lessen attrition through intensive follow-up with survey dropouts to enlist their participation and through the use of monetary incentives for future respondents to the survey. For national surveys, the response rate—the proportion of the people in the survey sample who actually responded among all those asked to respond—has been large enough to allow the survey results to be generalized beyond those who completed the survey, with the exception of the 1999 NSAF. Most practitioners of survey research, including GAO, require at least a 70- to 75-percent response rate before survey data can be generalized beyond those who completed the survey. As table 3 shows, the response rate for all the national surveys except the 1999 NSAF was at or above the 70-percent standard. Given the survey’s response rate, using the 1999 NSAF survey data would require determining whether patterns in who responded and who did not respond existed and what this means for how well respondents represent the original sample. For those surveys that collect data on the same individuals over time, response rates sometimes are considered in conjunction with rates of attrition. The major limitation of most existing national data is that they cannot be used for state-level analyses. In general, national data sources have state sample sizes that are too small to allow reliable generalizations about TANF recipients within individual states. The NLSY, PSID, SIPP, and SPD collect data not from states per se, but from regions that, in some cases, include more than one state. Thus, while these data can be analyzed to provide a descriptive picture of TANF for the nation, they cannot be used within states for descriptive analyses or to analyze the effects of states’ TANF provisions. Researchers nonetheless sometimes use these data sources for state-level analyses.
For example, some researchers combine several years of CPS data to obtain adequate sample sizes within states for state-level analyses. However, Census, which administers the CPS, SIPP, and SPD, does not recommend using data from these surveys for state-level analyses, because doing so when sample sizes are small may produce findings that are not reliable. Two national data sources, HHS administrative records and the NSAF survey, can be used for state-level analysis, but with limitations. HHS administrative records provide data from all 50 states and the District of Columbia. However, the reporting requirements for these data are not completely standardized across states, so that how a variable is defined may vary among states. For example, each state may define the work or work-related activities in which TANF recipients participate as they think appropriate to the state program. Like HHS administrative records, NSAF survey data can be used for state- level analyses. NSAF has samples large enough to allow state-level analyses in 13 states, representing 58 percent of the fiscal year 1999 national TANF caseload; this is not the case in the 37 remaining states. For example, the number of low-income children surveyed for the 1997 NSAF ranged from a low of 760 to a high of 1,813 in each of the 13 states where NSAF collected samples large enough to permit state-level analysis. However, the number of low-income children surveyed in the 37 remaining states averaged 35 per state, a number too small to allow reliable conclusions about the children of TANF recipients in any of these states. Even if the issue of sample sizes within states were resolved, obstacles to using the national data to analyze TANF’s effects within states would still exist. The lack of information about the choices states have made about TANF policies and program rules has been identified as one of the challenges to using national data to analyze TANF’s effects. 
However, research organizations have collected this information. The Center for Law and Social Policy has worked with the Center on Budget and Policy Priorities to document policies in all 50 states and the District of Columbia, and the Urban Institute has developed a state database that documents state program rules. Yet, even with this information, using national data to measure state-level effects poses challenges. The first challenge is deciding with whom TANF recipients should be compared. To test TANF’s effects, the employment, earnings, and well-being of individuals in the program must be compared with those of individuals who are not in the program. In the case of TANF, it would be difficult to determine what group should provide the point of comparison. Because waivers introduced TANF-like policies and program rules while AFDC was still in effect, it would be difficult to select a group of welfare recipients whose experiences with welfare were not influenced by TANF. The second challenge is determining the effect of any single welfare provision given the multiple provisions that make up states’ TANF programs. For example, TANF recipients are required to work, and states must impose penalties or sanctions when recipients do not comply with work requirements. In such cases, it would be difficult to disentangle the individual effects of work requirements from those of any penalties or sanctions that were imposed. A third challenge is detecting the long-term effects of state programs that have been recently implemented. Although PRWORA was enacted in 1996, states implemented their TANF programs at different points in time. Some states were still refining their TANF programs at the beginning of 1998. Consequently, the long-term effects of TANF may not yet be realized. Finally, state-level analyses may not be the best way to measure TANF’s effects in every state.
Some states have further devolved TANF to localities, and different localities may implement a state’s TANF provisions differently. In total, 17 states have given local governments responsibility for TANF program design and implementation. The strength of the waiver and demonstration data is that they can be used to analyze TANF’s effects, but with few exceptions these data were collected from city and county samples rather than statewide samples. (See app. II for the localities examined.) Most of the waiver and demonstration data were collected as part of experiments—studies that randomly assigned welfare recipients to groups that were subject to different welfare provisions. Experiments, when done correctly, are recognized as the most rigorous way of determining the extent to which an observed outcome can be attributed to the program itself, rather than to differences among the program participants. Over half of the waiver data sets and virtually all of the demonstration data sets we reviewed consisted of data from experiments. Of the waiver data sets, about half were collected from city and county samples, with the others being collected from statewide samples. All of the demonstration data sets were collected from city and county samples. Overall, 6 of the 54 waiver and demonstration data sets that could be used for analyses of effect were collected from statewide samples. According to the project directors of two waiver evaluations, the high cost of conducting rigorous program evaluations may explain, in part, why data sets used to analyze TANF’s effects tend to use samples from cities and counties and not entire states. Given limited resources, researchers may choose to conduct rigorous evaluations in selected cities or counties rather than sacrifice rigor to evaluate a program statewide. 
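The logic of the random-assignment experiments described above can be sketched in a few lines: because assignment to the program is random, the two groups are alike on average, and a simple difference in mean outcomes estimates the program's effect. All figures below are invented for illustration and do not come from any evaluation cited in this report:

```python
import random

random.seed(0)

# Hypothetical evaluation: 1,000 welfare recipients randomly assigned
# to a waiver-style program group or a control (AFDC-style) group.
people = list(range(1000))
random.shuffle(people)
program, control = people[:500], people[500:]

# Invented quarterly earnings; the program adds $300 on average.
earnings = {i: 2000 + random.gauss(0, 100) for i in people}
for i in program:
    earnings[i] += 300

def mean(group):
    return sum(earnings[i] for i in group) / len(group)

effect = mean(program) - mean(control)
print(f"estimated effect: ${effect:.0f}")  # close to the true $300
```

With random assignment, the difference in means is attributable to the program itself rather than to preexisting differences between the groups, which is why such designs are considered the most rigorous, and also why they are costly: they require tracking outcomes for both groups over time.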
Data sources we reviewed for both the Vermont and Iowa waiver evaluations mentioned budget constraints as a factor that led researchers to limit their data collection efforts. Another limitation of the waiver and demonstration data is that most often they were collected prior to the implementation of TANF. This is not surprising given that in many cases the waiver provisions and the demonstration projects were intended to test provisions before they were adopted and implemented. However, the provisions tested may not have been those ultimately adopted by the state. Finally, in almost all cases in which waiver evaluations and demonstration projects collected survey data, response rates were above the 70-percent standard (see table 4). The strength of the leavers data is that in most cases, they were collected from statewide samples. However, in some cases, leavers data collected using surveys may not be representative of a state’s leaver population. Although we reviewed nearly 40 leavers data sets, on the basis of the type of data available, response rates, and the absence of significant differences between survey respondents and nonrespondents, we concluded that state-level analyses could be done for 15 states using the data sets we examined. To be representative of a state’s leaver population, survey data need to meet the 70-percent standard for response rates, or, through a comparison of survey respondents with nonrespondents, show that the two groups do not differ significantly. When a state has both administrative data and survey data available, the administrative data could be used in place of survey data that are not representative. As figure 6 shows, Arkansas, Florida, Georgia, North Carolina, and South Carolina have either survey data that meet the standard for response rates or data from survey respondents who were not significantly different from nonrespondents.
Arizona, Colorado, the District of Columbia, Illinois, Kansas, Missouri, Virginia, Washington, and Wisconsin have both administrative data and survey data. The response rate for the District of Columbia, Illinois, Kansas, Virginia, and Wisconsin was below 70 percent, but for Virginia, a comparison of respondents with nonrespondents revealed no significant differences between the two groups. Although New York has no survey data, its administrative data provide information about the state’s leavers. California, Massachusetts, and Texas are the three states for which, given the available data, state-level analyses of leavers cannot be done. We previously reported that eight leavers studies covering seven states had collected adequate information to allow the study findings to be generalized to the states’ welfare populations. Thus 4 states—Indiana, Maryland, Oklahoma, and Tennessee—can be added to the list of 15 states we identify in figure 6 as having data that can be generalized statewide. In appendix II we list all the sources we reviewed that provide data on those who have left welfare. Some researchers may wish to compare those who left TANF with those who left AFDC on outcomes such as employment, earnings, and well-being. Contrasting outcomes for these two groups would require deciding which AFDC leavers provide the best point of comparison. Many factors specific to the year in which recipients left the welfare rolls would influence their employment prospects, wages, and well-being. For example, labor markets and economic conditions in a given year would influence former recipients’ employment opportunities. Historical influences such as these would complicate the issue of selecting a comparable group of AFDC leavers and TANF leavers. The strength of the metropolitan and community-based data is that they can be used in descriptive analyses that provide information about how the lives of low-income families and TANF participants have changed over time. 
Because data collection is occurring over time, in some cases it has yet to be completed. For example, the Los Angeles Family and Neighborhood Study (LA FANS) is collecting data about participation in welfare programs from residents of 65 neighborhoods in Los Angeles County over a 4-year period. LA FANS began data collection in January 2000 and will continue data collection through 2004. Most of the materials we reviewed regarding metropolitan and community-based data sets did not report information about attrition rates. When response rates were reported, they were above the 70-percent standard. Figure 7 shows the time periods for which the data are or will be available for different metropolitan areas and communities. Three of the metropolitan and community-based data sources have measures that can be used to analyze TANF’s effects, even though the data were not collected as part of an experiment. For example, data from the Fragile Families study can be used to examine TANF’s effects by drawing comparisons between the 3,675 unmarried parents and the 1,125 married parents who compose the survey sample in cities with populations over 200,000. Data collection for Fragile Families began in 1998 and will continue through 2004. The data have already been used to examine differences in relationship quality between married and unmarried couples, including whether a father gave money to or helped a mother in a nonmonetary way during pregnancy. The current body of research on TANF addresses many issues of interest to the Congress but does not provide a comprehensive national picture of TANF. However, existing national data and data from state and local studies could be pieced together to develop a descriptive picture of what has happened to TANF participants in all 50 states. In addition, within a limited number of states and various cities and counties, existing data can be used to conduct analyses of TANF’s effects. 
National survey data can be used with data from HHS administrative records for descriptive analyses of TANF’s progress nationwide. HHS administrative data can be used for analyses within each of the 50 states, and national survey data can be analyzed for national trends. These analyses could be compared to examine the extent to which the employment experiences, for example, of current and former TANF recipients in individual states conform with or depart from the experiences of such individuals identified with national survey data. This comparison could be extended to the individual states and localities covered by the NSAF data, waiver and demonstration data, leavers data, and metropolitan and community-based evaluation data. While piecing the data together in this way would build on their strengths, each data type still has limitations. Specifically, national survey data provide national samples useful for comparing the lives of welfare recipients covered by welfare provisions implemented at different times. However, attrition or low response rates may affect the degree to which these samples represent all members of the nation’s low-income and TANF population. Within each of the 50 states, HHS administrative data can be analyzed to gain insight into current recipients’ use of noncash benefits, among other things, but the lack of standardized reporting requirements would complicate comparisons across states. Supplemental descriptive analyses for individual states can be done using NSAF survey data, leavers data, waiver and demonstration data, and metropolitan and community-based data. In addition, like the national survey data, many of these data represent multiple measures over time. However, these analyses in many cases can be generalized only to cities and counties and not to entire states. Existing data can also be analyzed to gain information about TANF’s effects.
Although the 1997 and 1999 NSAF survey samples do not include pre-TANF welfare recipients, the samples do include other populations, such as low-income families who do not participate in TANF, whose employment, earnings, and well-being can be compared with those of TANF recipients, assuming adequate sample sizes for both groups. Moreover, because NSAF has sample sizes in 13 states large enough to allow state-level analyses, the employment, earnings, and well-being of TANF recipients in those states can be considered in relation to the state’s TANF programs and policies. However, using the NSAF data for such analyses would require resolving the challenges to analyzing effects described earlier in this report. Similarly, although most of the metropolitan and community-based evaluation samples do not include pre-TANF welfare recipients, other populations represented in the study samples could be compared with TANF recipients. Finally, waiver and demonstration data can be analyzed to gain information about TANF’s effects, keeping in mind that this information is about the effects of programs and provisions often implemented prior to TANF and implemented in cities and counties rather than entire states. The data available for addressing TANF’s goals will provide useful information, but with some limitations. Given the costs, some limitations may be difficult to overcome. Our examination of the data raised three issues. First, for a comprehensive assessment of TANF, it is important to have data for a representative sample of TANF recipients and nonrecipients that allow for analyses of effect at the state level. The federal government has made an investment in national surveys, which either in whole or in part are intended to gather information about the lives of TANF recipients. One of these, the SPD, was funded as a means to gather data about TANF recipients.
For another, the SIPP, the Census Bureau added a special section of questions about welfare and reworded questions so that they would better capture respondents’ participation in state programs. However, even with these efforts, none of Census’ surveys currently being administered can be used for state-level analyses of TANF’s effects because of small sample sizes within individual states. In addition, the SPD has a high attrition rate. The Census Bureau plans to take steps to improve response to the SPD through intensive follow-up with survey dropouts to enlist their participation and through monetary incentives for future respondents to participate in the survey. However, the issue of small sample sizes at the state level will remain unresolved. Second, HHS has encouraged state agencies to study the effects of their TANF programs through the AFDC waiver requirement for experimental studies and subsequent research initiatives. Moreover, our examination of data indicates that, because of the variability in TANF program provisions across states, analysis of TANF’s effects at the state and local levels can be done with the greatest confidence. However, even when conducted at the state and local levels, studies designed to examine TANF’s effects tend to be costly, time-consuming, and impractical to implement in every state. In some cases, conducting an evaluation for an entire state is determined to be so expensive that data collection is limited to a portion of the state. For example, the evaluation of Vermont’s waiver program focused on 6 of 12 welfare service districts. The evaluation’s 42-month follow-up survey was administered to only these 6 district offices and, owing to cost constraints, included a subset of the sample for whom administrative records, rather than survey responses, were collected. 
Policymakers, federal and state officials, and the welfare reform research community will need to seek ways to balance the need for information about TANF’s effects with the resource demands of rigorous studies. Third, both qualitative and quantitative data may be needed to understand what has happened to former TANF recipients. Leavers are a difficult population to track, and, in some cases, using multiple methods of quantitative data collection has not necessarily increased the number of former recipients who could be located or who responded to surveys. In fact, in some of the studies we reviewed, the low rate of success in gathering data from these individuals makes the data’s usefulness questionable. Surveys that used only one mode of data collection, such as telephoning former TANF recipients, generally had the lowest response rates. Some leavers’ studies followed telephone surveys with personal interviews of those who could not be reached by phone or who did not respond. However, even the use of multiple modes of data collection did not always ensure high response rates. Given the difficulties inherent in collecting quantitative data from this group, other data collection strategies that use local communication networks to identify families as well as interviews of respondents in their homes may be needed to gain information about the lives of TANF leavers. In commenting on a draft of this report, HHS said that the report will be of help to the Congress and other interested parties. In its technical comments, HHS expressed concern that in highlighting the importance of statewide samples, we understated the value of data from local samples. In response to this concern, we have noted in the report not only that findings from local samples are important but also that, in some cases, they provide data only recently available from national surveys. We concur with HHS that a sample need not be statewide in order for findings to be useful. 
However, we have emphasized the value of data that can be generalized to the state level because of the Congress’ interest in a picture of TANF’s progress nationwide. HHS’ comments appear in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Honorable Tommy G. Thompson, Secretary of Health and Human Services; appropriate congressional committees; and other interested parties. We also will make copies available to others on request. If you or your staff have any questions about this report, please contact me on (202) 512-7202 or David D. Bellis on (415) 904-2272. Another GAO contact and staff acknowledgments are listed in appendix V. This appendix discusses in more detail our scope and methodology for identifying, selecting, and assessing studies and surveys that might provide data to help researchers as they seek to describe what has happened to recipients of Temporary Assistance for Needy Families (TANF) and to estimate the effect of welfare reform on them. Because no comprehensive list of data sources for welfare reform research exists, we used a judgmental sampling method for our assessment of data resources. We began our work by examining six key critiques of welfare reform research that had been issued, in draft or final form, by the fall of 1999. The six critiques listed in figure 8 both gave us insight into issues that will probably arise in assessing TANF and identified studies that may be potential sources of data for an assessment of TANF. We started the development of a list of data sources from three of the critiques—the Research Forum’s report and its related on-line database, the National Research Council’s interim report, and Peter Rossi’s paper. 
To ensure that this list was comprehensive, we consulted with officials at the Department of Health and Human Services (HHS) about important bodies of work in the welfare reform research field. We also conducted follow-up interviews with HHS project officers and experts in the welfare reform research community to ensure that we had identified the most relevant national surveys and studies, particularly those that might have data about family, marriage, and pregnancy issues. As a result of these discussions and an examination of the original list, we designed a judgmental sample of potential data sources for welfare reform research that included the following categories: national surveys and HHS’ TANF administrative data; studies that collected data about the major TANF subpopulations in three or more states or municipalities; studies of TANF leavers; HHS’ waiver evaluations; and studies listed on the websites of HHS’ Administration for Children and Families (ACF), HHS’ Office of the Assistant Secretary for Planning and Evaluation (ASPE), and the Welfare Information Network of the Finance Project. We then began to develop lists of the surveys and studies in each of the sample’s categories. The national surveys included in our list were the Current Population Survey (CPS), the National Longitudinal Survey of Youth (NLSY), the National Survey of America’s Families (NSAF), the Panel Study of Income Dynamics (PSID), the Survey of Income and Program Participation (SIPP), and the Survey of Program Dynamics (SPD). We used information from ASPE and from the National Conference of State Legislatures to identify leavers studies sponsored by HHS or states. Similarly, we used information from ACF to ensure that our list contained the body of research funded by ACF that focused on waivers implemented by state welfare agencies prior to TANF’s authorization. As we added items to the list, we continually checked to avoid any duplication. 
This comparison involved our judgment, as some lists were of projects or studies and others were of study reports. Because we relied on multiple reviews of the body of work undertaken in the welfare reform research community, we believe that the list of 443 entries we compiled included the key sources of data. We selected surveys and studies systematically from this list within each sample category. We were interested in surveys or studies that were as comprehensive as possible in geographic coverage and topics addressed. Thus, we selected all of the national surveys and the HHS administrative data. We also selected all studies on the original list that by their description appeared to have produced data concerning the major subpopulations affected by TANF in three or more states, municipalities, or counties. This resulted in 55 studies and surveys. We then selected studies that pertained to individual states in the following way. First we selected all leavers studies financed by ASPE. Of the leavers studies listed by the National Conference of State Legislatures and those mentioned in an article authored by Brauner and Loprest, we included only those that had not been included in our previous report or were not from a state that already had an ASPE-funded study. In states that had issued multiple reports for their leavers studies for people who left welfare in different years, we selected the most recent study. When a state had no ASPE-funded study or any listed by the National Conference of State Legislatures or Brauner and Loprest, but did have a report available on its Web site, we selected the Web report. Waiver studies generally produced several reports. We selected for review the most recently issued waiver report because the data topics examined were similar in the initial and later reports. 
After selecting these types of studies and surveys, we removed from our list studies that did not appear to contain data that could answer our research questions or that used data from one of the national surveys on our list. In summary, we excluded literature searches, reviews of research on state policies or programs, technical assistance projects focused on improving or evaluating information systems or databases, and studies based on data from a national survey that we had included in our list. A list of 239 studies remained. Finally, we obtained advice from five welfare experts about which of these 239 studies we should include. Ultimately, we selected 17 of these studies. In all, we judgmentally selected 141 national surveys and studies that yielded 187 data sets to review. A complete list of the national surveys and studies that we examined for data is provided in appendix II. Identifying data resources for a comprehensive assessment of TANF required criteria that could be used to assess data sets. The first step in this process was to express each of TANF’s goals as a research question. In looking at the goals themselves, it is evident that some express expected results—for example, that work and marriage will improve the well-being of low-income families. Assessing TANF’s progress toward these expected results required, in part, questions about TANF’s effects. However, some of TANF’s goals focus on its general purpose—for example, providing assistance to needy families. In this case, assessing TANF’s progress required research questions that are descriptive, that is, questions that ask what public assistance looks like under TANF. To translate TANF’s goals into research questions, we considered the nature of each of TANF’s goals and formulated questions to represent key issues the Congress will consider at reauthorization. 
As shown in figure 1, we created corresponding questions that asked for descriptions of what has happened under TANF, the effects of TANF, or both. We then specified the information, or data topics, necessary to address our research questions. We developed a data collection instrument that listed the data topics associated with each question and used the instrument to record the data topics found in each data set examined. It is important to note that what we identified as data topics were not equivalent to specific measures. In other words, our coding captured the fact that a certain data source collected measures on employment. It did not capture the specific manner in which employment was measured. In addition to data topics, we collected such pertinent information as the unit of analysis, population, sampling method, sample size, dates covered by data collection, and design of the study for which data were gathered. We recorded response rates and attrition rates when they were relevant given the method of data collection. We also looked to see if data had been or were being collected for a comparison or control group. To summarize our findings, we identified data categories related to TANF’s goals, some of which represented the research questions and others of which were more narrowly focused. The narrowly focused data categories represented combinations of data topics, such as employment and earnings or family and child well-being, that were associated with the research questions. We took this approach for a variety of reasons. First, in making a judgment that data were available to address particular questions, we required that certain data topics be present in combination and, for effect questions, that the data were collected using control groups or comparison groups. However, a data source could provide relevant data topics, even though the data topic could not be used to address the particular question we had posed. 
Rather than discount the value of these data topics, we decided to note their availability. Second, in many cases, the same data topics and data categories were being used to address different questions. For example, as figure 1 shows, the data categories associated with employment were related to 5 of our 11 questions. Presenting our findings in terms of data categories allowed us to report on all of the data topics, including those that were not available in the combinations needed to address a research question. Finally, to assess how the data might be used for an assessment of TANF, we considered three attributes of the data. We considered the geographic scope of the sample; the data topics included in the data set; and whether or not the data could be used for descriptive analyses or analyses of effect, given the design of the study. In determining the geographic scope of the sample, we looked at the sampling method and sample size, as well as at response rates and attrition rates, since both affect how well a sample represents a population. We relied on the design of the study, the data topics included in a data set, and how researchers had used the data to make a judgment about whether the data could be used for descriptive analyses or analyses of effect. We coded data as being usable for analyses of effect when they came from a study that made comparisons between groups, one of which served as the treatment group and the other, representing the absence of the treatment, as the comparison group. In deciding whether a study included a treatment and a comparison group, we recognized that such groups could be formed through experimental designs, quasi-experimental designs, or statistical modeling. Because this assessment is based on a judgmental sample and the data needs of an assessment of TANF’s progress are derived from TANF’s legislative objectives, several study limitations should be considered. 
First, while every attempt was made to be comprehensive in sample design and selection, some relevant data sources may have been omitted. Second, framing the data needs for an assessment of TANF’s progress around TANF’s objectives, which focus on the behavior and well-being of low-income children and families, excluded from consideration the bodies of welfare reform research concerned with institutions, including studies of TANF’s implementation at the state and local levels and descriptions of TANF program policies and practices. Third, the study’s focus on identification of quantitative data resulted in our eliminating data from most studies that used qualitative data collection methods. Fourth, because our bibliographic sources for surveys and studies included both existing and planned surveys and studies, complete documentation for data sets was not always available. Finally, because our coding focused on whether a certain data source collected measures on specific topics, but not on the precise measures used, we did not assess whether measures were comparable across studies. 
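The coding rules described in this appendix (required data topics present in combination, plus a treatment and comparison group for questions of effect) amount to a simple classification routine. The following is a minimal sketch of that logic; the function names, topic labels, and record layout are hypothetical illustrations, not the actual data collection instrument:

```python
def usable_for_effect_analysis(data_set, required_topics):
    """Effect questions require the topic combination AND a comparison
    (or control) group, which may have been formed experimentally,
    quasi-experimentally, or through statistical modeling."""
    has_topics = set(required_topics) <= set(data_set.get("topics", []))
    has_comparison = bool(data_set.get("comparison_group", False))
    return has_topics and has_comparison

def usable_for_description(data_set, required_topics):
    """Descriptive questions require only the topic combination."""
    return set(required_topics) <= set(data_set.get("topics", []))

# Hypothetical records: a waiver evaluation with a control group,
# and a leavers survey without one.
waiver_study = {"topics": ["employment", "earnings"], "comparison_group": True}
leavers_survey = {"topics": ["employment"], "comparison_group": False}
```

Under this sketch, the waiver study could support both descriptive analyses and analyses of effect for employment and earnings, while the leavers survey could support only descriptive analyses.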
The Family Transition Program: Implementation and Three-Year Impact of Florida’s Initial Time-Limited Welfare Program (Administrative; Survey)
Iowa’s Family Investment Program: Impacts During the First 3-½ Years of Welfare Reform (Administrative)
Reforming Welfare and Rewarding Work: Final Report on the Minnesota Family Investment Program (Administrative; Surveys (2))

In addition to those named above, the following individuals made important contributions to this report: Patrick DiBattista designed the data collection instrument used to assess the 187 data sets reviewed, oversaw data collection, and designed and conducted the analysis of the data's strengths and weaknesses; Andrea Sykes played a major role in data collection and developed the analysis of the data's availability to address TANF's goals; Stephen Langley III also played a major role in data collection, provided consultation on multivariate analysis issues, and prepared the report's methodology appendix; James Wright provided guidance on study design and measurement; and Gale Harris, Kathryn Larin, and Heather McCallum provided consultation on TANF policy and implementation issues.
GAO commented on the federal government's ability to assess the goals of the Temporary Assistance for Needy Families (TANF) program using national, state, and local data. These data address the goals to differing degrees. National data, which include data collected in national surveys and information that all states report to the Department of Health and Human Services (HHS), include extensive information on TANF's two goals of providing assistance to needy families and ending dependency on government benefits but have limited information on promoting family formation. The data pertain to such issues as changes in TANF workloads, recipients' participation in work activities, employment status and earnings, and family well-being. Although there are national data on the incidence of out-of-wedlock births and marriage among TANF recipients and other low-income families, these data have only recently included information on states' strategies to prevent out-of-wedlock pregnancies or promote family formation. Studies of welfare reform at the state and local levels contain the same kind of information as national data, but they also include information about areas only recently covered by national data. Much of these data come from waiver evaluations--evaluations conducted in states that experimented with their welfare programs, under a waiver from HHS, prior to TANF. The usefulness of existing data for assessing TANF's progress varies. In general, the need for information about TANF's progress will have to be balanced against the challenges of rigorous data collection from the low-income population.
The U.S. export control system is about managing risk; exports to some countries involve less risk than to other countries and exports of some items involve less risk than others. Under United States law, the President has the authority to control and require licenses for the export of items that may pose a national security or foreign policy concern. The President also has the authority to remove or revise those controls as U.S. concerns and interests change. In 1995, as a continuation of changes begun in the 1980s, the executive branch reviewed export controls on computer exports to determine how changes in computer technology and its military applications should affect U.S. export control regulations. In announcing its January 1996 change to HPC controls, the executive branch stated that one goal of the revised export controls was to permit the government to tailor control levels and licensing conditions to the national security or proliferation risk posed at a specific destination. According to the Commerce Department, the key to effective export controls is setting control levels above the level of foreign availability of materials of concern. The Export Administration Act (EAA) of 1979 describes foreign availability as goods or technology available without restriction to controlled destinations from sources outside the United States in sufficient quantity and comparable quality to those produced in the United States so as to render the controls ineffective in achieving their purposes. Foreign availability is also sometimes associated with the indigenous capability of foreign sources to produce their own HPCs, but this meaning does not meet all the EAA criteria. The 1996 revision of HPC export control policy removed license requirements for most HPC exports with performance levels up to 2,000 MTOPS—an increase from the previous level of 1,500 MTOPS. 
For purposes of export controls, countries were organized into four “computer tiers,” with each tier after tier 1 representing a successively higher level of concern to U.S. security interests. The policy placed no license requirements on tier 1 countries, primarily Western European countries and Japan. Exports of HPCs above 10,000 MTOPS to tier 2 countries in Asia, Africa, Latin America, and Central and Eastern Europe would continue to require licenses. A dual-control system was established for tier 3 countries, such as Russia and China. For these countries, HPCs up to 7,000 MTOPS could be exported to civilian end users without a license, while exports at and above 2,000 MTOPS to end users of concern for military or proliferation of weapons of mass destruction reasons required a license. Exports of HPCs above 7,000 MTOPS to civilian end users also required a license. HPC exports to terrorist countries in tier 4 were essentially prohibited. The executive branch has determined that HPCs are important for designing or improving advanced nuclear explosives and advanced conventional weapons capabilities. It has identified high performance computing as having applications in such national defense areas as nuclear weapons programs, cryptology, conventional weapons, and military operations. According to DOD, high performance computing is an enabling technology for modern tactical and strategic warfare and is also important in the development, deployment, and use of weapons of mass destruction. It has also played a major role in the ability of the United States to maintain and increase the technological superiority of its warfighting support systems. HPCs have particular benefits for military operations, such as battle management and target engagement, and they are also important in meeting joint warfighting objectives like joint theater missile defense, information superiority, and electronic warfare. 
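The four-tier control scheme described above is, in effect, a small decision table. The following is a minimal illustrative sketch; the function and its parameters are hypothetical, while the MTOPS thresholds and the end-user distinction for tier 3 are those of the 1996 policy as described in this report:

```python
def license_required(tier, mtops, military_end_user=False):
    """Sketch of the 1996 HPC licensing rules; tiers run from 1
    (least concern) to 4 (terrorist countries)."""
    if tier == 1:
        return False               # no license requirements
    if tier == 2:
        return mtops > 10_000      # license needed only above 10,000 MTOPS
    if tier == 3:
        if military_end_user:      # end users of military or proliferation concern
            return mtops >= 2_000  # license at and above 2,000 MTOPS
        return mtops > 7_000       # civilian end users: license above 7,000 MTOPS
    if tier == 4:
        return True                # exports essentially prohibited
    raise ValueError("tier must be 1-4")
```

This sketch summarizes only the thresholds; the actual policy, implemented through the Export Administration Regulations, also distinguished among item categories and end uses beyond what is captured here.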
However, the executive branch has not, with the exception of nuclear weapons, identified how, or at what performance levels, countries of concern may use HPCs to advance their own military capabilities. The House Committee on National Security in December 1997 directed DOE and DOD to assess the national security risks of exporting HPCs with performance levels between 2,000 and 7,000 MTOPS to tier 3 countries. In June 1998, DOE concluded its study on how countries like China, India, and Pakistan can use HPCs to improve their nuclear programs. According to the study, the impact of HPC acquisition depends on the complexity of the weapon being developed and, even more importantly, on the availability of high-quality, relevant test data. The study concluded that “the acquisition and application of HPCs to nuclear weapons development would have the greatest potential impact on the Chinese nuclear program—particularly in the event of a ban on all nuclear weapons testing.” Also, India and Pakistan may now be able to make better use of HPCs in the 1,000 to 4,000 MTOPS range for their nuclear weapons programs because of the testing data they acquired in May 1998 from underground detonations of nuclear devices, according to the DOE report. The potential contribution to the Russian nuclear program is less significant because of its robust nuclear testing experience, but HPCs can make a contribution to Russia’s confidence in the reliability of its nuclear stockpile. An emerging nuclear state is likely to be able to produce only rudimentary nuclear weapons of comparatively simple designs for which personal computers are adequate. We were told that DOD’s study on national security impacts has not been completed. We attempted to identify national security concerns over other countries’ use of HPCs for conventional weapons development. 
However, officials from DOD and other relevant executive branch agencies did not have information on how specific countries would use HPCs for missile, chemical, biological, and conventional weapons development. Based on EAA’s description of foreign availability, we found that subsidiaries of U.S. companies dominate overseas sales of HPCs. According to U.S. HPC exporters, there were no instances where U.S. companies had lost sales to foreign HPC vendors in tier 3 countries. The U.S. companies primarily compete against one another, with limited competition from foreign suppliers in Japan and Germany. We also obtained information on the capability of certain tier 3 countries to build their own HPCs and found it to be limited. Tier 3 countries are not as capable as the major HPC-supplier countries of producing machines in comparable quantity and of comparable quality and power. The only global competitors for general computer technology are three Japanese companies, two of which compete primarily for sales of high-end computers—systems sold in small volumes and performing at advanced levels. Two of the companies reported no exports to tier 3 countries, while the third reported some exports on a regional, rather than country, basis. One German company sells HPCs primarily in Europe but has reported a small number of sales of its HPCs over 2,000 MTOPS to tier 3 countries. One British company said it is capable of producing HPCs above 2,000 MTOPS, but company officials said it has never sold a system outside the European Union. Our findings in this regard were similar to those in a 1995 Commerce Department study of the HPC global market, which showed that American dominance prevailed at that time, as well. The study observed that American HPC manufacturers controlled the market worldwide, followed by Japanese companies. It also found that European companies controlled about 30 percent of the European market and were not competitive outside Europe. 
Other HPC suppliers also have restrictions on their exports. Since 1984, the United States and Japan have been parties to a bilateral arrangement, referred to as the “Supercomputer Regime,” to coordinate their export controls on HPCs. Also, both Japan and Germany, like the United States, are signatories to the Wassenaar Arrangement and have regulations that generally appear to afford levels of protection similar to U.S. regulations for their own and for U.S.-licensed HPCs. For example, both countries place export controls on sales of computers over 2,000 MTOPS to specified destinations, according to German and Japanese officials. However, foreign government officials said that they do not enforce U.S. reexport controls on unlicensed U.S. HPCs. A study of German export controls noted that German regulations contain no special provisions on the reexport of U.S.-origin goods. According to German government officials, the exporter is responsible for knowing the reexport requirements of the HPC’s country of origin. We could not ascertain whether improper reexports of HPCs occurred from tier 1 countries. Only one German company reported several sales to tier 3 countries of HPCs over 2,000 MTOPS, and U.S. HPC subsidiaries reported no loss of sales due to foreign competition. Officials of U.S. HPC subsidiaries explained that they primarily compete for sales in local markets with other U.S. HPC subsidiaries. None of these officials identified lost HPC sales to other foreign vendors in those markets. Further, none claimed to be losing sales to foreign vendors because of delays in delivery resulting from the subsidiary’s compliance with U.S. export control regulations. Because some U.S. government and HPC industry officials consider indigenous capability to build HPCs a form of foreign availability, we examined such capabilities for tier 3 countries. 
Based on studies and views of specialists, we found that the capabilities of China, India, and Russia to build their own HPCs still lag well behind those of the United States, Japan, and European countries. Although details are not well-known about HPC developments in each of these tier 3 countries, most officials said and studies show that each country still produces machines in small quantities and of lower quality and power than U.S., Japanese, and European computers. For example: China has produced at least two different types of HPCs, the Galaxy and Dawning series, both based on U.S. technology and each believed to have an initial performance level of about 2,500 MTOPS. Although China has announced its latest Galaxy’s capability at 13,000 MTOPS, U.S. government officials have not confirmed this report. India has produced a series of computers called Param, which are based on U.S. microprocessors and are believed by U.S. DOE officials to be capable of performing at about 2,000 MTOPS. These officials were denied access to test the computers’ performance. Over the past 3 decades Russia has endeavored to develop commercially viable HPCs using both indigenously developed and U.S. microprocessors, but has suffered economic problems and lacks customers. According to one DOE official, Russia has never built a computer running better than 2,000 MTOPS, and various observers believe Russia to be 3 to 10 years behind the West in developing computers. Commerce and DOD each provided one set of general written comments for both this report and our report entitled, Export Controls: Information On The Decision to Revise High Performance Computer Controls (GAO/NSIAD-98-196, Sept. 16, 1998). Some of those general comments do not relate to this report. Therefore, we respond to them in the other report. General comments relevant to this report are addressed below. Additional specific comments provided by Commerce on this report are addressed in appendix II. 
In its written comments, Commerce said that the report’s scope should be expanded to better reflect the rationale that led to the decision to change computer export control policy “from a relic of the Cold War to one more in tune with today’s technology and international security environment.” This report responds to the scope of work required by Public Law 105-85 (Nov. 18, 1997), that we evaluate the current foreign availability of HPCs and their national security implications. Therefore, this report does not focus on the 1995 decisions by the Department of Commerce. Our companion report, referred to above, assesses the basis for the executive branch’s revision of HPC export controls. Commerce commented that our analysis of foreign availability as an element of the controllability of HPCs was too narrow, stating that foreign availability is not an adequate measure of the problem. Commerce stated that this “Cold War concept” makes little sense today, given the permeability and increased globalization of markets. We agree that rapid technological advancements in the computer industry have made the controllability of HPC exports a more difficult problem. However, we disagree that foreign availability is an outdated Cold War concept that has no relevance in today’s environment. While threats to U.S. security may have changed, they have not been eliminated. Commerce itself recognized this in its March 1998 annual report to the Congress, which stated that “the key to effective export controls is setting control levels above foreign availability.” Moreover, the concept of foreign availability, as opposed to Commerce’s notion of “worldwide” availability, is still described in EAA and Export Administration Regulations as a factor to be considered in export control policy. Commerce also commented that the need to control the export of HPCs because of their importance for national security applications is limited. 
It stated that many national security applications can be performed satisfactorily on uncontrollable low-level technology, and that computers are not a “choke point” for military production. Commerce said that having access to HPCs alone will not improve a country’s military-industrial capabilities. Commerce asserted that the 1995 decision was based on a variety of research leading to the conclusion that computing power is a secondary consideration for many applications of national security concern. We asked Commerce for its research evidence, but it cited only a 1995 Stanford study used in the decision to revise HPC export controls. Moreover, Commerce’s position on this matter is not consistent with that of DOD. DOD, in its Militarily Critical Technologies List, has determined that high performance computing is an enabling technology for modern tactical and strategic warfare and is also important in the development, deployment, and use of weapons of mass destruction. High performance computing has also played a major role in the ability of the United States to maintain and increase the technological superiority of its war-fighting support systems. DOD has noted in its High Performance Computing Modernization Program annual plan that the use of HPC technology has led to lower costs for system deployment and improved the effectiveness of complex weapon systems. DOD further stated that as it transitions its weapons system design and test process to rely more heavily on modeling and simulation, the nation can expect many more examples of the profound effects that the HPC capability has on both military and civilian applications. Furthermore, we note that the concept of choke point is not a standard established in U.S. law or regulation for reviewing dual-use exports to sensitive end users for proliferation reasons. In its comments, DOD stated that our report inaccurately characterized DOD as not considering the threats associated with HPC exports. 
DOD said that in 1995 it “considered” the security risks associated with the export of HPCs to countries of national security and proliferation concern. What our report actually states is that (1) except for nuclear weapons, the executive branch has not identified how and at what performance levels specific countries of concern may use HPCs for national security applications and (2) the executive branch did not undertake a threat analysis of providing HPCs to countries of concern. DOD provided no new documentation to demonstrate how it “considered” these risks. As DOD officials stated during our review, no threat assessment or assessment of the national security impact of allowing HPCs to go to particular countries of concern and of what military advantages such countries could achieve had been done in 1995. In fact, an April 1998 Stanford study on HPC export controls also noted that identifying which countries could use HPCs to pursue which military applications remained a critical issue on which the executive branch provided little information. The Arms Control and Disarmament Agency (ACDA) provided oral comments on this report and generally agreed with it. However, it disagreed with the statement that “according to the Commerce Department, the key to effective export controls is setting control levels above the level of foreign availability of materials of concern.” ACDA stressed that this is Commerce’s position only and not the view of the entire executive branch. ACDA said that in its view (1) it is difficult to determine the foreign availability of HPCs and (2) the United States helps create foreign availability through the transfer of computers and computer parts. The Departments of State and Energy had no comments on a draft of this report. Our scope and methodology are in appendix I. Commerce’s and DOD’s comments are reprinted in appendixes II and III, respectively, along with an evaluation of each. 
We conducted our review between December 1997 and June 1998 in accordance with generally accepted government auditing standards. Please contact me on (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. Section 1214 of the Fiscal Year 1998 National Defense Authorization Act (P.L. 105-85) required that we review the national security risks relating to the sale of computers with a composite theoretical performance of between 2,000 and 7,000 millions of theoretical operations per second (MTOPS) to end users in tier 3 countries. Accordingly, we examined the executive branch’s actions to assess the risks of these sales. As required by the act, we also reviewed the foreign availability of computers with performance levels at 2,000 to 7,000 MTOPS and the impact on U.S. exporters of foreign sales of these computers to tier 3 countries. To determine the executive branch’s actions to assess or analyze the national security risks of allowing high performance computers (HPC) to be provided to countries of proliferation and military concern, we reviewed the Department of Defense (DOD) and the Department of Energy (DOE) documents on how HPCs are being used for nuclear and military applications. We discussed high performance computing for both U.S. and foreign nuclear weapons programs with DOE officials in Washington, D.C., and at the Lawrence Livermore, Los Alamos, and Sandia National Laboratories. We also met with officials of the DOD HPC Modernization Office and other officials within the Under Secretary of Defense for Acquisition and Technology, the Office of the Secretary of Defense, the Joint Chiefs of Staff, and the intelligence community to discuss how HPCs are being utilized for weapons design, testing and evaluation, and other military applications. 
Additionally, we met with DOD and Institute of Defense Analyses officials to discuss the basis for identifying high performance computing on the Militarily Critical Technologies List, a compendium of technologies identified by DOD as critical for maintaining U.S. military and technological superiority. We also reviewed intelligence reports on the use of high performance computing for developing weapons of mass destruction. To determine foreign availability of HPCs, we reviewed the Export Administration Act (EAA) and the Export Administration Regulations for criteria and a description of the meaning of the term. We then reviewed market research data from an independent computer research organization. We also reviewed lists, brochures, and marketing information from major U.S. and foreign HPC manufacturers in France (Bull, SA), Germany (Siemens Nixdorf Informationssysteme AG and Parsytec Computer GmbH), and the United Kingdom (Quadrics Supercomputers World, Limited), and met with them to discuss their existing and projected product lines. We also obtained market data, as available, from three Japanese HPC manufacturers. Furthermore, we met with government officials in China, France, Germany, Singapore, South Korea, and the United Kingdom to discuss each country's indigenous capability to produce HPCs. We also obtained information from the Japanese government on its export control policies. In addition, we obtained and analyzed from two Commerce Department databases (1) worldwide export licensing application data for fiscal years 1994-97 and (2) export data provided to the Department by computer exporters for all American HPC exports between January 1996 and October 1997. We also reviewed a 1995 Commerce Department study on the worldwide computer market to identify foreign competition in the HPC market prior to the export control revision. To identify similarities and differences between U.S.
and foreign government HPC export controls, we discussed with officials of the U.S. embassies and host governments information on foreign government export controls for HPCs and the extent of cooperation between U.S. and host government authorities on investigations of export control violations and any diversions of HPCs to sensitive end users. We also reviewed foreign government regulations, where available, and both foreign government and independent reports on each country's export control system. To obtain information on the impact of HPC sales on U.S. exporters, we interviewed officials of American HPC firms and their subsidiaries and U.S. and foreign government officials. The following are GAO's comments on the Department of Commerce's letter, dated August 7, 1998. Commerce provided one set of written comments for this report and for a companion report, in which we discuss our analysis of the basis for the 1995 executive branch decision to revise export controls for HPCs. We addressed Commerce's general comments relevant to this report on page 9 and its specific comments below. 1. Commerce stated that one key to effective export controls is setting control limits of items of concern above that which is widely available throughout the world. However, this wording differs from documentary evidence previously provided to us and to the Congress. In successive Export Administration Annual Reports, the Commerce Department stated that "the key to effective HPC export controls is setting control levels above foreign availability. . ." In addition, Commerce has provided us with no empirical evidence to demonstrate the "widespread availability" of HPCs, either through suppliers in Europe and Asia or a secondary market. 2. Commerce commented that a number of foreign manufacturers indigenously produce HPCs that compete with those of the United States. Our information does not support Commerce's position on all of these manufacturers.
For example, our visit to government and commercial sources in Singapore indicated that the country does not now have the capabilities to produce HPCs. We asked Commerce to provide data to support its assertion on foreign manufacturers, but it cited studies that were conducted in 1995 and that did not address or use criteria related to "foreign availability." As stated in our report, we gathered data from multiple government and computer industry sources to find companies in other countries that met the terms of foreign availability. We met with major U.S. HPC companies in the United States, as well as with their overseas subsidiaries in a number of countries we visited in 1998, to discuss foreign HPC manufacturers that the U.S. companies considered as providing foreign availability and competition. We found few. Throughout Europe and Asia, U.S. computer subsidiary officials stated that their competition is primarily other U.S. computer subsidiaries and, to a lesser extent, Japanese companies. In addition, although requested, Commerce did not provide documentary evidence to confirm its asserted capabilities of India's HPCs and uses. 3. Commerce stated that worldwide availability of computers indicates that there is a large installed base of systems in the tens of thousands or even millions. Commerce further stated that license requirements will not prevent diversion of HPCs unless realistic control levels are set that can be enforced effectively. While we agree, in principle, that increasing numbers of HPCs make controllability more difficult, as our recommendation in our companion report suggests, a realistic assessment of when an item is "uncontrollable" would require an analysis of (1) actual data, (2) estimated costs of enforcing controls, and (3) pros and cons of alternatives—such as revised regulatory procedures—that might be considered to extend controls. Commerce did not perform such an analysis before revising export controls in 1995.
In addition, although we requested that Commerce provide documentary evidence for its statement that there is a large installed base of HPCs in the millions, it did not provide such evidence. 4. Commerce stated that most European governments do not enforce U.S. export control restrictions on reexport of U.S.-supplied HPCs. We agree that at least those European governments that we visited hold this position. However, although requested, Commerce provided no evidence to support its statement that the government of the United Kingdom has instructed its exporters to ignore U.S. reexport controls. The following is GAO's comment on DOD's letter dated July 16, 1998. 1. DOD provided one set of written comments for this report and for a companion report, in which we discuss our analysis of the basis for the 1995 executive branch decision to revise export controls for HPCs. We addressed DOD's comments relevant to this report on page 8. Hai Tran
Pursuant to a legislative requirement, GAO reviewed the efforts by the executive branch to determine national security risks associated with exports of high performance computers (HPC). GAO noted that: (1) the executive branch has identified high performance computing as having applications in such national defense areas as nuclear weapons programs, cryptology, conventional weapons, and military operations; (2) however, except for nuclear weapons, the executive branch has not identified how and at what performance levels specific countries of concern may use HPCs for national defense applications--an important factor in assessing risks of HPC sales; (3) a Department of Energy study on nuclear weapons was completed in June 1998; (4) the study shows that nuclear weapons programs in tier 3 countries (which pose some national security and nuclear proliferation risks to the United States), especially those of China, India, and Pakistan, could benefit from the acquisition of HPC capabilities; (5) the executive branch has only recently begun to identify how specific countries of concern would use HPCs for nonnuclear national defense applications; (6) to date, a Department of Defense study on this matter begun in early 1998 is not completed; (7) with regard to foreign availability of HPCs, GAO found that subsidiaries of U.S. computer manufacturers dominate the overseas HPC market and they must comply with U.S. controls; (8) three Japanese companies are global competitors of U.S. manufacturers, two of which told GAO that they had no sales to tier 3 countries; (9) the third company did not provide data on such sales in a format that was usable for GAO's analysis; (10) two of the Japanese companies primarily compete with U.S. manufacturers for sales of high-end HPCs at about 20,000 millions of theoretical operations per second (MTOPS) and above; (11) two other manufacturers, one in Germany and one in the United Kingdom, also compete with U.S. 
HPC suppliers, but primarily within Europe; (12) only the German company has sold HPCs to tier 3 countries; (13) Japan, Germany, and the United Kingdom each have export controls on HPCs similar to those of the United States, according to foreign government officials; (14) because there is limited competition from foreign HPC manufacturers and U.S. manufacturers reported no lost sales to foreign competition in tier 3 countries, GAO concluded that foreign suppliers of HPCs had no impact on sales by U.S. exporters; (15) in addition, Russia, China, and India have developed HPCs, but the capabilities of their HPCs are believed to be limited; (16) thus, GAO's analysis suggests that HPCs over 2,000 MTOPS are not available to tier 3 countries without restriction from foreign sources.
The OPM 2004 FHCS results and OPM's 2005 follow-up focus group discussions suggest that information is not cascading effectively from top leadership throughout the organization. Further, according to the summary reports of OPM's follow-up focus group discussions, overall communication was selected by employees as one of the most important areas to address. Some focus group participants said that managers and employees were unaware of what is going on in the organization due to a lack of internal and cross-divisional communication. Focus group participants also described not knowing where the agency is heading and not having a clear understanding of how their activities aligned with the overall vision and mission of the agency. As figure 1 shows, fewer employees below the SES level, both at OPM and in the rest of government, reported being satisfied with the information they receive. Further, there were significantly fewer employees at OPM, especially in the GS-1 to GS-12 range, reporting "satisfaction with the information they receive from management on what's going on in the organization" when compared with the rest of the government. On the other hand, significantly more SES employees at OPM indicated satisfaction with the "information they were receiving from management" than SES employees at all the other government agencies participating in the 2004 FHCS. A similar gap between OPM SES and GS-level employees, as well as for their relative counterparts from the rest of government, is evident when employees were asked if they agreed that "managers promote communication among different work units." OPM employees also expressed concerns about their senior leaders. As shown in figure 1, roughly two-thirds of OPM employees, as well as employees in the rest of government, indicated that their immediate supervisors or team leaders are doing a good or very good job. Employee perceptions of senior-level leadership were not as positive, however.
When survey respondents were asked if they agreed with the statement "I have a high level of respect for my organization's senior leaders," nearly twice as many OPM SES employees agreed with this statement as compared with OPM GS-level employees. Survey respondents were also asked if they were "satisfied with the policies and practices of your senior leaders," and OPM SES employees again agreed with this statement at more than twice the level of OPM GS-level employees. For both items, the percent of OPM GS-level respondents agreeing with these statements tends to be lower than for their counterparts in the rest of government. A similar pattern of OPM SES and OPM GS-level responses can be seen in figure 1 for the percent of employees agreeing with the statement "leaders generate high levels of motivation and commitment in the workforce." OPM's analysis of responses to this question by its divisions and offices shows that the Human Capital Leadership and Merit System Accountability (HCLMSA) division had the lowest positive and largest negative response of any division, at about 28 percent and 51 percent, respectively. This issue of leaders generating motivation and commitment was selected by all six of the HCLMSA focus groups as one of the most important issues that OPM needs to address. Because the HCLMSA division is OPM's frontline organization that partners with agencies to achieve human capital success by providing oversight and leadership, it will play a key role in OPM initiatives to implement human capital reform—so it will need effective leadership to guide its transformation. OPM is clearly aware of the most critical issues for its agency leaders to address, such as the lack of overall and cross-divisional communication, issues related to employee views of senior management, and obtaining employee input to individual work plans linked to the agency strategic plan.
Based on OPM’s May 2006 action plans, the agency is planning to improve communication through such means as “visits to OPM field locations, brown bag lunches with the Director, an email box where employees can make suggestions on more efficient and effective ways of doing business, Web Casts, and employee meetings.” According to the May 11, 2006 memo from OPM’s CHCO to Director Springer, OPM has released several messages to employees regarding steps that it will be taking to improve communications agencywide and to address each of the specific critical issues within individual organizations of the agency. OPM officials told us that many of these actions have already occurred, such as senior executives visiting field locations. To improve its cross-divisional communication, OPM has developed and posted a functional organization directory on its internal website, which it has accomplished almost a month ahead of schedule. To address employee concerns regarding views of senior leaders, OPM is establishing a process in all divisions to solicit employee input on various initiatives and setting aside “open door” time for employees to speak with their managers. Furthermore, OPM has created an action plan to help employees better understand how their work fits into the overall mission of the agency by providing a mechanism to increase employee input to work plans related to its strategic plan. As I have testified on many occasions, in recent years GAO has learned a great deal about the challenges and opportunities that characterize organizational transformation. Several such lessons are of particular relevance to today's discussion. 
For example, GAO has recognized that soliciting and acting on internal feedback, such as that obtained through employee surveys, provides a key source of information on how well an organization is developing, supporting, using, and leading staff, as well as how internal operations are functioning and meeting employee needs as they carry out their mission. OPM's practices in this area are based in part on GAO's experience and include efforts to gain insight into employee perceptions of leadership and explicit follow-up activities to address identified concerns. OPM's planned actions are important steps in the right direction. Moving forward, as OPM implements its action plans to address issues of communication and motivation, it is important that it frequently communicate with employees on the progress of each of its planned actions and how these changes will affect them. OPM should also communicate any challenges or delays faced in its planned actions as soon as possible and the reasons why any changes to plans might be made. The 2006 FHCS, deployed just last month, will provide an initial indication of the extent to which the new initiatives are responding to employee concerns. A high-performance organization needs a dynamic, results-oriented workforce with the requisite talents, multidisciplinary knowledge, and up-to-date skills to ensure that it is equipped to accomplish its mission and achieve its goals. We have reported that acquiring and retaining a workforce with the appropriate knowledge and skills demands that agencies improve their recruiting, hiring, development, and retention approaches so that they can compete for and retain talented people. Similar to other agencies, OPM faces challenges in recruiting and retaining a high-quality, diverse workforce, and these challenges could limit OPM's capacity to accomplish its current mission, which includes in part leading other agencies in addressing their own recruitment and retention challenges.
Further, if OPM is to lead governmentwide human capital reform and transition from its role as rulemaker, enforcer, and independent agent to more of a consultant, toolmaker, and strategic partner, it should identify the skills and competencies of the new OPM, determine any skill and competency gaps, and develop specific steps to fill those gaps. The FHCS shows that OPM employees identified several issues related to its current workforce: Workforce skills. Some OPM employees were concerned about a lack of skills among OPM's current workforce. Our analysis of the 2004 FHCS shows that 67 percent of OPM employees agreed that "the workforce has the job relevant knowledge and skills necessary to accomplish organizational goals" compared with 74 percent of employees from the rest of government. Among OPM's divisions, HCLMSA had the lowest rate of agreement and highest rate of disagreement with the above statement at, respectively, 25 percent and 59 percent. This division provides leadership to agencies in their human capital transformation efforts. If HCLMSA lacks the knowledge and skills necessary to accomplish OPM's current organizational goals, the division may have difficulty managing the additional responsibilities of leading and implementing future governmentwide human capital reform. Agencies are also concerned about OPM's current workforce capacity. We spoke with agency CHCOs, HR directors, and their staffs about OPM's current capacity, and they expressed concern about whether OPM has the technical expertise needed to provide timely and accurate human capital guidance and advice. For example, agency officials said that the perceived lack of federal human resource expertise among some OPM Human Capital Officers (HCO) makes it difficult for them to assist agencies when communicating policy questions to appropriate OPM employees.
For example, an HR director told us that their agency contacted the responsible HCO about the Outstanding Scholars Program and did not get a response from OPM for two to three weeks. When OPM finally responded, it said that each agency was deciding how to administer the program. In the end, the agency's General Counsel Office had to contact another agency to learn how it administered the program. Many CHCOs and human resource directors told us they believed that OPM's expertise has declined over the last decade, while noting that OPM is facing many of the same personnel issues as all federal agencies regarding the loss of federal human capital talent and institutional knowledge. OPM's ability to lead and oversee human capital management policy changes that result from potential human capital reform legislation could be affected by its internal capacity and ability to maintain an effective leadership team, as well as an effective workforce. CHCOs and human resource directors expressed concern about the loss of OPM employees with technical expertise that will be needed to effectively assist agencies with future human capital efforts. One CHCO believed OPM's capacity is dependent upon a few key employees, in particular in the area of innovative pay and compensation approaches, adding that the potential loss of these employees could create a tipping point that severely damages OPM's capacity. Moreover, agencies believed that the human capital reform efforts of the Departments of Defense (DOD) and Homeland Security severely taxed OPM technical resources, specifically pay and compensation employees. Building the skills and knowledge of its workforce provides OPM with an opportunity to streamline decision making to appropriate organization levels. The FHCS includes one question on employee empowerment. The 40 percent of OPM employees who reported a "feeling of personal empowerment with respect to work processes" was close to the 43 percent from the rest of government.
Although these results do not differ markedly from those in the rest of government, this item was selected by a majority of participants in the focus groups as one of the most important issues that OPM needs to address. Some participants said decision making is too centralized at the top without delegating authority to managers, supervisors, and employees. Taken together, these survey and focus group results suggest that the majority of OPM employees do not feel empowered to accomplish their tasks. Having delegated authorities gives employees the opportunity to look at customer needs in an integrated way and effectively respond to those needs, and it can also benefit agency operations by streamlining processes. Furthermore, such delegation to frontline employees gives managers greater opportunities to concentrate on systematic, cross-cutting problems or policy-level issues. In April 2006, OPM began taking steps to delegate more authority to lower-level employees, and Associate Directors are currently reviewing redelegations within their organizations. Recruiting. Similar to most federal agencies, OPM may have difficulty recruiting new talent. For example, 47 percent of OPM employees who perform supervisory functions agreed with the statement that their "work unit is able to recruit people with the right skills," which is similar to the 45 percent of supervisors from the rest of government. The OPM CHCO told us that HR specialist positions are difficult to fill now. The work of HR specialists ranges across policy development, consultation and agency outreach, and operational recruitment and staffing activities. This is noteworthy because we identified HR specialist as a mission-critical occupation among the 24 Chief Financial Officer Act agencies in our 2001 report. HR specialist was also listed as a mission-critical occupation in OPM's 2003 human capital plan. Mr.
Chairman, as you know, longstanding concerns exist regarding DOD's personnel security clearance program. In fact, we declared DOD's program a high-risk area in January 2005. We testified last month before this subcommittee on the factors that slow the personnel clearance process. OPM continues to experience problems with its investigative workforce, a problem we first identified in February 2004 when we found that OPM and DOD together needed approximately 3,800 additional full-time-equivalent investigators to reach their goal of 8,000. Although OPM reports that it has reached its goal, it still faces performance problems due to the inexperience of its domestic investigative workforce. While OPM reports that it is making progress in hiring and training new investigators, the agency notes it will take a couple of years for the investigative workforce to reach desired performance levels. Training. OPM employees cited strengths as well as concerns with employee development and training, along with not feeling empowered to accomplish their tasks. As we have reported, agencies must develop talent through education, training, and opportunities for growth, such as delegating authorities to the lowest appropriate level. In the 2004 FHCS, 62 percent of OPM employees agreed that "supervisors/team leaders in work unit support employee development," which is close to the agreement level of employees from the rest of government at 65 percent. OPM employees were not as close to the employees in the rest of government in agreeing that "I receive the training I need to perform my job." Fifty-three percent of OPM employees agreed with this statement as compared with 60 percent of employees from the rest of government. In the follow-up employee focus groups, some participants selected this item as one of the most important issues for OPM to address. Some focus group participants said OPM's culture does not support training and employees do not have time to attend training classes.
Further, an OPM executive told us that it can be a struggle to convince managers that people should attend training. Some focus group participants also said that managers are not given sufficient and timely training budgets. OPM officials believe that limited funding for training is an issue at OPM, and added that OPM is also working to provide managers with more timely training budgets. In 2003, we reported that OPM was using rotational assignments, special projects, and details to broaden the skills of employees. OPM officials also told us the agency is taking steps to address training concerns by offering more online training courses. In 2004, 57 percent of employees agreed with the statement that they have electronic access to learning and training programs readily available at their desk. Although still below the 71 percent agreement level for the rest of government, this was an 8 percentage point increase from the 49 percent of employees who agreed with this statement on the 2002 FHCS. OPM can build upon its current training initiatives, such as online courses and rotational assignments, to leverage the available training resources. Critical resources. OPM employees have indicated concerns regarding the availability of critical resources. Although responses from OPM employees overall are similar to employees from the rest of government, we noted one group of OPM employees whose responses are not as close to their counterparts in the rest of government. Among all OPM employees, 51 percent agreed with the statement that they have “sufficient resources (for example, people, materials, budget) to get my job done” as did 49 percent of employees from the rest of government. For employees performing supervisory functions, however, the agreement rate was 35 percent at OPM and 42 percent for the rest of government. 
Participants in the follow-up focus groups selected this item as one of the most important issues OPM needs to address to make the agency a better place to work. Focus group participants said the lack of administrative staff and essential equipment causes specialized employees to waste time performing administrative functions. This suggests that OPM needs to take additional steps to ensure that it has aligned its available resources with its mission needs. OPM’s workforce and succession planning efforts may be sufficient for maintaining the organization’s current capacity, but OPM may need more collaborative workforce skills to lead and implement human capital reform. We have reported that strategic workforce planning addresses two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals, and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. Almost half (about 46 percent) of OPM’s workforce will be eligible to retire as of September 30, 2010, as compared with 33 percent governmentwide, according to information in the Central Personnel Data File (CPDF). Further, about two-thirds (66 percent) of the OPM SES employees will be eligible to retire at the same time—about the same as the governmentwide eligibility of 68 percent. We have reported that without careful planning, SES separations pose the threat of an eventual loss in institutional knowledge, expertise, and leadership continuity. In light of the impending retirements among its SES workforce, OPM has engaged in succession planning to ensure that it has the leadership talent in place to effectively manage OPM’s transformation, as well as ensure that the workforce skill mix is appropriate to meet its future challenges and transition to more of a strategic consultant role. 
This effort is important because leading organizations engage in broad, integrated succession planning efforts that focus on strengthening both current and future organizational capacity. OPM officials told us that the agency has identified 142 key leadership positions within the SES and GS-15 grade levels that are targeted for succession planning in the near future. Currently, OPM’s succession planning efforts are focused only on SES and GS-15 positions. I understand that OPM is now planning to expand the scope of its succession management program to include all supervisory, managerial, and executive positions throughout the agency—approximately 240 additional positions. I would encourage OPM to undertake this broader succession planning effort, given the importance of maintaining, and in many cases augmenting, critical skills throughout the organization, as well as of considering the future skills it may need to achieve its own transformation to lead the executive branch’s overall human capital reform effort. As I noted earlier, in 2003, we reported that OPM’s overarching challenge today is to lead agencies in shaping their human capital management systems while also undergoing its own transformation. Given its governmentwide leadership responsibilities, it is particularly important that OPM seek to “lead by example” with its own human capital practices. Leading organizations go beyond simply backfilling vacancies, and instead focus on strengthening both current and future organizational capacity. Thus, it is critical that OPM assess its mission-critical workforce skills relative to the human capital reform competencies and needs of the future. OPM officials said they will be issuing the agency’s updated strategic human capital plan later this summer to include such items as its human capital focus, workforce plan, leadership and knowledge management, workforce analysis, and performance goals, among other things.
Director Springer has noted that she envisions the OPM of the future as having a greater emphasis on collaboration and consulting capabilities. Given this emphasis, I believe that OPM’s forthcoming strategic human capital plan should include thoughtful strategies for how the agency plans to recruit, train, develop, incentivize, and reward employees with this important skill set. We have reported that during a transformation, a communication strategy is especially crucial in the public sector, where policy making and program management demand transparency and a full range of stakeholders and interested parties are concerned not only with what results are to be achieved, but also with which processes are to be used to achieve those results. Our work on high-performing organizations and successful transformations has shown that communication with customers and stakeholders should be a top priority and is central to forming the partnerships needed to develop and implement an organization’s transformation strategies. Specifically, an appropriate customer communication strategy would include consistency of message and encourage two-way communication. A majority of CHCOs and human resource (HR) directors told us that OPM could improve the clarity, consistency, and timeliness of its guidance to agencies. Several agency officials commented that OPM conveyed a “we’ll know it, when we see it” method of communicating expectations. This method of communicating expectations, and the lack of clear and timely communications and guidance, was clearly illustrated as agencies conveyed their experiences with the SES performance management system certification process. In November 2003, Congress authorized a new performance-based pay system for members of the SES. Under the new system, SES members no longer receive automatic annual across-the-board or locality pay adjustments.
Agencies are to base pay adjustments for SES members on individual performance and contributions to the agency’s performance by considering such things as the unique skills, qualifications, or competencies of the individual and their significance to the agency’s mission and performance, as well as the individual’s current responsibilities. Congress also authorized agencies to raise the maximum rate of pay for senior executives if their SES performance appraisal system is certified by OPM and OMB as making meaningful distinctions in relative performance. We asked agency CHCOs and HR directors to provide us with their experiences with OPM’s administration of the SES pay-for-performance process to identify parallel successes and challenges that OPM could face in a certification role for the implementation of human capital reforms. We heard a number of concerns from agencies regarding OPM’s ability to communicate expectations, guidance, and deadlines to agencies in a clear and consistent manner. For example, one official said that while OPM tries to point agencies in the right direction, it will not give agencies discrete requirements, which leads to uncertainty about what agencies must and should demonstrate to OPM. Some CHCOs and HR directors also told us that, in some cases, OPM changed expectations and requirements midstream with little notice or explanation. The late issuance of certification submission guidance was especially problematic for agencies, which appeared to have responded to this circumstance in two different ways. Because OPM did not issue guidance for calendar year 2006 submissions until January 5, 2006, some agencies were unable to deliver their submissions to OPM before the beginning of the calendar year.
Further, OPM clarified this guidance in a January 30, 2006, memorandum to agencies, telling agencies that SES performance appraisal systems would not be certified for calendar year 2006 if the performance plans did not hold executives accountable for achieving measurable business outcomes. As a result, agencies had to revise their submissions, where necessary, to meet OPM’s additional requirements. Some agencies indicated that OPM’s late issuance of guidance also creates an uneven playing field among agencies, as those that choose to wait until OPM issues guidance before applying for certification are unable to give their SES members higher pay, while counterparts that did not wait for OPM’s guidance could get certified sooner. Some human resource directors we spoke with expressed concern that OPM itself is not certain about its expectations of agencies’ submissions and said they would like more clarity from OPM on the certification process. For example, one agency director of executive resources said agencies ended up relying on each other rather than OPM during the 2004 SES certification process, because OPM provided agencies with mixed messages on what would be required for SES certification. One human resource director requested that, at the very least, agencies be given the certification process guidelines before the end of the calendar year so they can plan adequately. OPM officials we spoke with about this agreed that they need to be able to provide clear and consistent guidance to agencies and said they are working to improve this. Further, they said their evaluation of agencies’ submissions is evolving as their understanding of the SES certification criteria increases. In the past, we have reported concerns with OPM’s communications pertaining to its leadership in implementing governmentwide human capital initiatives and have recommended ways in which OPM could improve its guidance to federal agencies.
For example, in 2003 we reported that an initial lack of clarity in telework guidance for federal agencies from OPM led to misleading data being reported on agencies’ telework programs. As one of the lead agencies, along with the General Services Administration (GSA), for the federal government’s telework initiative, OPM issued telework guidance in 2001 stating that eligible employees who wanted to participate in telework must be allowed that opportunity, but it did not define what this statement meant. As a result, we found that agencies interpreted this statement differently and subsequently reported incomparable data to OPM. After discussing this issue with OPM officials, OPM reacted promptly by issuing new telework guidelines within weeks that addressed our initial concerns. We concluded that the steps taken by OPM in response to our findings showed a ready willingness to address issues that were hindering implementation of this important human capital initiative. We also recommended that OPM and GSA use their lead roles in the federal telework initiative to identify where more information and additional guidance, guidelines, and technical support could assist agencies in their implementation of telework. In May 2006, we reported that communications problems between OPM and DOD may be limiting governmentwide efforts to improve the personnel security clearance process—an area of high-risk concern that I noted earlier. For example, DOD officials asserted—and OPM disagreed—that OPM had not officially shared its investigator’s handbook with DOD until recently. DOD adjudicators had raised concerns that without knowing what the investigator’s handbook required for an investigation, they could not fully understand how investigations were conducted and could not effectively use the investigative reports that form the basis for their adjudicative decisions.
OPM indicated that it is revising the investigator’s handbook and is obtaining comments from DOD and other customers. More recently, our review of oversight of Equal Employment Opportunity (EEO)-related requirements and guidance found little evidence of OPM coordination with the Equal Employment Opportunity Commission (EEOC), owing to an insufficient understanding of their mutual roles, authority, and responsibilities. This resulted in lost opportunities to realize consistency, efficiency, and public value in federal EEO and workplace diversity human capital management practices. Further, a majority of human capital and EEO officials responding to a survey we conducted for that review reported that OPM’s feedback on their agencies’ programs and the guidance they received from OPM were not useful. Helping to achieve EEO and workplace diversity is another area where opportunities exist for OPM to increase its coordination and collaboration with EEOC. Over 80 percent of the respondents to our survey of federal human capital and EEO officials said that more coordination between OPM and EEOC would benefit their agency, adding that the lack of such coordination resulted in added requirements on them and detracted from the efficiency of their own work. Moreover, in 2005, OMB recommended to OPM that it develop a regular, formal working relationship with EEOC with respect to those programs where it shares oversight responsibility with EEOC in order to improve overall government efficiency. As changes in governmentwide human capital initiatives begin to address the changing needs of the 21st century federal workforce, it will be especially critical that OPM develop clear and timely guidance for agencies that can be consistently and easily implemented. CHCOs and human resource directors informed us that, while OPM’s HCO structure is good in theory, it is often a barrier to obtaining timely technical guidance.
Within the HCLMSA division, OPM assigns one HCO as the main point of contact to each agency of the President’s Management Council and one to each cluster of small agencies. HCOs act as liaisons and consultants, communicating with an agency’s human capital leadership. CHCOs and human resource directors commented that their HCO has become an advocate for their agencies and has been helpful for troubleshooting and resolving issues that did not require detailed technical assistance. However, problems arose for many agencies when technical questions and issues had to be communicated via their HCO to the policy experts at OPM. For example, one human resource officer told us they asked their HCO if they could talk directly to OPM experts on Voluntary Separation Incentive Pay and the Voluntary Early Retirement Authority, but the HCO insisted on relaying the information to the agency itself. The agency official said their HCO was relatively new, so numerous policy nuances were lost during this process. One CHCO stated that, while the HCOs at OPM have provided one-stop shopping for agencies, having the HCO as the only point of contact can be restrictive. Several human resource directors conveyed instances where technical nuances of a particular issue, such as the Voluntary Early Retirement Authority, were lost in the translation between the HCOs and policy experts at OPM, as the HCO often did not have federal HR experience or expertise. As one official described it, while the HCO is helpful, time and context are lost in having to go through the HCO to obtain technical assistance. Human resource directors expressed a desire to communicate directly with OPM’s policy experts for technical guidance, and some use their personal contacts at OPM for technical guidance and assistance instead of going through their HCO.
Human resource directors also said that they sometimes received mixed messages on the SES certification process from OPM, and it appeared that answers would change depending on with whom an official was working. From their perspective, agencies thought that OPM did not communicate effectively among its internal divisions and that OPM could greatly improve its customer service by clarifying its internal structure and making it more customer-oriented. Human resource directors commented about the lack of a formal mechanism, such as a survey instrument, to provide feedback to OPM on its guidance and assistance to agencies. We asked an executive within the HCLMSA division about this and were told that while OPM does not have a formal feedback mechanism, it talks to agencies all the time, so OPM does not feel that a formal mechanism is needed. Employee responses to FHCS questions relating to OPM’s customer focus show employees are also concerned about the service OPM provides to agencies. OPM’s results for the two FHCS questions relating to customer focus show a decline from 2002 to 2004 in its employees’ satisfaction with OPM’s focus on customer needs. In 2002, 66 percent of OPM employees agreed that “products and services in their work unit are improved based on customer/public input.” However, in 2004, 53 percent of OPM employees agreed with the same statement, a 13 percentage point decline. A similar decline occurred in response to a FHCS question concerning performance rewards. In 2002, 51 percent of OPM employees agreed that “employees are rewarded for providing high quality products and services to customers,” whereas in 2004, 35 percent of OPM employees agreed with the same statement, a decline of 16 percentage points. While the employee focus group discussions did not directly address customer focus, some participants raised concerns during their discussions that could affect OPM’s client focus.
Focus group participants from HCLMSA said OPM provides poor service to external customers due to unnecessary delays and a lack of communication. They said the HCO structure makes it difficult to connect customers with OPM employees who can provide them with accurate information and advice. The HCO structure was introduced in 2003; therefore, it could have contributed to the decline in positive responses to the customer focus questions in the 2004 FHCS. In an OPM briefing to GAO, officials described OPM’s structure in support of strategic human capital management, and part of that structure includes “targeting capability to implement strategic management of human capital on an agency-by-agency basis” through its HCLMSA division. According to OPM documents, each agency center in HCLMSA has staff to provide human resources technical assistance to agencies. OPM has a number of goals and activities in its Strategic and Operational Plan intended to improve its customer service and focus on customer needs. For example, OPM plans to develop performance standards for OPM common services by July 2006 and implement them by October 2006. As OPM works to address its customer issues, it should consider other ways to respond more quickly to inquiries from agencies for specific technical expertise. In addition, OPM should develop a customer feedback survey to identify issues related to timeliness, customer needs, and satisfaction, and take action accordingly. Our prior work has found that high-performing organizations strengthen accountability for achieving crosscutting goals by placing greater emphasis on collaboration, interaction, and teamwork, both within and across organizational boundaries, to achieve results that often transcend specific organizational boundaries.
In addition, we have found that high-performing organizations strategically use partnerships and that federal agencies must effectively manage and influence relationships with organizations outside of their direct control. An effective strategy for partnerships includes establishing knowledge-sharing networks to share information and best practices. To collaborate and share information, CHCOs said that OPM could make better use of the CHCO Council. Human resource directors said that OPM could facilitate more communities of practice at the implementation level among them. We have reported often on the need to collaborate and share information as a way to improve agency human capital approaches, processes, and systems. Specifically, we have made several recommendations to OPM to work more closely with the CHCO Council to (1) share information on the effective use of retirement flexibilities, (2) act as a clearinghouse of information for the innovative use of alternative service delivery for human capital services, and (3) more fully serve as a clearinghouse in sharing and distributing information about when, where, and how the broad range of human capital flexibilities are being used to help agencies meet their human capital management needs. Further, we have recommended that OPM, in conjunction with the CHCO Council, help facilitate the coordination and sharing of leading practices related to efficient administration of the student loan repayment program by conducting additional forums, sponsoring training sessions, or using other methods. For example, our work on the federal hiring process identified areas where OPM could target its efforts. OPM has since taken a number of actions to help agencies improve their hiring processes. With respect to improving agency hiring processes and use of human capital flexibilities, we reported that the CHCO Council should be a key vehicle for this needed collaboration.
For example, OPM, working through the CHCO Council, can serve as a facilitator in the collection and exchange of information about agencies’ effective practices and successful approaches to improved hiring. To address the federal government’s crosscutting strategic human capital challenges, we have testified that an effective and strategic CHCO Council is vital. We have also reported that using interagency councils, such as the Chief Financial Officers’ and Chief Information Officers’ Councils, has emerged as an important leadership strategy in both developing policies that are sensitive to implementation concerns and gaining consensus and consistent follow-through within the executive branch. Agency officials overwhelmingly reinforced a need for OPM to do more to collaborate and facilitate information sharing with the CHCO Council and HR directors. A former department-level CHCO described the CHCO Council as “a lost opportunity with little opportunity for dialogue.” Another CHCO stated that the Council has rarely been used to debate new human capital policies and has been excluded from major policy debates. Although some CHCOs and HR directors pointed to OPM’s successful collaborative efforts through the CHCO Council, such as its assistance to agencies in the aftermath of Hurricane Katrina, they told us that OPM misses opportunities to more effectively partner with agencies. While some human resource directors believed the CHCO Council did provide a means of sharing information, which is especially useful for the CHCOs who lack human resources backgrounds, several officials described ways in which OPM could more effectively use the Council. A majority of human resource directors we met with told us they would like to see OPM facilitate the sharing of information and best practices among HR professionals, as well as CHCOs.
Some officials said that OPM frequently communicates with agencies via fax and e-mail, but does not bring agencies together as often to share information. Some CHCOs said they would like to see the CHCO Council interact more with other governmentwide interagency councils. Also, most HR directors, as well as several CHCOs, responded positively to more involvement of agency HR directors on the CHCO Council. Director Springer said that membership on the CHCO Council has been expanded to include a deputy CHCO position. The inclusion of deputies is an important step toward building a collegial environment for sharing best practices. Several agency officials used the SES performance management system certification process to illustrate what they considered a missed opportunity for OPM to facilitate agency sharing of information and best practices, particularly during the certification application submission process. However, an OPM official told us that it does not provide agencies with examples of “best practice” certification submissions because OPM does not want to convey to agencies that there is only one “right” way to become certified. While OPM is certainly correct that there is no one right way, several agencies nevertheless indicated having difficulty understanding OPM’s expectations for agency certification submissions. In response, one CHCO took the initiative to use one of the CHCO Academy meetings to foster information sharing among agencies about the application process. Collaboration and information sharing will be critical as human capital reforms begin to take hold across government. If OPM is to successfully lead reform, it will need to strategically use the partnerships available to it, such as the CHCO Council and others, as well as develop a culture of collaboration, information sharing, and working with customers to understand what they will need from the agency.
It is clear from the OPM Strategic and Operational Plan, 2006–2010, that customer satisfaction and timeliness in the provision of OPM common services are important and compelling customer needs. OPM management has indicated that operational goals and activities are organized as steps in its internal activities or processes to better support external products and services for its customers and stakeholders. For instance, OPM intends to develop and implement a new common services methodology, to employ performance standards for measuring the delivery of common services to customers, and to operate under a fully implemented set of internal delegated authorities and protocols by the end of fiscal year 2006. OPM management has pointed out that these activities are also presented in a timeline tracking sheet that is used to make “real time” changes through continual updates of accomplishments. It is OPM’s intent to then inform customers of the agency’s success in meeting the stated customer goals found in the plan within two weeks of each success, thereby establishing a means of transparency and accountability. OPM officials told us that, to date, the agency is meeting this intent. Successful organizations establish a communication strategy that allows for the creation of common expectations and reports on related progress. Activities intended to provide for better means of communication and collaboration are also clearly found in the OPM plan. As noted earlier, OPM is taking steps to improve its internal communication by recently developing and posting a functional organization directory on its internal website. OPM also plans to redesign its public website to improve communication and customer focus by the close of fiscal year 2006. The OPM plan further states, as a strategic objective, that OPM “will have constructive and productive relationships with external stakeholders,” such as Congress, veterans, unions, media, and employee advocacy groups.
To better meet external client needs, OPM has an ongoing key related effort to modernize its retirement systems program. Through this program, OPM expects to reengineer the various processes that provide services to retirement program participants, who include about 5 million federal employees and annuitants. One of OPM’s objectives is to standardize applications for coverage and eligibility determinations and benefits calculations, making them specific to customer needs and accessible to federal agencies and program participants. OPM’s Strategic and Operational Plan contains operational goals related to this modernization effort. We believe that such a modernization effort is clearly needed. At the same time, as we have noted in our prior work, OPM has lacked the processes needed for developing and managing requirements and related risks, while providing sound information to investment decision makers, in order to effectively complete modernization of this program. We made recommendations to OPM regarding the establishment of management processes needed for effective oversight of the program. OPM agreed that the processes we identified were essential and noted it is taking steps to address our recommendations to strengthen these processes. Leading organizations have recognized that a critical success factor in fostering a results-oriented culture is an effective performance management system that creates a “line of sight” showing how unit and individual performance can contribute to overall organizational goals, helping employees understand the connection between their daily activities and the organization’s success. Effective performance management systems can drive organizational transformation by encouraging individuals to focus on their roles and responsibilities to help achieve organizational outcomes. Our analysis shows that OPM’s executive performance management system aligns the performance expectations of OPM’s top leaders with the organization’s goals.
OPM sets forth the organization’s goals in its 2006–2010 Strategic and Operational Plan and directly connects these goals to the performance expectations of top leaders using performance contracts. Clearly defined organizational goals are the first step toward developing an effective performance management system. OPM uses performance contracts to link organizational goals to performance expectations for senior leaders and holds them accountable for achieving results. As we have reported, high-performing organizations understand that they need senior leaders who are held accountable for results, drive continuous improvement, and stimulate and support efforts to integrate human capital approaches with organizational goals and related transformation issues. These organizations can show how the products and services they deliver contribute to results by aligning performance expectations of top leadership with organizational goals and then cascading those expectations down to lower levels. We assessed how well OPM is creating linkages between executive performance and organizational success by reviewing the performance contracts (Fiscal Year 2006 Executive Performance Agreements) of the five associate directors of OPM’s major divisions. We evaluated these performance contracts by applying selected key practices we have previously identified for effective performance management. We chose these practices because they are especially relevant to OPM’s current strategic management efforts. These practices, collectively with others we have identified in prior work, create a “line of sight” showing how unit and individual performance can contribute to overall organizational goals. We found that OPM has implemented several key practices to develop an effective performance management system for its senior executives: Align individual performance expectations with organizational goals.
An explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations. OPM executive performance contracts explicitly link individual performance commitments with organizational goals. Executives are evaluated on their success toward achieving goals that are drawn directly from the OPM Strategic and Operational Plan. Measures of these achievements account for 75 percent of executives’ annual performance ratings. For example, one associate director’s performance contract includes a commitment to achieve OPM’s operational goal of having “80 percent of initial clearance investigations completed within 90 days.” Connect performance expectations to crosscutting goals. High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on collaboration to achieve results. OPM’s executive performance contracts achieve this objective by making executives accountable for OPM-wide goals. In addition to specific divisional goals, each executive performance contract includes a common set of “corporate commitments” that transcend specific organizational boundaries and that executives must work together to achieve. These commitments are directly linked to the OPM Strategic and Operational Plan. For example, each executive contract includes a commitment to “Implement an employee recognition program at OPM by July 1, 2006.” Provide and routinely use performance information to track organizational priorities. High-performing organizations provide objective performance information to executives to show progress in achieving organizational results and other priorities. OPM is taking a tactical approach to implementing its Strategic and Operational Plan.
Activities supporting the strategic objectives are listed on an “Operational Timeline” or tracking sheet that OPM uses, and “real time” changes are made through continual updates of accomplishments. According to Director Springer, each OPM division has a tracking sheet for the specific goals for which it is accountable. She told us that OPM leadership meets monthly to review the timeline and to determine if goals have been met or what progress OPM is making toward achieving its objectives. Require follow-up actions to address organizational priorities. High-performing organizations require individuals to take follow-up actions based on the performance information available to them. OPM’s performance contracts include commitments for executives to respond to results from the FHCS. Each associate director is committed to “Implement action plan to ensure OPM is rated in the top 50% of agencies surveyed in the 2006 FHCS and the top five agencies in the 2008 FHCS.” To achieve this goal, each associate director developed an FHCS action plan for their division to address employee concerns identified in the 2004 FHCS and the follow-up focus group discussions. Use competencies to provide a fuller assessment of performance. High-performing organizations use competencies, which define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. Each OPM executive performance contract includes core competency requirements for effective executive leadership, which account for 25 percent of annual performance ratings. For example, executives are responsible for building “trust and cooperative working relationships both within and outside the organization.” OPM’s executive performance contracts incorporate these key practices of performance management, and the agency must build on this progress and ensure that its SES performance management system is used to drive organizational performance.
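The two rating components described above combine with a simple 75/25 weighting. The sketch below is only an illustration of that arithmetic: the weights come from the contracts as described, but the 0–100 scale, the sample scores, and the function name are hypothetical.

```python
# Sketch of the executive rating weights described above: results-based
# commitments count for 75 percent of the annual rating, and core
# leadership competencies account for the remaining 25 percent.
# The 0-100 scale and the sample scores below are hypothetical.

RESULTS_WEIGHT = 0.75
COMPETENCY_WEIGHT = 0.25

def overall_rating(results_score: float, competency_score: float) -> float:
    """Combine the two rating components using the 75/25 weighting."""
    return RESULTS_WEIGHT * results_score + COMPETENCY_WEIGHT * competency_score

# An executive strong on results but somewhat weaker on competencies.
print(overall_rating(results_score=90.0, competency_score=80.0))  # 87.5
```

Because the results component dominates the weighting, a change in results performance moves the overall rating three times as much as an equal change in the competency score.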
OPM can build on its strong system of executive accountability to address employee concerns with its overall performance culture, as well as support its internal transformation. OPM has plans to implement new performance elements and standards for all OPM employees to support the new agency Strategic and Operational Plan. As we have reported, high-performing organizations use their performance management systems to strengthen accountability for results. In the 2004 FHCS, the percentage of OPM employees who agreed that “I am held accountable for achieving results” was 81 percent, essentially the same as the 80 percent of employees in the rest of the government agreeing with this statement. OPM employees’ positive view of “being held accountable for achieving results” can be used to help address employee concerns regarding its performance culture. For example, a significant decrease occurred between OPM’s 2002 and 2004 FHCS results on a question that measures employee perceptions of management’s focus on organizational goals. The percentage of OPM employees who agreed that “managers review and evaluate the organization's progress toward meeting its goals and objectives” declined by 17 percentage points from 2002 (69 percent) to 2004 (52 percent). This question was discussed in only a few of the focus groups, so it is unclear why fewer employees agreed with this statement in 2004. Although limited, these discussions suggest that some employees do not feel their performance appraisal is a fair reflection of their performance due to inadequate goals and performance standards and a lack of alignment between employee goals and OPM’s mission. OPM plans to address these employee performance concerns to ensure there is a clear linkage between the OPM Strategic and Operational Plan, Division/Office Plans, and individual employee-level work plans.
By July 2006, OPM plans to implement new performance elements and standards for all employees that support the OPM Strategic and Operational Plan. Already underway is an OPM beta site (the HCLMSA division) to test its performance management system to link pay to performance. OPM officials informed us that as of June 1, 2006, all HCLMSA employees are now working under new performance plans, consistent with the OPM beta site requirements. To maximize the effectiveness of a performance management system, high-performing organizations recognize that they must conduct frequent training for staff members at all levels of the organization. OPM plans to develop and implement a core curriculum for supervisory training to ensure all managers and supervisors are trained in performance management. Also, OPM is developing a proposal to enhance the relationships between the human resources function and managers to assist them in dealing with their human resource issues. If effectively implemented, these actions should address many of the concerns raised by focus group participants. OPM faces many challenges as it seeks to achieve its organizational transformation and become a high-performing organization. To meet its current and future challenge to lead human capital reform across government, Director Springer has shown leadership commitment to its transformation by initiating a number of action plans to address employee concerns. While the steps taken by OPM demonstrate progress in achieving its transformation, it must continue on this path by closely monitoring and communicating with its employees and customers, expanding its workforce and succession planning efforts, and continuing to improve its performance culture and accountability for results. As I have testified on many occasions, in recent years GAO has learned a great deal about the challenges and opportunities that characterize organizational transformation.
Drawing on both our own experiences and our reviews of others’ efforts, I look forward to working closely with Director Springer and to assisting Congress as it moves toward implementing governmentwide human capital reform. Chairman Voinovich, Senator Akaka, and Members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. For further information regarding this statement, please contact Brenda S. Farrell, Acting Director, Strategic Issues, at (202) 512-6806 or [email protected]. Individuals making key contributions to this statement include Julie Atkins, Thomas Beall, Carole Cimitile, William Colvin, S. Mike Davis, Charlene Johnson, Trina Lewis, and Katherine H. Walker. We used the Federal Human Capital Survey (FHCS) and summaries of the Office of Personnel Management (OPM) focus groups to assess employee views of OPM’s organizational capacity. OPM conducted the FHCS during fall 2004. The survey sample included 276,000 employees and was designed to be representative of the federal workforce. OPM had 1,539 respondents to the survey. The survey included 88 items that measured federal employee perceptions about how effectively agencies are managing their workforces. For more information about the 2004 FHCS survey see http://www.fhcs2004.opm.gov/. We reviewed OPM’s analysis of its 2004 FHCS results and conducted our own analyses of survey results using 2002 and 2004 FHCS datasets provided to us by OPM. On the basis of our examination of the data and discussions with OPM officials concerning survey design, administration, and processing, we determined that the data were sufficiently reliable for the purpose of our review. In fall 2005, OPM contracted with Human Technology, Inc. to conduct focus groups to understand factors contributing to employees’ responses on selected items from the 2004 FHCS and to obtain employees’ ideas for addressing top priority improvement areas.
Employees were randomly selected to participate in 33 focus groups with participants from all major divisions, headquarters and the field, employees and supervisors, and major OPM installations. The participants in each focus group decided which topics to discuss by voting for the FHCS questions that “are most important for OPM to address in order to make the agency a better place to work.” Questions were divided into three categories: leadership, performance culture, and other dimensions. Participants voted for three questions in each category and the questions that received the most votes were discussed by the group. We analyzed summaries of these focus groups and used the participant comments to illustrate employee perspectives. We also analyzed recently issued action plans developed by OPM to address issues identified in the focus groups. These action plans were approved by OPM’s Director in May 2006 and they list specific actions OPM and each internal division will take along with suggested due dates for completion. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
There is general recognition of the need to continue developing a governmentwide framework for human capital reform to enhance performance, ensure accountability, and position the nation for the future. Potential governmentwide human capital reform and likely requirements that the Office of Personnel Management (OPM) assist, guide, and ultimately certify agencies' readiness to implement reforms raise important questions about OPM's capacity to successfully fulfill its central role. This testimony addresses management challenges that could affect OPM's ability to lead governmentwide human capital reform efforts. To assess these challenges, GAO analyzed OPM's 2002 and 2004 Federal Human Capital Survey (FHCS) results, data from its 2005 follow-up focus group discussions, OPM's May 2006 action plans to address employee concerns, and OPM's associate directors' fiscal year 2006 executive performance contracts. GAO also conducted interviews with OPM senior officials and Chief Human Capital Officers (CHCO) and human resource directors from CHCO Council agencies. In commenting on a draft of this statement, the OPM Director said that OPM has addressed many of the challenges highlighted from the 2004 FHCS and achieved many meaningful and important results. GAO agrees and believes OPM should continue to build upon its progress to date. OPM has made commendable efforts toward transforming itself into a more effective leader of governmentwide human capital reform. It can build upon that progress by addressing challenges that remain in the following areas: Leadership. OPM Federal Human Capital Survey responses and the fall 2005 follow-up focus group discussions suggest that information from OPM leadership does not cascade effectively throughout the organization and that many employees do not feel senior leaders generate a high level of motivation and commitment in the workforce.
Agreement with statements about leaders' ability was lowest in one of OPM's key divisions--a unit vital to successful human capital reform. OPM is working to address employee concerns and improve perceptions of senior leaders. Talent and resources. To align talent and resources to support its reform role, OPM has made progress in assessing current workforce needs and developing leadership succession plans. However, OPM's workforce planning has not sufficiently identified future skills and competencies that may be necessary to fulfill its role in human capital reform. Customer focus, communication, and collaboration. OPM can improve its customer service to agencies and create more opportunities for dialogue. According to key officials in executive agencies, OPM guidance to agencies is not always clear and timely, OPM's human capital officer structure is often a barrier to efficient customer response, and greater opportunities exist to collaborate with agency leaders. OPM recognizes these shortcomings and has identified improvement actions to address them. However, more can be done, such as strategically using available partnerships like the CHCO Council. Performance culture and accountability. OPM has made progress in creating a "line of sight" or alignment and accountability across Senior Executive Service (SES) expectations and organizational goals. It needs to build on this progress and effectively implement new performance standards for all employees to support the recently issued agency strategic and operational plan and ensure all employees receive the necessary training. To meet OPM's current and future challenge to lead governmentwide human capital reform, Director Springer has shown leadership commitment to OPM's transformation by initiating a number of action plans to address employee concerns.
While the steps taken by OPM demonstrate progress in achieving its transformation, it must continue on this path by closely monitoring and communicating with its employees and customers, expanding its workforce and succession planning efforts, and continuing to create a "line of sight" throughout the organization.
DOD’s housing management manual states that military-owned, -leased, or -sponsored housing may be budgeted to meet long-range requirements in areas where the local community cannot support the housing needs of military members. Military housing may also be required if available housing in the community has been determined to be unacceptable or if personnel must reside on the installation for reasons of military necessity. Each service is responsible for determining family housing requirements. In general terms, the services should determine their on-base housing requirements based on the number of military families at an installation that are seeking housing, minus the affordable and acceptable supply of existing rental housing units available to the military in the private sector. The supply of private sector housing should be calculated through a detailed housing market analysis and should include a count of available houses in the private sector based on the housing allowances for each pay grade, considering family size. An installation has a housing deficit if a greater number of personnel are seeking housing than the private sector can support. Conversely, a surplus of on-base housing occurs if the private sector housing supply is greater than the number of families seeking housing. DOD has acknowledged the need for further reductions and the streamlining of its infrastructure. In the most recent Annual Defense Report, the Secretary of Defense stated that the Department continues to seek congressional approval for additional rounds of base realignments and closures. By eliminating excess infrastructure and consolidating its forces at fewer bases, the Department believes it will be able to spend its resources on forces and equipment critical to its modernization effort. As part of our ongoing Performance and Accountability Series, we reported in January of this year that infrastructure costs continue to consume large portions of DOD’s budget.
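The deficit/surplus calculation described above is a straightforward subtraction: families seeking housing minus the affordable, acceptable private sector supply. The short sketch below illustrates that arithmetic; the installation figures and the function name are hypothetical, not drawn from the report.

```python
# On-base requirement = families seeking housing minus the affordable,
# acceptable private-sector supply. A positive result is a deficit;
# a negative result is a surplus. The figures below are hypothetical.

def on_base_requirement(families_seeking: int, private_supply: int) -> int:
    """Compute the on-base family housing requirement for an installation."""
    return families_seeking - private_supply

req = on_base_requirement(families_seeking=2000, private_supply=1700)
if req > 0:
    print(f"deficit of {req} units")    # deficit of 300 units
else:
    print(f"surplus of {-req} units")
```

In practice, as the section notes, the private-supply term must come from a detailed market analysis keyed to housing allowances by pay grade and family size, which is where the services' methodologies have historically diverged.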
Our recent analysis of DOD’s Future Years Defense Program documents for fiscal years 2001-2005 showed that the proportion of resources devoted to direct infrastructure relative to mission has not changed, despite expectations that it would decrease. After years of effort, DOD has not yet implemented a DOD-wide process for determining requirements for family housing on its installations. As a result, the Department cannot know with assurance how many housing units it needs and where it needs them and may be investing in infrastructure it no longer needs. The Department has worked to develop the framework for a process to determine family housing needs that requires reliance on the private sector first to house its servicemembers. However, it has not adopted the process because of a lack of consensus across DOD on common standards such as the definition of affordable housing and acceptable commuting distances. Moreover, a recent study by the Center for Naval Analyses indicates that the services seem to be protecting their existing family housing infrastructures because of concerns about a potential loss of military community. Over the past several years, the Congress, GAO, and the DOD Inspector General have been critical of the inconsistent methodologies used by the services to determine the availability of housing for military families in private sector areas surrounding military installations. In September 1996, we found DOD had not maximized the use of private sector housing because, among other reasons, the housing requirements analyses often underestimated the ability of the private sector to meet housing needs. The Department’s Inspector General recommended in a 1997 report that DOD develop a Department-wide standard process and standard procedures to determine family housing requirements. 
Further, the Inspector General cautioned that the Department and the Congress did not have sufficient assurances that requests for funds for housing construction on military installations addressed the services’ actual needs in a consistent and valid manner (see fig. 1 for a chronology of selected reports concerning military family housing). Appendix I provides a summary of recent reports concerning the military family housing program. “The Department continues to work on the development of a single model for determining the government-owned housing needs using a set of standard DOD-wide factors along with flexible variables that accommodate service differences. This model will help DOD determine the number of government-owned housing units that need to be constructed or maintained as well as determine the size of the Department’s housing privatization projects.” DOD and the services have worked to develop the framework for a single, consistent process for determining housing requirements. The proposed framework would require the military services to conduct a market analysis surrounding each installation to determine the amount of adequate, affordable housing the private sector could provide. Once this was determined, available housing would be compared to military personnel needing housing and the difference would be the military housing requirement. According to Department housing officials, the proposed process would provide the services latitude in applying service-specific criteria and military judgment in developing housing requirements. For example, the requirement could be adjusted for the retention of housing for key and essential personnel, a percentage of personnel in each pay grade, and for the retention of historic housing. According to DOD housing officials, each of these factors would usually have a relatively small impact on the requirement. 
In our view, some flexibility in the process is warranted because of the differences in private sector housing around each installation, but DOD must carefully monitor the services’ use of this flexibility to ensure that they adhere to Department policy to use the private sector first for housing their service families. While DOD has worked to develop the framework for a consistent process, Department housing officials stated that several issues remain unresolved. Issues such as what constitutes affordable civilian housing and reasonable commuting distances have slowed the adoption of the process. For example, the Air Force recently reduced the acceptable commuting distance from the 60-minute standard used by the other services to a 30-minute standard. According to a recent Center for Naval Analyses report, the services will need to agree on each element of the new requirements procedure before it can be finalized. The report further stated that the Office of the Secretary of Defense must obtain agreement among the services or be forced to impose the standards. Department housing officials stated that once a new process is in place, it will take years to update the housing requirements DOD-wide, since the detailed market analyses must be performed base by base. This is of concern, because the Department risks investing valuable resources in housing that it does not need. In late 1999 and 2000, each of the military services submitted Military Family Housing Master Plans to Congress that document deficits in military housing. These plans indicate that, DOD-wide, the services want about 12 percent more military housing units than they have. In addition, the plans show that about two-thirds of the approximately 285,000 aging government-owned houses are in inadequate condition. The housing plans show that the services plan to address inadequate and deficit family housing through a combination of military construction and privatization initiatives.
About 3 percent of family housing units were deemed surplus. (See fig. 2 for a status of military family housing units for each service.) The DOD Inspector General and GAO have previously reported that the services use inaccurate housing market analyses when determining the need for military housing. According to a July 1996 Inspector General report, the requirements for seven military family housing projects at a Marine Corps base were unsupported because the number of needed family housing units was unknown. The report recommended that all of these construction projects be placed on hold and that the Marine Corps perform a new housing analysis to justify the family housing construction projects. Although management concurred with the recommendations, the Marine Corps proceeded with two of the projects. We reported in 1996 that according to Army and Air Force information, many military installations in the United States had not maximized the use of private sector housing to meet military family housing needs. For example, the Army’s housing requirements model estimated that 844 of Fort Eustis’ 1,330 family housing units were surplus. If the model had matched housing requirements against adequate private sector housing before matching them against government housing, the model would have estimated that 1,170 of these units were surplus. The Department still does not maximize the use of private sector housing. As part of its effort to develop a standard requirements-setting process, DOD asked a contractor to perform housing market analyses at selected installations. We reviewed the results of three of these market analyses. Two of the three installations were projected to have substantial surpluses once the private sector’s ability to provide housing was factored in. Based on these analyses, over half (1,599 of 3,039) of the military houses at these installations would be surplus. 
According to DOD housing officials, the third base—a remote, rural installation—had a modest shortage of military housing units. Surplus military housing is the nearly inevitable result if the Department starts by setting housing requirements based on the availability of private sector housing for its members. Surplus housing identified by the proposed process will be disposed of at the end of its useful life, according to DOD housing officials. During the 5-year transition period, the housing officials said the Department would avoid investments in surplus housing units, but admitted that this would be difficult to do without firm requirements. Demand for military housing—evidenced by long waiting lists and high occupancy rates—could be seen as evidence that military housing is needed and that DOD does not have surplus family housing. However, as we have previously reported, waiting lists can be misleading because many personnel on them do not accept military housing when offered because they have already found suitable civilian housing while waiting. One service’s policy is to use occupancy rates to adjust the requirements-setting process: for example, if an installation’s family housing is filled to capacity, all of it must be needed. This rationale is not consistent with DOD’s stated policy of relying on the private sector first. The services—through their referral offices—guide military families to find housing and thus control occupancy. Essentially, the referral offices offer military families a choice between free military housing or an allowance for private sector housing that generally does not cover the total cost of rent and utilities. However, the planned increases in the housing allowance will gradually remove the financial disincentive associated with civilian housing and should make living off base more attractive.
Although the change in the housing allowance program is likely to decrease the demand for military housing relative to civilian housing, there are indications that the services are reluctant to reduce on-base family housing. DOD has recognized the concerns among service leaders that housing military personnel off installations in civilian housing would weaken the sense of military community. However, as we said in our May 2001 report, personnel live in military housing primarily because it is free and they seek to avoid additional out-of-pocket costs associated with living in civilian housing. According to a recent Rand report, members in focus groups “scoffed” at the notion that living in military housing helped them to do a better job. And only about 2 percent of servicemembers selected “like having military neighbors” as the first or second most important factor in the decision to live in military housing. Rand concluded that most military members simply do not see a compelling reason—beyond the economic benefit—to live on base. After meeting with each of the services to discuss the methodology for determining housing requirements, the Center for Naval Analyses concluded that a primary goal of the services seemed to be to protect their current family housing inventories. The services were concerned about how any change in procedure would affect the number of on-base family housing units. The Center reported that the services want to retain their current military housing, regardless of the new requirements-setting process. Reasons for this include the prospect of large amounts of surplus housing and concerns about possible morale problems resulting from personnel being forced to move into private sector housing. The Center’s report concluded that increased service resistance to accepting a procedural change that may reduce the number of housing units has delayed the completion of formal DOD guidance.
The increase in housing allowances has several advantages but makes the need for a DOD-wide requirements-setting process more urgent. The Department could more readily implement its policy to rely first on the private sector to house service families because the additional out-of-pocket costs would be eliminated by the increased housing allowance. Thus, the demand for civilian housing is likely to increase, while the demand for military housing should decrease. While costs for the increased housing allowance appear substantial in the short term, evidence shows that it is cheaper for the government to provide an allowance for private sector housing than to provide a military house on base. Until the Department sets accurate housing requirements DOD-wide, however, it could face mounting costs to maintain its aging and, in some places, unnecessary housing infrastructure. The housing allowance increase should allow DOD to better satisfy the preferences of servicemembers. We have previously reported that, based on the results of DOD’s 1999 Active Duty Survey, military members prefer civilian housing if costs are equal. Of those currently receiving a housing allowance or living in military housing, about 72 percent said they would prefer civilian housing if costs were equal, while 28 percent said they would prefer military housing. In its 1999 report, Rand reported that only about 20 percent of military members prefer military housing, and that the predominant reason servicemembers live in military housing is for the economic benefit. Department officials also believe the housing allowance increase will ultimately change the composition of the population in military housing. Rand’s analysis indicates that demographic characteristics are the main factor in the demand for military housing.
Those who prefer military housing include lower income personnel (especially junior enlisted personnel), those with spouses who do not work outside the home, and those with a greater number of children. Military members with larger families tend to be entitled to a larger residence in military housing than they would be able to afford on the civilian market (housing allowances increase by pay grade). Regardless of whether DOD fully implements a private sector first policy, the increase in housing allowance will add substantial costs to the housing program in the near term. By 2005, the Department projects total costs to be $12.8 billion, about 34 percent more than the $9.6 billion for fiscal year 2000 (see fig. 3). The amount allocated to the housing allowance program will grow from $6 billion in fiscal year 2000 to over $8.8 billion in 2005, about a $2.8 billion increase. The amount allocated for military family housing is expected to grow from $3.5 billion in 2000 to about $4 billion in 2005. Considerable evidence suggests that providing a housing allowance is less expensive and more flexible than providing a military house. In 1993, the Congressional Budget Office estimated that DOD saved about $3,800 per family by paying a housing allowance versus providing military housing. In our 1996 report, we estimated that the military saved almost $5,000 per unit by paying a housing allowance. In its 1999 report, Rand said that all 12 installations it visited had paid more to provide military housing—from $3,000 to $10,000 per unit. Increasing the housing allowance will somewhat narrow the savings that will result from putting personnel in private sector housing instead of family housing on base. Admittedly, these estimates are very rough and are not based on life-cycle costs.
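The budget growth cited above can be checked with simple arithmetic. The dollar amounts are the report's; the script itself is only an illustrative check, and the rounded billions yield roughly 33 percent, close to the report's "about 34 percent" (the report's underlying figures are presumably more precise).

```python
# Total housing costs grow from $9.6 billion (fiscal year 2000) to a
# projected $12.8 billion (2005), roughly a one-third increase.
fy2000_total = 9.6    # billions of dollars
fy2005_total = 12.8

pct_increase = (fy2005_total - fy2000_total) / fy2000_total * 100
print(round(pct_increase, 1))   # 33.3

# The allowance portion grows from $6 billion to over $8.8 billion,
# about a $2.8 billion increase.
allowance_growth = 8.8 - 6.0
print(round(allowance_growth, 1))   # 2.8
```

The same subtraction applied to the family housing account ($4 billion minus $3.5 billion) shows that the allowance program absorbs the large majority of the projected cost growth.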
However, DOD officials told us that they do not compute life-cycle costs nor do they capture all overhead and other costs associated with military housing, since they are absorbed in many places in the DOD budget. For example, military housing has other significant costs associated with it, including the associated infrastructure like schools, childcare, recreational facilities, and other amenities on installations. Thus, DOD budget officials told us that current funding figures tend to understate the cost of military housing. While these cost estimates are imprecise, it seems unlikely that the government can provide housing cheaper than the private sector, which is driven by market forces. Moreover, DOD housing officials told us that maintaining family housing is not a core mission for the military services and that family housing has been under-funded for many years. This, in their view, is the reason why so much of the family housing stock is inadequate today. As the housing allowance increase is phased in—eliminating the financial disincentive to living in civilian housing—demand for military housing is likely to decrease. This decrease in demand for military housing reinforces the need to implement a consistent housing requirements-setting process quickly so that the Department of Defense and the Congress are assured that the housing construction and privatization projects they review are essential. Unless the Department can accurately determine the housing it needs on its installations, it may spend funds for housing it does not, and will not, need. We recommend that you expedite the implementation of a consistent DOD-wide process for establishing military housing requirements, ensuring that the Department does not spend money on housing it does not need. 
Specifically, we recommend you demonstrate the need for new construction, renovation, or privatization projects using a process that consistently and adequately considers the availability of civilian housing, before submitting requests for funds for the projects to the Congress. Under 31 U.S.C. 720, you are required to submit a written statement of the actions taken on our recommendations to the House Committee on Government Reform and to the Senate Committee on Governmental Affairs not later than 60 days from the date of this report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. We provided a draft of this report to the Office of the Deputy Under Secretary of Defense for Installations and Environment for comment. The Deputy Under Secretary generally concurred with our conclusions and recommendations. The Department and the military services have agreed that a single, consistent method for determining military housing requirements is needed. The Deputy Under Secretary noted that the Department has spent a great deal of time and effort developing a process that would implement DOD’s long-standing policy of relying on the civilian sector, but that significant issues still need resolution. He cited concerns that a change in the housing requirements process could result in divestiture of thousands of homes before the housing allowance increase is fully phased in by 2005, but noted that this is mitigated because the requirements-setting process under consideration projects private-sector housing availability out 5 years. He indicated that the Department recognizes some demand for on-base housing, but to include an on-base housing demand factor in the housing requirements process would inevitably require DOD to reverse or at least decrease its reliance on the private sector. 
Rather, the Department’s housing inventory must be validated through an auditable process that can project the extent to which the private-sector housing around military installations can support military families. We agree that considering demand for on-base military housing would, in effect, reverse DOD’s long-standing policy to rely on the private sector first and should therefore be avoided. The Deputy Under Secretary partially concurred with our recommendation that the Department demonstrate the need for new construction, renovation, or privatization projects using a process that consistently and adequately considers the availability of civilian housing, before submitting the requests for funds for the projects to Congress. While recognizing that funding the retention or construction of unneeded housing diverts resources from other DOD priorities, he noted that the current amount of inadequate housing argues for continuing military construction investment while the requirements-setting process is finalized. We agree that some military construction may be needed in locations where the private sector cannot support the housing need, but the Department should carefully review projects to ensure that the private sector cannot meet the housing need before requesting funds from Congress. In our view, these long-standing requirement-setting weaknesses need to be addressed now. Otherwise, DOD risks spending millions on infrastructure that it does not, or will not, need. To determine whether DOD has implemented a standard process for determining the need for military housing based on available private sector housing, we held discussions with, and reviewed documents from, DOD housing officials about the Department’s efforts to develop such a process. 
We reviewed numerous past reports, including but not limited to, those from GAO, the Department of Defense Inspector General, and the Center for Naval Analyses documenting problems with the current processes used to establish military housing requirements, and obstacles that must be overcome to implement a standard Department-wide process. To assess how the housing allowance increase will affect the need for housing on military installations over the long term, we held discussions with, and reviewed documents from, DOD officials of the Under Secretary of Defense for the Comptroller; the Deputy Under Secretary of Defense for Installations and Environment; and the Under Secretary of Defense for Personnel and Readiness. We relied on data from past GAO and Rand reports. We performed our work from January 2001 through June 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees. We will make copies available to others upon request. The report will also be available at http://www.gao.gov. Please contact me at (202) 512-5559 or William Beusse, Assistant Director, at (202) 512-3517 if you have any questions concerning this report. Major contributors to this report were Jack Edwards, John Pendleton, and Matthew Ullengren. Recently, several organizations have reported on the military family housing program. The Congress, GAO, and the Department of Defense (DOD) Inspector General have identified problems with the military services’ methodologies for developing housing requirements. Some have recommended that the Department develop and implement a more consistent requirements process. Table 1 provides a summary of the current problems and recommendations that were made to the DOD to improve its requirements.
This report reviews the Department of Defense's (DOD) family housing program. GAO discusses (1) whether DOD has implemented a standard process for determining the required military housing based on housing available in the private sector and (2) how an increase in the housing allowance is likely to affect the need for housing on military installations over the long term. Despite calls from Congress, GAO, and DOD's Inspector General, DOD has not introduced a standard process for determining military housing requirements. DOD and the services have worked to develop the framework for the process, but technical concerns, such as standards for affordable housing and commuting distance, have stalled its adoption. Increasing the housing allowance underscores the urgent need for a consistent process to determine military housing requirements because it is expected to increase demand for civilian housing and lessen the demand for military housing. From a policy standpoint, increasing the allowance better positions DOD to rely on the private sector first for housing because it removes the financial disincentive to living in civilian housing. From a management standpoint, considerable evidence suggests that it is less expensive to provide allowances for military personnel to live on the civilian market than to provide military housing. Although overall program costs are increasing significantly in the short term to cover increased allowances, DOD could save money in the longer term by encouraging more personnel to move into civilian housing.
The standard of living of the elderly depends on total retirement income, which includes Social Security, pensions, income from assets, and earnings from employment. In addition, benefits from public assistance programs, such as Supplemental Security Income (SSI), and health insurance programs, such as Medicare, may also be relevant in assessing the standard of living of the elderly. Pensions generally supplement Social Security, which has a progressive benefit structure that provides higher relative benefits to lower earners. As a result, although private pensions account for only about one-tenth of the aggregate income of the elderly, they are an important source of retirement income for many households, particularly those in the middle to higher ranges of the income distribution. Recent research suggests that about two-thirds of households nearing retirement have rights to some pension income, but these amounts can vary widely. The ability to earn and receive retirement income under a voluntary, private pension system is the result of decisions made by both the employer and the worker within a legal and regulatory framework that has developed over time. The Internal Revenue Code and the Employee Retirement Income Security Act (ERISA) of 1974, as amended, are the basis of pension law today. Employers make the decision to sponsor a plan and choose the features that it will include, taking into account that workers may have different preferences for pensions in comparison with other forms of compensation such as cash wages and health insurance. Workers also make numerous employment-related decisions over the course of their career, such as where to work, how much to work, and whether to change jobs, that can affect their ability to earn pension income. They also make decisions about how much to save for retirement and whether to preserve funds distributed from their plans. 
The result of employer and worker interactions in the marketplace is that not all workers will earn pension income and receive it in retirement. Among the most important reasons that employers sponsor pensions are the need to attract and retain a productive workforce and the tax advantages associated with pensions. Pensions can be a means of providing deferred compensation that may encourage workers to make a long-term commitment to the employer, thus reducing turnover and making for a more stable, productive workforce. But in deciding whether to offer a pension, companies must assess the nature of their particular workforce to determine if offering pensions is a necessary employment inducement. For example, some workers may view pensions as less important than cash wages or other benefits, particularly health insurance. For such workers, the employer may have little incentive to offer a pension. Employers also choose to sponsor pension plans because of the favorable federal tax treatment of pension contributions and investment returns. This tax treatment, particularly the deferral of taxation on invested income, is especially attractive to those facing higher marginal tax rates, such as some business owners and higher-paid employees, and can be an important incentive to sponsor a plan. Employers also consider the benefits of offering a pension plan in comparison with its overall cost. The major cost of the pension to the employer will depend on the contributions necessary to finance or fund the pension. Other costs involve the administration of the plan, such as record keeping, calculation of benefits, outside administrative help and advisers, communication with employees, investment management fees, and compliance with government rules and regulations. The result of weighing the benefits and costs of offering a plan is that not all employers will find it desirable to sponsor a pension plan. 
For example, compared with medium and large employers, small employers are less likely to sponsor a pension plan. Small businesses may face greater uncertainty, especially with regard to profitability, and may face cost pressures that can affect their ability to offer compensation packages that compare favorably with those offered by larger, more stable firms. While small businesses often cite the cost of pensions as an obstacle to sponsorship, surveys suggest that the firm’s lack of profitability and employee preferences are also important obstacles. An employer has discretion to determine which workers will be covered by its pension plan, and the employer’s plan design decision may result in certain types of workers’ not having the opportunity to participate. In designing the plan, the employer may cover employee groups on the basis of objective business criteria, such as pay (hourly or salaried), job location, or job categories. An employer may have one plan to cover a wide range of categories of workers, or it may have separate plans for different groups depending on business objectives. The employer is also bound by a federal rule on eligibility that covers all pension plans. Under this rule, a pension plan may exclude employees younger than age 21 or those who have less than one year of service from participating in the plan. Plans that seek tax-qualified status must also satisfy a set of “nondiscrimination” rules that seek to ensure that the plan design does not exceed certain limits in the extent to which it favors highly compensated employees in participation and benefits. Even so, in addition to age and service requirements, the nondiscrimination rules may permit a firm to exclude between 30 and 80 percent of the non–highly compensated workers from the plan. A worker can be offered either a defined benefit plan or a defined contribution plan. Some workers may participate in both types of plans if their employer offers more than one type of plan. 
Figure 1 shows that defined contribution plans account for most of the growth in pension plan participation since the mid-1970s. Under the typical formula used for defined benefit plans, the annual (or periodic) increment in benefits earned (benefit accrual) tends to increase over the worker’s career with the employer, which makes this type of plan advantageous for workers who stay with one employer over their working careers. Under a defined contribution plan, the benefit accumulation each period may fluctuate over the course of the worker’s career; frequently, however, such accounts are depicted in terms of an average or steady return over the worker’s tenure with the employer, making the accumulation pattern more even in comparison with a defined benefit plan. This means that younger or shorter-tenured workers may have higher benefit accumulations compared with the benefits they would accrue under a traditional defined benefit plan. Employers make other decisions about how pension benefits will accrue and be distributed. These decisions are subject to legal requirements. ERISA sets limits on annual contributions and benefits that qualified retirement plans may provide for each participant. These requirements are generally intended to limit the tax benefits provided through pensions, particularly to highly compensated individuals. Separate limits exist for defined benefit and defined contribution plans. In addition, employers must ensure that their plans comply with nondiscrimination rules that seek to balance benefit accruals of highly paid participants with those of non–highly paid participants by specifying the extent to which the benefit accruals of, or contributions made for, highly paid workers can exceed those of non–highly paid workers. Vesting provisions specify when workers acquire the irrevocable right to pension benefits. 
ERISA requires a plan to adopt vesting standards at least as liberal as one of the following schedules: full (or “cliff”) vesting after five years or gradual vesting over seven years, except that matching contributions must fully vest within three years or gradually vest over five years. These rules affect how and when pension benefits will be paid out to workers. Pension plans provide for distribution of accrued benefits in the event of the worker’s retirement, death, disability, or other severance of employment. Present law limits the circumstances under which plan participants may obtain preretirement distributions. Defined benefit plans typically provide benefits in the form of an annuity, which provides benefits throughout the period of retirement, and generally have age and service provisions that determine when an employee becomes eligible for receipt of benefits. Employers may also allow their workers to elect to receive pension payments as a lump sum. Because defined contribution plans are not required to offer annuities, lump sum distributions are typical and raise concerns about whether pension benefits will be preserved throughout retirement. The decisions that workers make also play an important role in determining how much pension income they will earn and receive in retirement. When a worker accepts employment, he or she accepts a compensation package that may or may not include a pension. Many workers may prefer cash wages or other benefits, such as health insurance, to pension benefits. The extent to which the worker values the pension component of compensation depends on many individual factors, including how aware he or she is about the need for future retirement income. Some workers also decide how long to remain employed by the plan’s sponsor, and this decision determines whether they will earn pension income. Workers who stay with a plan sponsor for a number of years are more likely to meet the vesting requirements and to accrue benefits. 
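The cliff and graded vesting schedules noted above can be sketched as simple functions of years of service. The sketch below is illustrative only: the function names are hypothetical, and the 20-percent-per-year steps for the seven-year graded schedule (beginning after three years of service) are assumed for illustration, since the report describes the schedule only as "gradual vesting over seven years."

```python
def cliff_vested(years_of_service: int) -> float:
    """Five-year 'cliff' vesting: nothing is vested before the fifth
    year of service; everything is vested thereafter."""
    return 1.0 if years_of_service >= 5 else 0.0


def graded_vested(years_of_service: int) -> float:
    """Seven-year graded vesting, assuming 20% vests after 3 years of
    service and an additional 20% each year, reaching 100% at 7 years."""
    if years_of_service < 3:
        return 0.0
    return min(1.0, 0.2 * (years_of_service - 2))
```

Under either schedule, a worker who separates after two years forfeits all employer-provided accruals, which is why vesting rules matter most for younger and shorter-tenured workers.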
For some plans, such as 401(k) plans, workers must also decide whether to participate, how much to contribute, and how to invest the assets in the plan. Workers who exhibit less attachment to the workforce may be less likely to become covered and participate in the plan. Even if a worker earns pension benefits, he or she must make decisions that determine whether these savings will contribute to their standard of living in retirement. When workers become eligible to receive distributions from a plan—either preretirement or upon retirement—they are faced with a choice of whether to preserve the distribution in a form that could provide income over their remaining lifetime, such as by transferring the funds to an Individual Retirement Arrangement (IRA) or choosing an annuity. The option of cashing out a lump sum distribution from a pension plan without rollover to an IRA raises concerns about future retirement income. Lump sum distributions can have advantages, because they allow flexibility for workers who have high-priority needs such as medical treatment, purchasing a home, or investing in a business. Lump sum distributions may also make sense when the amount is small and can be invested more profitably elsewhere. The potential disadvantage of lump sum distributions is that the assets may not be preserved for retirement income, as would be the case with a rollover to an IRA or purchase of an annuity. However, the importance of the lump sum issue to retirement income adequacy is the subject of debate and continuing research. Some see a problem given the number of workers taking preretirement lump sums without rollover to an IRA. However, some research has concluded that the impact of this practice on retirement income is very small, since these workers tend to have small account balances. Other research shows that larger sums generally are preserved through rollover into an investment account and that the proportion of workers cashing out lump sums is declining. 
Under a voluntary private pension system, the linkage between work, pension coverage, and the receipt and level of pension income in retirement is complex and depends on an array of factors, such as employer plan sponsorship and plan design, the framework of government rules, and worker decisions and choices over a lifetime. The result of employer and worker interactions in the marketplace is that not all workers will earn and receive pension income in retirement. Research suggests that some of the demographic characteristics of those who lack pension income in retirement are similar to the characteristics of workers who lack pension coverage during their working years. For example, those without pension income in retirement are more likely to be single, to be women, and to have low levels of education. But data on pension coverage are only a partial indicator of future pension receipt. The receipt of pension income involves factors that span a worker’s career, and it is difficult to predict whether any particular worker currently in the labor force will ultimately receive a pension benefit. However, available research suggests that those who accumulate no pension income, or relatively low pension income, are more likely to include the following: Workers employed by small firms. Compared with medium and large employers, small employers are less likely to sponsor a pension plan. As table 1 shows, the pension sponsorship rate drops dramatically as firm size gets smaller—86 percent of firms employing more than 1000 workers offer pensions, while only 13 percent of firms with fewer than 10 employees offer pensions. Figure 2 illustrates that worker participation in pension plans is lower for those employed by small firms. Workers employed part time or part year. Employers are less likely to provide pension coverage to part-time, seasonal, and contingent workers. 
For example, recent data show that about 60 percent of workers employed full time and year round have some form of pension coverage, but only 21 percent of part-time workers have pension coverage. Workers with low earnings. Low earners are less likely than middle and high earners to be offered a pension plan and participate when a plan is offered. As figure 3 shows, pension participation varies by earnings levels ranging from over 70 percent participation for the top earning group to about 30 percent for the lowest earners. For those who are participants, some plans that are integrated with Social Security permit a reduction in pension benefits for the lowest earners to offset their proportionally higher Social Security benefits. Workers who frequently change jobs over the course of a career. Even “covered” workers who frequently change jobs can fail to accrue pension wealth for a significant fraction of their working lives owing to eligibility rules or to vesting rules and the resulting forfeiture of nonvested contributions or accruals. In addition, under defined benefit plans, the annual benefit accrual may be small relative to that for longer-service workers because of the age- and service-weighted features used in these plans. Finally, many plans provide for lump sum cash-outs of accounts or accruals, which often are not rolled into other retirement savings vehicles. Workers who place little value on saving. Some workers, either by preference or from lack of knowledge, may not be predisposed to saving or to committing to saving over the long term for retirement. The determinants of saving behavior are not completely understood, but it appears that inadequate retirement saving occurs at all income levels (see app. 2). Concerns remain about the ability of workers with these characteristics to earn pension income and receive it in retirement. 
The federal government has several policy tools to provide incentives for expanding pension coverage, and various reforms to pension rules have been enacted with the aim of protecting and improving pension benefits for workers. Efforts to further improve coverage and benefits generally involve incremental reforms within the existing framework of the voluntary pension system. Traditional reforms to the voluntary, single-employer-based pension system may have limited potential to significantly expand pension coverage and improve benefits for workers who traditionally lack pensions. Reforms aimed at encouraging plan sponsorship have focused on improving tax incentives and reducing the burden of pension regulation on small employers, but the effect of reforms aimed at increasing pension sponsorship and coverage may be offset by other policy actions. Also, numerous proposals attempt to directly affect pension coverage and benefits by revising the framework of rules governing pensions. Past reforms to these rules, such as improved vesting, and trends in plan design, such as the enhanced portability and accrual patterns associated with defined contribution plans, suggest that more workers and their spouses could receive pension income in the future. But the responses of employers and workers to further rule revisions may offset some of the revisions’ intended effect. Some analysts question whether additional reforms to the voluntary, employer-based pension system can significantly expand pension sponsorship and increase coverage for workers traditionally lacking pensions and improve benefits for workers with pensions. Much of the pension policy debate is concerned with the issue of how to increase pension plan sponsorship, particularly among small employers, as a basis for fostering increases in worker coverage and participation and for providing opportunities to earn pension benefits. 
The major policy tools to encourage pension sponsorship include increasing the tax preferences for pensions and simplifying pension regulations, and these tools are aimed at making it easier for employers to decide to sponsor plans. Tax incentives are an important tool to encourage employers to provide pensions. The success of tax incentives to encourage pension sponsorship has been questioned, however, in part because data show that only about half of the workforce is covered by a pension. At least two important factors may limit the effect that tax incentives provide for pension sponsorship. First, tax regulations limit employers’ ability to direct tax preferences to the higher-paid employees who likely most value pensions. As a result, recent pension reform efforts typically have been aimed at relaxing these limits on pension tax preferences. Second, marginal tax rates have been lower in recent decades, which may have reduced the value of pensions to workers and thus the incentive for employers to sponsor or expand pensions. The progressive structure of income tax rates, that is, levying higher marginal tax rates as income increases, makes the benefits of the tax preference for pensions relatively greater for higher-income workers who pay higher marginal tax rates than for lower-income workers. Thus, this tax preference provides an incentive for owners and officers of firms to sponsor a pension plan for themselves and their higher-income employees. In turn, because sponsors may also want to provide pension benefits for other workers in the firm, and because pension law encourages plan sponsors to extend pensions broadly to their work force, these tax incentives may result in increased worker coverage. 
Some pension regulations, such as contribution limits and nondiscrimination rules, are designed to limit the use of tax preferences and to ensure that they do not benefit specific groups of workers, typically the higher paid, disproportionately; however, these regulations may reduce the incentive for employers to offer pensions. As these pension rules are made more stringent, the incentive may be further reduced. Relaxing limits and nondiscrimination rules is viewed by many employers as improving incentives to sponsor and expand plans. While such changes may lead to increased retirement savings by some workers, it is not clear whether they can significantly improve pension coverage and benefits for workers who traditionally lack pensions. Worker advocates may also view such changes as reducing the equity with which pension benefits are provided among workers. In addition, during the last two decades, marginal income tax rates have been lowered, which may have reduced the tax incentive to sponsor pensions. Reagan and Turner studied the pattern of marginal rates during the 1980s to determine whether decreases in marginal tax rates have reduced pension coverage. They found that, on average, a decrease of one percentage point in the marginal tax rate is consistent with a decline of 0.4 percentage points in the worker coverage rate. Thus, they conclude that declines in marginal tax rates appear to have lessened the incentives for plan sponsorship. Some reforms have sought to simplify the regulations imposed on qualified pension plans, so that business owners will be more likely to sponsor plans. Government involvement in pensions generally seeks to promote protection of employee benefit rights. Over time, however, with the enactment of new legislation and subsequent regulations, pensions have become more complex and costly to administer. 
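The Reagan and Turner estimate can be read as a simple linear rule of thumb relating changes in marginal tax rates to changes in pension coverage. The sketch below is purely illustrative (the constant and function names are hypothetical, and linearity is assumed only for small changes):

```python
# Reagan and Turner estimate: a 1-percentage-point decrease in the
# marginal tax rate is consistent with a 0.4-percentage-point decline
# in the worker coverage rate (assumed linear for small changes).
COVERAGE_PP_PER_TAX_PP = 0.4


def coverage_change_pp(tax_rate_change_pp: float) -> float:
    """Estimated change in the worker pension coverage rate
    (percentage points) for a given change in the marginal tax
    rate (percentage points)."""
    return COVERAGE_PP_PER_TAX_PP * tax_rate_change_pp
```

Under this rule of thumb, for example, a 5-point cut in marginal rates would be consistent with roughly a 2-point decline in the coverage rate.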
Employers often argue that the burden of complying with pension regulations is excessive to the point of discouraging plan sponsorship, thus limiting the opportunity to increase coverage. The cost of sponsoring a pension plan can be an important deterrent to sponsorship in the small business sector. As a result, there have been calls for “pension simplification” to reduce the administrative complexity and cost of pensions while retaining the flexibility to design pensions that meet employers’ needs. Proposed solutions generally involve reducing or eliminating various requirements with which sponsors must comply. Worker and public policy advocates, however, seek plan designs that improve worker coverage and benefits. Policymakers have sought to balance these competing demands by adopting reforms that reduce the legal and regulatory requirements on plan sponsors if they adopt specific plan designs that expand coverage to more workers and specify employer contributions. Two examples of pension simplification reforms are the creation of the Simplified Employee Pension (SEP) and the Savings Incentive Match Plan for Employees (SIMPLE). Created in 1978, a SEP is essentially an IRA that an employer provides to each eligible employee. The employer is subject to minimal reporting requirements and is not subject to nondiscrimination rules. Although employers are not required to contribute to an employee’s SEP, when employer contributions are made they must be distributed as a uniform percentage of pay to all employees. In 1996, Congress also instituted a new plan design, SIMPLE, that allows workers to defer a portion of their salary. While SIMPLEs are also exempt from certain nondiscrimination rules and reporting requirements, the employer must match the employee’s elective contributions according to a specified formula or provide a 2 percent contribution for all eligible employees. 
Although plan designs such as SEP and SIMPLE offer some potential for increasing small business plan sponsorship, it is not clear that this general approach to pension simplification can make significant strides toward increasing plan sponsorship further among small employers or increasing worker coverage in that sector. Surveys indicate that some small employers remain unfamiliar with the availability of simplified plan designs. Moreover, the relief from many requirements and the benefits offered by such alternatives may not be sufficient to offset the cost or burden of offering them, and small employers may still be unwilling to sponsor plans given business conditions or worker preferences. In addition to reforms aimed at increasing pension plan sponsorship, various reforms attempt to improve pension coverage and benefits by modifying the framework of rules governing pensions and the process that workers must navigate in earning pension income. Past and proposed reforms to eligibility, coverage, and participation provisions attempt to increase the number of workers who have the opportunity to participate in a pension plan, particularly workers who tend to have lower earnings. Reforms to vesting provisions could provide another means of helping workers gain the opportunity to earn pension income and possibly increase the total amounts that they accrue. Similarly, reforms to the regulatory provisions that set conditions on plan benefit designs, such as limits and nondiscrimination rules, as well as more direct specification of allowable plan designs, could affect how much workers accrue through their pension plans. Reform proposals that affect the distribution of accrued pension benefits tend to revolve around the issue of preretirement lump sums and whether they will contribute to workers’ retirement income. 
Another issue arising from the trend toward defined contribution plans concerns the choices that workers will make regarding their investments and whether they will preserve their accumulations to provide lifetime income in retirement. But it is not clear whether most of these reforms can significantly affect coverage or benefits because of offsetting factors associated with employer or worker behavior. Plan eligibility provisions allow the employer to limit participation among younger workers or among those who do not work full time; further restrictions on these provisions could provide these workers with the opportunity to participate in a plan. However, employers may have little incentive to extend eligibility to workers with generally higher turnover, and changing these provisions could raise compensation costs or conflict with worker preferences in compensation. Because pension plans are defined for specific employee groups, job locations, or job categories, requiring employers to expand coverage and give greater numbers of workers the opportunity to participate in a plan may be difficult. As a result, direct efforts to improve coverage may focus on the level of a worker’s compensation by requiring that plans cover more workers who are not highly compensated. This is typically accomplished by modifying nondiscrimination rules (minimum coverage rules) or nondiscrimination testing rules. But improving coverage in this manner could conflict with the desire of the employer to design its plan to meet business needs and to direct compensation to its most valued employees. Participation reforms seek to ensure that workers who historically have had low participation rates, such as low-income workers, participate in pension plans. Some proposals to encourage participation in 401(k) plans would automatically enroll workers at the time of employment and would require them to choose to opt out of the plan if they so desire. 
Some plans have instituted such provisions, and research suggests that automatic enrollment does increase participation. Research has also shown that individuals enrolled in this way tend to exhibit inertia with regard to the amounts that they contribute, staying with the default contribution rates and, in their investment choices, staying with conservative investments such as money market funds. One automatic enrollment plan design, where workers agree to save a portion of their future salary increases, has shown promising results. Vesting reforms seek to give workers rights to their pension accruals more quickly by making vesting periods shorter or even immediate. Previous reforms to vesting requirements appear to have substantially improved the percentage of plan participants who are vested. The movement toward 401(k) plans, which have immediate vesting of employee contributions, also helps address concerns about younger and higher-turnover workers. In addition, the Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA) provides for faster vesting of matching employer contributions. From an employer’s perspective, shorter or immediate vesting can increase the cost of providing pensions. As a result, the scope for further improvements in vesting may be limited, because employers might prefer to retain or simplify the existing rules and the flexibility that these rules provide to design pensions to meet business objectives and limit compensation costs. Some workers, such as those with lower earnings or who change jobs frequently, are less likely to earn pension benefit accruals. Improving accruals for mobile workers generally means smoothing out the accrual pattern across the factors that are important in a defined benefit plan, namely, age, length of service, and salary. For example, granting higher accruals for early years of service and smaller accruals for higher tenure could foster the goal of providing higher accruals to the lower-paid, shorter-service workers. 
To some degree, the movement toward defined contribution and cash balance plans has alleviated concerns about greater accruals for these types of workers. Other means of inducing more even accrual patterns could include strengthening nondiscrimination rules by altering the tests to encourage greater accruals for individuals who are not highly compensated. Consistent with the theme of pension simplification, some reformers suggest that the pension system should allow fewer plan designs. However, the goal of providing more even accruals for all workers can conflict with the desire of employers for flexibility in benefit design and their ability to direct compensation to their most valued employees. Preservation reforms address the issues of preretirement lump sum distributions and spousal rights in defined contribution plans. Workers who roll over a lump sum distribution into an IRA or another defined contribution plan can preserve the funds in a tax-deferred arrangement; this may provide more assurance that the pension saving will be preserved for retirement. As a result of concerns that lump sums may be consumed rather than saved, proposals have been made to place more restrictions on them. One option is to increase the penalty for not rolling the funds over into an IRA or another qualified retirement plan. Another option is simply to require that the funds be rolled over. EGTRRA generally requires a rollover to be automatic unless a participant elects a lump sum. This provision will go into effect when regulations are finalized by the Department of Labor. Such measures could improve benefit preservation, but some research suggests that greater restrictions on the use of lump sums may decrease workers’ willingness to participate in 401(k) plans. Another important issue concerns the rights of spouses regarding distribution from defined contribution plans. 
While defined benefit plans are required to offer an annuity with a provision that the spouse be able to approve the form of distribution, defined contribution plans are not generally required to offer an annuity option. Providing such an option could affect the cost of administering the defined contribution plan. Key factors that affect workers’ benefit security during the preretirement period involve the prudent investment of pension assets and workers’ decisions about distributions from their plans. Pension plans are protected by ERISA fiduciary rules, and most defined benefit plan participants’ benefits are protected by PBGC pension insurance. Although defined benefit plans are subject to a rule that no more than 10 percent of plan assets can be invested in the securities of the employer, this rule does not apply generally to defined contribution plans. In the past and more recently, proposals have been made to apply restrictions on employer stock to all defined contribution plans or specifically to 401(k) plans, with the aim of reducing the risk that participants may bear. However, restrictions on investment in employer securities could reduce opportunities for workers to earn retirement income and make it less attractive for employers to contribute matching funds to 401(k)s. The trend toward defined contribution plans and increasing individual responsibility for retirement raises a general concern with regard to whether workers have sufficient knowledge and information regarding retirement planning and such matters as the investment of plan assets, preserving distributions prior to retirement, and assuring that income will be available throughout the retirement period. Some proposals would allow employers to provide plan participants with investment advice regarding the participant-directed assets in their 401(k) plans from financial service firms that administer such plans. 
However, concerns have been raised that such proposals would not adequately protect plan participants from potential conflicts of interest by investment advisors who also provide other services to their plan. Some pension plans are already acting to ensure that their participants have access to necessary information. The growth of 401(k) plans, increased amounts of information provided through financial and insurance entities, and general economic and social trends may be encouraging workers to increase their knowledge about saving, investment, and retirement. Also, new strategies for improving worker knowledge about retirement planning are being examined. Although a variety of reforms attempt to encourage plan sponsorship and improve pension coverage and benefits, several analysts note that the ability of the voluntary, employer-based pension system to significantly expand pension sponsorship and extend coverage to workers may be limited. In particular, one study concluded that, at best, legislative changes are capable of extending coverage to a quarter to a third of uncovered workers, with actual results likely to be considerably lower. Consistent with such results, some question whether additional reforms will have significant results for workers who traditionally lack pensions, particularly those with low incomes, since these reforms offer only incremental changes to the voluntary, single-employer pension system. As a result, some reformers suggest proposals that move beyond the voluntary, single-employer private pension system. Three broad categories of reform approaches outside the single-employer, voluntary pension system have been advanced to improve worker coverage and retirement income. These categories are (1) pooled employer reforms, (2) universal access reforms, and (3) universal participation reforms. Pooled employer reforms focus on increasing the number of firms offering pension coverage through centralized third-party administration.
Pooled employer plans aim to increase worker coverage and improve pension portability, but there are limits to the receptiveness of employers to pooled employer plans given the employer’s loss of control of plan design and concern with cost and administrative requirements. Universal access reforms attempt to increase retirement savings by making payroll retirement saving accounts available to all workers without mandating an employer contribution. However, these reforms raise concerns about the administrative burden placed on employers and, because the reforms rely on employee contributions, about the difficulty faced by workers, particularly low-income workers, in setting aside money for retirement. Universal participation reforms are intended to ensure coverage and retirement income for all workers by mandating pension availability and participation, similar to the existing Social Security system. Reforms based on universal participation raise concerns about increases in employer administrative burden and about their broad potential economic effects on labor cost. Table 2 provides examples of these three approaches. Existing pooled employer plans, which include multiemployer and multiple-employer plans, cover about 12 percent of all pension plan participants. Proposals advancing the pooled employer model promote establishing these plans in more industries and encourage small employer membership. Advocates of pooled employer plans maintain that the advantages of the plans’ portability, their industry or trade focus, and their low administrative cost make them a viable approach for increasing pension coverage, particularly to employees of small businesses. Others contend that little incentive exists for employers to join a pooled employer plan, as they must sacrifice control of plan design and costs. In the view of these critics, existing alternatives such as 401(k) plans offer portability and low administrative cost and are even easier to administer.
Collectively bargained pooled employer plans exist already in many industries and trades. These multiemployer plans, in which participants can negotiate the plan characteristics, must be jointly governed by management and labor representatives. Since their inception in 1929, these plans have been advanced by labor unions and have developed a variety of benefit structures. Usually, multiemployer plans provide pension coverage to labor union workers from the same industry or trade. Although most are defined benefit plans, multiemployer defined-contribution plans do exist, and hybrid models have developed where the employer’s contribution and the worker’s benefit are both specified. Non-collectively-bargained pooled employer plans, or multiple-employer plans, also exist and are normally administered by a professional or trade association. For example, the Teachers Insurance and Annuity Association and College Retirement Equities Fund (TIAA-CREF) offers a multiple-employer plan organized around education and research professions. Employers, such as member colleges and universities, make contributions for their employees. TIAA-CREF offers a defined contribution plan, in which contributions are accumulated over a career and paid out at retirement, often as an annuity. Proposals advancing pooled employer plans would include both proposals that would facilitate collectively bargained plans and proposals that would advance development of professional and trade association plans. One proposal would create a model small-employer group pension plan with minimal administrative responsibilities. Other proposals would provide tax incentives to employers to encourage participation in pooled employer arrangements. Another proposal would make changes in income tax law to allow professional and trade associations to be treated as employers for purposes of sponsoring pooled employer pensions or health plans for their members.
Advocates of pooled employer plans reason that both employers and employees benefit from the portability and trade focus of this arrangement. The portability of the plans improves worker pension receipt by allowing short-service workers to accumulate pension benefits with different employers. This portability diminishes the effects on pension accruals of company ownership changes and failures, because workers can continue to participate with new or reorganized employers. The trade focus enhances the advantages of portability, because even though workers may change employers, many stay in the same industry or trade. Similarly, employers benefit by having a pool of workers with previous work and training in their industry or trade, and pooled employer plans are likely to have pension features, such as early retirement provisions, to meet the needs of a common industry or trade. Advocates also note that workers in small business, in particular, could benefit from the pooled employer model because small employers generally have high rates of employee turnover and high business termination rates. By lowering the cost of administering a pension plan, pooled employer plans also offer employers a more cost-effective way of providing pensions to their employees. Because they provide economies of scale and reduce employer costs, such plans are easier for some employers to offer. Advocates note that pension administrative costs per employee are normally higher for small employers who have smaller numbers of workers over which to spread implementation and administrative costs. Pooled employer pension plans spread these costs over a larger number of workers. Despite these possible benefits, some pension experts have expressed doubt that pooled employer models can be widely expanded beyond current levels, because pooled employer plans are still dependent on voluntary employer action. 
They note that pooled employer plans have been available for many years, yet small businesses have shown little interest in them. Employers may be less likely to adopt pooled employer plans, because they have little control over plan design and are less able to assure that the plan meets their needs. Further, little evidence exists that proposals such as employer tax credits will lead to adoption of pooled employer plans by businesses without pension plans. Moreover, employers may have little incentive to choose a pooled-employer defined-benefit plan instead of a single-employer 401(k) plan, which also is portable and offers low administrative costs. Recognizing that many employers do not provide pension plans to workers and that some employees with coverage need additional retirement savings, some analysts and policymakers embrace reforms to assure universal access to tax-favored retirement savings accounts such as IRAs. Although legislation has created different IRA types and provisions, workers generally establish IRAs outside the workplace. Proposals that would expand universal access accounts beyond IRAs vary in coverage and in incentive features such as tax credits to encourage employer or employee participation. Many of these proposals seek to provide employees with a payroll-based opportunity for retirement saving. Some form of IRA is currently available to all workers. ERISA introduced the IRA in 1974 as a means of promoting retirement savings for workers without employer-sponsored pensions. Since then, legislation has modified provisions and created new types of tax-advantaged IRAs. Today, traditional IRAs can be purchased with pretax dollars if a person is not covered by a pension plan or if his or her income is less than specified amounts. IRAs can also be purchased with after-tax dollars, regardless of income. For these traditional IRAs, earnings are taxed as income at retirement.
Reforms advancing universal access accounts aim to facilitate increased retirement saving. To increase the likelihood of worker participation, most proposals call for payroll-based accounts. Some would offer universal access accounts to employees regardless of other pension coverage; others would apply only to employees without pension coverage. Some proposals would require employers to establish the accounts, while other proposals would make the accounts available at an employer’s or employee’s election. Also included in proposals is the option of a government-managed payroll account as an alternative for employers, particularly small employers, who want to minimize their administrative involvement with employee accounts. To encourage employee saving, some proposals include incentives such as tax credits and matching of employer contributions. Advocates of universal access accounts reason that requiring such accounts would facilitate employee and employer contributions even without a required employer contribution. They reason that workers are more likely to routinely set aside retirement savings when they have a payroll deduction account and when they receive employer contributions to that account. Further, employers may be more likely to make contributions when there is an existing account. IRA experience may be useful in predicting the effects of universal access accounts. Although an estimated 42 percent of households owned some type of IRA as of May 2001, evidence suggests that IRAs serve more as a parking place for distributions from other tax-qualified retirement savings plans than as accounts for active retirement saving. Rollover contributions from other tax-qualified retirement accounts are estimated to represent more than 90 percent of current IRA contributions. A study of a large sample of individual tax returns found that only about 5 percent of individuals reporting income made a contribution to an IRA in 1995. 
Studies show that low-income workers have the lowest rate of IRA saving and that the rate of contributions to IRA accounts rises as incomes rise. In 1995, only one percent of those with income of less than $10,000, compared with 17 percent of those with income of more than $100,000, contributed to an IRA. Observers note that the low rate of IRA saving by low-income workers is not surprising in that low-income workers have the smallest amount of disposable income for saving. Further, low-income workers obtain the least tax savings from tax-deferred treatment, because they pay the lowest marginal tax rates. However, universal access account proposals that include tax credits or matches by the government or employer, based on the contributions of the worker, attempt to improve saving incentives for lower earners. Critics of universal access reform proposals argue that universal access accounts are not the best way of increasing retirement saving. They suggest that such proposals may increase the administrative burden on employers, particularly small employers, and create numerous small accounts with relatively high administrative expenses. Experts disagree about whether 401(k) plan accounts or IRA accounts have increased personal saving. They note that lower-income workers face lower tax rates and therefore benefit less from the tax-deferred nature of the accounts. In addition, these critics note that such plans shift investment risk to the individual and that lower-income workers have little investment management experience. Some are concerned that individual accounts could supplant existing private pensions, resulting in employers’ feeling less need to offer traditional pension benefits and leading to a possible drop in national saving. The proposals also entail substantial design challenges to ensure that universal accounts are effectively implemented and administered.
These challenges include determining how records would be kept, what investment options and controls would be offered, and when workers would gain access to savings in the accounts. Although reforms requiring universal participation in a pension system are aimed at improving workers’ retirement income, concerns exist about the broad economic effects of such reforms. Three primary types of reform employ universal participation: (1) reforms mandating private pension coverage in addition to Social Security, (2) reforms increasing base-level Social Security benefits, and (3) reforms establishing mandatory Social Security individual savings accounts. Mandatory pension proposals differ in specific provisions but generally require pension coverage and employer contributions for all employees. Under mandatory pension proposals, employers would be required to establish pension accounts and make contributions for workers. Proponents of these reforms suggest that mandatory pensions would increase private retirement saving, particularly for low-income workers, and would take advantage of the existing private pension infrastructure. Proposals mandating employer pensions aim to provide retirement income as a second tier to Social Security, but critics suggest that if these proposals are implemented, they may have adverse impacts on the national economy because of the increased cost of labor and potentially increased layoffs. Several mandatory pension proposals have been suggested. For example, the 1981 report from the President’s Commission on Pension Policy recommended an advance-funded minimum universal pension system (MUPS). The commission recommended that employers establish pension accounts for all employees and contribute a minimum of 3 percent of pay annually. The MUPS proposal required immediate vesting and prohibited integration with Social Security. 
Under MUPS and other mandatory pension proposals, employers would be required to establish pension accounts and make contributions for workers. Another, more recent proposal required employers to provide uniform pension coverage for all employees in a given line of business but allowed for workers with income below a certain threshold to be excluded from employer-sponsored coverage and to instead receive their retirement income from the government. To help ensure employer participation, this proposal offered increased employer flexibility in benefit and contribution limits. Proponents of a mandatory pension system reason that mandatory pensions can take advantage of the existing private pension infrastructure and increase national saving by providing a retirement saving mechanism to more workers. Low- and moderate-income workers represent a disproportionate share of those without pensions, so mandating pension coverage would increase the retirement incomes of these workers, who generally lack retirement income other than Social Security. Because of the low rate of retirement and other savings, particularly for lower-income workers, some proponents of a mandatory pension system believe that mandating pensions would increase personal retirement savings. Mandating pensions would increase pension coverage provided by small employers, where it has been difficult to increase coverage. In addition, a mandatory pension system could take advantage of the existing private sector pension system infrastructure. However, critics of mandatory universal pension proposals suggest that such plans may adversely affect both employees and employers. Mandatory pensions may require workers to receive compensation in the form of pension benefits when they might prefer cash wages, which may be a particular concern of low-income workers. Mandatory pensions would reduce workers’ ability to allocate earnings to other valuable uses, such as health insurance, housing, and education. 
Employees with current pension coverage could be adversely affected if employers chose to reduce benefits to the mandatory minimum. In addition, mandatory pensions could have negative consequences for employers, increasing employers’ costs for pension implementation, administration, and contributions. Mandatory pensions could also restrict employers’ ability to design pensions to meet their business objectives. Such reforms raise concerns about the increase in employers’ administrative burden, as well as potential adjustments to other forms of compensation to offset higher pension costs. Some analysts acknowledge that extending pension coverage and benefits to workers by making the voluntary system mandatory is a difficult option and that it may make more sense to simply modify the existing mandatory Social Security system. One proposed reform involves raising the base level of Social Security retirement benefits. Such a proposal attempts to increase Social Security benefits for low-earning workers, recognizing that they generally lack pension income, have very little retirement savings, and are therefore dependent on Social Security. Proponents of such a proposal cite the simplicity and low administrative cost of increasing the base level benefits, but concerns remain about the potential impact of this approach when a Social Security financing shortfall already exists. Proposals to raise the base level of Social Security benefits try to offset the effect on retirement income of low wage, part-time, or seasonal employment as well as periods of unemployment. These proposals would raise Social Security benefits so that low earners would receive higher replacement of preretirement income. Proposals have different ways of providing the higher benefits for low earners. One option is to revise Social Security’s minimum benefit provision. 
Other options would change the benefit formula for specific workers, and others would count unemployment insurance payments and the Earned Income Tax Credit (EITC) as earnings in computing Social Security benefits. Proponents of increasing base-level Social Security benefits cite the simplicity of using the existing, relatively efficient Social Security system to compensate for the lack of pensions and retirement savings of many low earners. They reason that the workers who would benefit most from this change are those with the least retirement savings and the greatest dependence on Social Security. Critics of these proposals suggest that raising the base benefit level may detract from Social Security’s financial integrity and popular support. Increasing Social Security benefits, even for a limited segment of retirees, would further compound the existing shortfall in Social Security financing. Restoring solvency in light of these benefit increases may require reducing benefits to workers with higher earnings or increasing worker and employer contributions. Some fear that such adjustments might cost the program the support of these higher-income workers, if Social Security came to be viewed as a welfare program. Moreover, increasing Social Security benefits may have implications for private pensions, making employers less likely to want to provide pension benefits for their lower-earning workers. Some current efforts to reform Social Security financing call for the establishment of individual Social Security savings accounts. These proposals seek to partially replace the current pay-as-you-go financing of Social Security in which current contributions are generally used to pay current retiree benefits. Advocates of these proposals suggest that such accounts would increase overall worker retirement income with higher market investment returns and would provide greater worker control of retirement savings.
However, critics question whether individual accounts can increase retirement income, and they counter that low-income workers would benefit the least from such accounts because they have relatively little to contribute and modest investment experience. Individual account reform proposals vary, but they generally allow workers to own and, to varying degrees, manage their own accounts. The proposals would create individual accounts in different ways. Some would finance individual accounts with new contributions, while others would allocate some portion of the current Social Security taxes to fund the accounts. Still others would allow supplementary voluntary contributions to mandatory individual accounts or be based completely on voluntary contributions. Most proposals retain some features of the current Social Security system. One hybrid proposal would completely redesign the Social Security program into a two-tier program, with the second tier consisting of an individual account. Proponents of Social Security individual accounts maintain that such accounts allow workers to invest a portion of their contributions and, with the returns, to fund future retirement benefits. Advocates of Social Security individual accounts point to the potential for increased returns for participants that could result from allowing investment in stocks and bonds. Some advocates indicate that in addition to offsetting the need to raise payroll taxes or cut benefits to restore financial solvency to Social Security, individual accounts could eventually increase the overall retirement income of future retirees. Furthermore, Social Security individual accounts could provide an administrative infrastructure for other retirement savings plans, such as plans based solely on employee payroll deductions. Workers might also become more inclined to contribute an increased portion of their wages to retirement savings if such plans were available. 
Advocates therefore reason that Social Security individual accounts could increase private and national saving and lead to more capital formation. Individual Social Security accounts also have critics. Critics of individual accounts point out that investing in stocks and bonds introduces investment risk that could, in certain cases, result in lower retirement income. Moreover, they argue that individual accounts are unlikely to restore Social Security’s solvency without the need for additional financing through tax revenues, benefit reductions, or government borrowing. Concerns have also been raised about the impact on benefits, in that lower-income workers would have fewer funds going to their individual accounts and would have the least investment experience. Finally, concerns have been raised that employers may redesign their pensions or drop pension coverage if they feel that Social Security individual accounts allow workers to accumulate adequate retirement income. The concern about the low rate of private pension coverage among certain segments of the workforce and the desire to improve pension and retirement income, particularly for lower earners, has led to various proposals to reform the existing voluntary employer-based system, as well as some proposals that move outside that system. However, each type of reform introduces issues that make the likely effects of reform difficult to determine. For example, under the existing system, the effect of policies aimed at improving incentives for plan sponsorship through the tax system or by simplifying pension rules may be limited by other policy actions. The intended effects of changing pension rules may be counteracted by the responses of employers and workers. As a result, additional reforms to the voluntary, single-employer-based system have only a limited ability to significantly expand pension sponsorship and extend coverage and benefits to workers who traditionally lack pensions. 
In considering proposals that move outside the voluntary, single-employer system, employers may find long-standing proposals, such as those that would expand pooled employer arrangements and mandate private pensions, unattractive in part because they may increase compensation costs. While raising the base level of Social Security benefits might be an effective means of addressing some of the concerns about lower-earning workers, such a reform would need to be considered as part of the broader Social Security financing reform discussion. Several pension-related proposals aimed at improving the availability and level of retirement income for lower-earning workers are similar in many respects to current proposals to introduce an individual account-based option into Social Security. The infrastructures of private pensions or Social Security could be modified to provide a universal, payroll-based opportunity to save for retirement. While many lower-earning workers may have difficulty saving out of current income, supplementing a worker’s account through tax credits and contribution matches might increase saving incentives among those with low levels of income and retirement wealth. Such approaches entail cost and design challenges, but it is important to recognize the relationship between concerns about private pension coverage and benefits, and the Social Security policy debate, in any retirement policy reforms that emerge. The outcome of reform efforts will define a new balance between voluntary and mandatory approaches to providing retirement income. We provided draft copies of this report to the Department of Labor and the Department of the Treasury for their review. The Department of Labor had no comment on the report. The Department of the Treasury provided us with technical comments, which we incorporated as appropriate. We are providing copies of this report to Secretary of Labor Elaine L. Chao, Secretary of the Treasury Paul H.
O’Neill, and appropriate congressional committees. We will make copies available to others on request. The report is also available on GAO’s home page at http://www.gao.gov. Please call me on (202) 512-7215 or George A. Scott on (202) 512-5932 if you or your staff have questions. Other major contributors to this report include Kenneth J. Bombara, Timothy Fairbanks, Edward Nannenhorn, Corinna Nicolaou, Roger J. Thomas, and Charles Walter III. The purpose of this appendix is to show (1) how the tax treatment of saving through a qualified pension plan differs from the tax treatment of saving in a regular bank savings account, (2) how the magnitude of the difference depends on the tax rates individuals face, and (3) that the tax treatment of pension saving can be equivalent to exempting the earnings on pension contributions. If a person’s employment compensation is paid as wages, those wages would be taxable income. If he or she then saves some of these wages in a regular bank savings account, the income earned in the account would be taxable each year as it is earned. When funds are withdrawn from the account, no further tax would be owed. If the same employee receives compensation in the form of a contribution to a qualified pension plan, that pension contribution would not be counted as income to the employee at the time of the contribution. In addition, earnings on the contribution would accumulate tax deferred. When the contributions and earnings are withdrawn or distributed, they would be subject to tax at the regular income tax rates applicable at that time. Table 3 shows a hypothetical example of how the tax treatment afforded to pensions can benefit savers. It also shows how the tax benefit from saving in a pension depends on a person’s income tax rate. The example in this table supposes that two people are subject to different tax rates, one to a 15-percent tax rate and the other to a 28-percent rate, throughout their lives. 
Both receive a higher after-tax return from saving through a pension than they would have received in a regular taxable account. In both cases, the value of their pension accounts at retirement is greater than the value of their regular savings account at the time funds are withdrawn. This reflects the effect of taxes not paid at the time of the initial deposit in the pension account and taxes not paid on the earnings in the pension account over time. Even though both individuals must pay tax on the value of the pension account when the funds are distributed, while no additional tax is owed on the funds in the regular savings account, both individuals gain by saving through the pension instead of the regular account. Table 3 also shows that the person with the higher, 28-percent tax rate benefits more from saving through a pension, compared with a regular savings account, than the person with the lower, 15-percent rate. The example in table 3 assumed that the lifetime tax rate—when contributions are made, as earnings accrue, and when funds are withdrawn or distributed—remains constant. When tax rates vary over time, the tax benefits from saving through a pension are greater if the rates that are applicable when contributions are made and as earnings accrue exceed the rates applicable when the funds are withdrawn. In other words, if the tax rate during a person’s working life is higher than the tax rate during retirement, the tax benefits from pension saving will be greater. Conversely, if tax rates are higher during retirement than during a person’s working life, the relative tax benefits are smaller. When tax rates are low during a person’s working life and much higher during retirement, the person might be better off saving in a regular taxable account. Another way to look at the tax treatment of pension savings is to compare it with that of an account in which contributions are taxable but no further tax is owed on earnings. 
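The three tax treatments discussed above—ordinary taxable saving, a qualified pension, and an account in which contributions are taxed but earnings are exempt—can be sketched numerically. The wage amount, return, horizon, and tax rates below are illustrative assumptions, not the figures underlying tables 3 and 4:

```python
# Hypothetical comparison of after-tax accumulations under three tax
# treatments, assuming a constant lifetime tax rate.

def taxable_account(wage, r, years, t):
    """Wages taxed up front; account earnings taxed each year as accrued."""
    balance = wage * (1 - t)           # after-tax deposit
    for _ in range(years):
        balance *= 1 + r * (1 - t)     # annual earnings taxed at rate t
    return balance                     # no further tax at withdrawal

def pension_account(wage, r, years, t):
    """Contribution and earnings tax-deferred; entire distribution taxed once."""
    return wage * (1 + r) ** years * (1 - t)

def exempt_earnings_account(wage, r, years, t):
    """Wages taxed up front; earnings permanently exempt (Roth-style)."""
    return wage * (1 - t) * (1 + r) ** years

wage, r, years = 1000.0, 0.06, 20
for t in (0.15, 0.28):
    gain = pension_account(wage, r, years, t) / taxable_account(wage, r, years, t)
    print(f"rate {t:.0%}: pension yields {gain:.2f}x the taxable account")
```

With a constant lifetime tax rate, the pension and the earnings-exempt account produce identical after-tax balances, and the pension's advantage over the taxable account is larger at the 28-percent rate than at the 15-percent rate, consistent with the discussion above.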
In a Roth IRA, for example, wages are subject to tax when they are earned, but any account earnings can be permanently exempt from tax. Table 4 shows that if tax rates remain constant over time as in the example underlying table 3, the after-tax return from saving through a pension can be equivalent to saving through a Roth IRA. Currently, an active research debate is addressing whether workers and households will achieve adequate retirement income and what role pensions play in that income. Data are generated for the current retired population, and estimates are made for those who will retire in the future. The current status of retirees is typically examined through comparisons with the poverty line or with replacement rates, which relate actual or expected retirement income to the income level at a period of time during the worker’s career. The status of future retirees also can be assessed through estimates of such measures but is increasingly examined in the context of whether workers are accumulating sufficient assets while working (i.e., saving) to assure themselves of a stream of retirement income adequate to meet certain standards or targets. Data on existing retirees recently presented by GAO suggest that those without pension income in retirement are more likely to be in poverty. In 1998, about 4.2 million of 36.6 million retired persons, or 11.5 percent, had total retirement incomes below the poverty line. In addition, about half of those retired (17.6 of 36.6 million) reported that they did not receive income from a pension of their own or from that of a spouse. Of those not receiving pension income, about 21 percent had retirement incomes below the federal poverty line; of those who did receive some pension income, only 3 percent had incomes below the poverty line. 
Furthermore, the study noted that some of the characteristics of those who lack pension income in retirement are similar to the characteristics of workers who lack pension coverage during their working years. For example, those without pension income in retirement are more likely to be single, to be women, and to have low levels of education. However, it is not possible to predict whether any particular worker currently in the labor force will ultimately receive a pension benefit. That is, the linkage between work, pension coverage, and the receipt and level of pension income in retirement is complex and depends on an array of factors, such as employer plan sponsorship and benefit design, the framework of government rules, and worker decisions and choices over a lifetime. Data on the status of current retirees also focus on the replacement rates that are provided via Social Security and pensions. Typically, pension professionals suggest that a worker or family needs approximately 65 to 85 percent of preretirement income to maintain the preretirement living standard. The achievement of this level of income replacement depends significantly on Social Security and pension income and may require income from other sources, such as earnings from employment, home equity, and nonpension saving. Studies show that many workers need to save for retirement beyond the income they can expect from Social Security and pensions. Owing to the tilt of Social Security benefits toward lower earners, it follows that those in lower earnings categories generally need to save proportionately less than those in higher earnings groups to reach an adequate replacement rate. At the same time, workers in lower earnings categories are less likely than higher earners to have pension income in retirement. Research has also focused on the question of whether future retirees will have adequate retirement income. 
In the early to mid-1990s, a number of research studies addressed the retirement income adequacy question and reached different conclusions. Studies by Andrews and the Congressional Budget Office (CBO) reached generally positive conclusions concerning the retirement income status of future retirees. Research by Bernheim reached less optimistic conclusions, finding that a broad range of workers were not saving enough, beyond what they could expect from Social Security and pensions, to assure themselves of an adequate retirement income. More recently, data from the Health and Retirement Study (HRS) have been applied in several studies of retirement income adequacy. In general, the adequacy debate continues, with researchers interpreting the data in different ways. These studies tend to focus on measuring asset (wealth) accumulation in a present value context in which retirement income sources such as Social Security and pensions are represented as asset values. The studies estimate the likely total asset accumulation at retirement by workers in their sample, and some studies may incorporate a target saving rate approach that is analogous to the replacement rate concept. Using HRS data, Gustman and Steinmeier reached positive conclusions about the retirement saving of future retirees and found pensions to be widely distributed among households. However, Mitchell and Moore, also using HRS data, concluded that the majority of households nearing retirement age will not be able to maintain current levels of consumption in retirement without additional saving. They found considerable variation in wealth across the income distribution but also wide variation in wealth among households within a given earnings level. They also found a rather low correlation of wealth to earnings. 
This means that low retirement saving is not strictly a low earnings phenomenon: there are high earners with low retirement wealth and low earners with relatively high retirement wealth. Mitchell and Moore’s results also suggest that although the need to save increases with higher earnings, when households are arrayed according to retirement wealth, those with the lowest wealth face significant risk of inadequate retirement income. Recent research by Engen, Gale, and Uccello provides a different interpretation of Mitchell and Moore’s results, but their findings are consistent with the conclusion that there appear to be different preferences or propensities in the population for accumulating retirement wealth and that inadequate retirement income appears to be associated with low retirement saving by a segment of the workforce. Private Pensions: Key Issues to Consider Following the Enron Collapse. GAO-02-480T. Washington, D.C.: Feb. 27, 2002. Social Security: Program’s Role in Helping Ensure Income Adequacy. GAO-02-62. Washington, D.C.: Nov. 30, 2001. Private Pensions: Issues of Coverage and Increasing Contribution Limits for Defined Contribution Plans. GAO-01-846. Washington, D.C.: Sept. 17, 2001. Retirement Savings: Opportunities to Improve DOL’s SAVER Act Campaign. GAO-01-634. Washington, D.C.: June 26, 2001. National Saving: Answers to Key Questions. GAO-01-591SP. Washington, D.C.: June 1, 2001. Cash Balance Plans: Implications for Retirement Income. GAO/HEHS-00-207. Washington, D.C.: Sept. 29, 2000. Private Pensions: Implications of Conversions to Cash Balance Plans. GAO/HEHS-00-185. Washington, D.C.: Sept. 29, 2000. Social Security Reform: Implications for Private Pensions. GAO/HEHS-00-187. Washington, D.C.: Sept. 14, 2000. Private Pensions: “Top-Heavy” Rules for Owner-Dominated Plans. GAO/HEHS-00-141. Washington, D.C.: Aug. 31, 2000. Pension Plans: Characteristics of Persons in the Labor Force Without Pension Coverage. GAO/HEHS-00-131. 
Washington, D.C.: Aug. 22, 2000. Social Security: Evaluating Reform Proposals. GAO/AIMD/HEHS-00-29. Washington, D.C.: Nov. 4, 1999. Integrating Pensions and Social Security: Trends Since 1986 Tax Law Changes. GAO/HEHS-98-191R. Washington, D.C.: July 6, 1998. Social Security: Different Approaches for Addressing Program Solvency. GAO/HEHS-98-33. Washington, D.C.: July 22, 1998. 401(k) Pension Plans: Loan Provisions Enhance Participation But May Affect Income Security for Some. GAO/HEHS-98-5. Washington, D.C.: Oct. 1, 1997. Retirement Income: Implications of Demographic Trends for Social Security and Pension Reform. GAO/HEHS-97-81. Washington, D.C.: July 11, 1997.
|
Although pensions are an important source of income for many retirees, millions of workers lack individual pension coverage. Since the 1970s, private employer-sponsored pensions have covered only half of the nation's workers. Traditional reforms to the voluntary, single-employer-based pension system have limited potential to expand pension coverage and improve worker benefits. These pension reforms have concentrated mainly on improving tax incentives and reducing the regulatory burden on small employers. Furthermore, efforts to increase retirement savings by restricting the use of lump-sum distributions could limit worker participation in and contributions to pension plans. Three categories of reform--pooled employer reforms, universal access reforms, and universal participation reforms--go beyond the voluntary, single-employer private pension system. Pooled employer reforms seek to increase the number of firms offering pension coverage by creating centralized third-party administration and increasing pension plan portability. Universal access reforms seek to boost savings by offering payroll-based accounts, albeit without mandating employer contributions. Universal participation reforms would mandate pension availability and participation for all workers, similar to the existing Social Security system.
|
SBA provides small businesses with access to credit, primarily by guaranteeing loans through its 7(a) and 504 programs. SBA also makes loans directly to businesses and individuals trying to rebuild in the aftermath of a disaster, and it primarily services these loans directly. Substantially all of the disaster assistance loans have below-market interest rates and repayment terms of up to 30 years. Interest rates on disaster loans vary, depending on the borrower’s ability to obtain credit in the private sector. The President’s fiscal year 1998 budget proposed that SBA begin selling disaster and business loans that the agency was servicing and transition from servicing loans directly to overseeing private-sector servicers. Before its loan asset sales program began, SBA was servicing approximately 300,000 loans, with a principal balance of over $9 billion. About 286,000 of these loans, with a principal balance of $7 billion, were disaster assistance loans. SBA, as well as other credit agencies, is required to account and budget for its credit programs in accordance with the Federal Credit Reform Act of 1990 (FCRA). FCRA was enacted to require agencies to more accurately measure the government’s cost of federal credit programs and to permit better cost comparisons, both among credit programs and between credit and noncredit programs. The act gave OMB responsibility for coordinating credit program estimates required by the act. Authoritative guidance on preparing cost estimates for the budget and conducting loan sales is contained in OMB Circular A-11, Preparation, Submission, and Execution of the Budget. The Federal Accounting Standards Advisory Board developed accounting standards for credit programs. This guidance is generally found in Statement of Federal Financial Accounting Standards No. 2, Accounting for Direct Loans and Loan Guarantees, which became effective in fiscal year 1994. 
This standard, which generally mirrors FCRA and budget guidance, established accounting guidance for estimating the subsidy cost of loan programs, as well as recording loans and loan sales for financial reporting purposes. According to FCRA, the actual and expected costs of federal credit programs should be recognized in budgetary reporting. The accounting standard also requires these costs to be recognized for financial reporting. To determine the expected cost of a credit program, agencies are required to predict or estimate the future performance of the program on a cohort basis. This cost, known as the subsidy cost, is the net present value of disbursements by the government minus estimated payments to the government over the life of the loan or loan guarantee, excluding administrative costs. Figure 1 presents the cash flows included in the subsidy cost calculation for direct and guaranteed loans. FCRA established a special budgetary accounting system to record the budget information necessary to implement credit reform. Loans and loan guarantees made on or after October 1, 1991—the effective date of credit reform—use the (1) program and (2) financing accounts to handle credit transactions. The program account is included in budget totals, receives separate appropriations for the administrative and subsidy costs of a credit program, and records the budget authority and outlays for these costs. The program account is used to pay the associated subsidy cost to the financing account when a direct or guaranteed loan is disbursed. The financing account, which is nonbudgetary, is used to collect the subsidy cost from the program account, borrow from Treasury to provide financing for loan disbursements, and record the cash flows associated with direct loans or loan guarantees over their lives, including loan disbursements, default payments to lenders, loan repayments, interest payments, recoveries on defaulted loans, and fee collections. 
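The subsidy cost defined above—the net present value of government disbursements minus estimated payments to the government over the life of the loans, excluding administrative costs—can be sketched for a simple direct loan cohort. The loan amount, collection stream, and discount rate below are hypothetical, not SBA figures:

```python
# Sketch of a FCRA-style subsidy cost calculation for a direct loan cohort.
# Positive cash flows are government outflows (disbursements); collections
# from borrowers reduce the cost.

def npv(cash_flows, rate):
    """Discount year-indexed cash flows (year 0 = disbursement year)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

disbursements = [100.0] + [0.0] * 15       # year 0: $100 loan disbursed
collections = [0.0] + [9.0] * 15           # years 1-15: below-market P&I payments

# Net cost to the government in each year: disbursements minus collections.
cost_flows = [d - c for d, c in zip(disbursements, collections)]

subsidy_cost = npv(cost_flows, rate=0.05)  # discounted at a Treasury rate
print(f"subsidy cost per $100 disbursed: ${subsidy_cost:.2f}")
```

Because the borrower payments do not fully repay the government in present-value terms, the subsidy cost is positive, which is the amount the program account would need in appropriations.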
Figure 2 shows the flow of program and financing accounts transactions for a direct loan program. FCRA requires that the rate of interest charged by Treasury on lending to financing accounts be the same as the final discount rate used to calculate the net present value of cash flows when estimating the subsidy cost of a credit program. The final discount rate for a cohort of loans is determined based on interest rates prevailing during the period that the loans are disbursed. Once the loans for a cohort are substantially disbursed (at least 90 percent), the final discount rate for that cohort is determined, and this rate is to be used for financing account interest calculations. The same rate is required to be used to calculate subsidy costs and interest on the financing account, so that the financing account will break even over time as it uses its collections to repay its Treasury borrowing. OMB provides tools for agencies to use to calculate interest on the financing account. To estimate the cost of credit programs, agencies first estimate the future performance of direct and guaranteed loans, using cash flow models, when preparing their annual budgets. The data used for these budgetary estimates are generally updated or “reestimated” annually as of the end of the fiscal year to reflect any changes in loan performance since the estimates were prepared, as well as any expected changes in assumptions related to future loan performance. Increases in subsidy costs that are recognized through reestimates are funded through permanent indefinite budget authority. Before SBA could proceed with a loan sale, OMB had to approve it. This approval was based primarily on whether or not the sale was expected to be financially beneficial to the government, meaning that the estimated proceeds were expected to be greater than the estimated value of holding the loans. 
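The break-even property described above can be illustrated with a small simulation: when the rate Treasury charges the financing account equals the final discount rate used in the subsidy estimate, borrower collections exactly retire the account's Treasury debt. All figures here are hypothetical:

```python
# Hypothetical financing-account simulation showing why FCRA requires the
# Treasury borrowing rate to equal the subsidy discount rate.

treasury_rate = 0.05            # final discount rate = Treasury borrowing rate
years = 15
annual_collection = 9.0         # borrower P&I on a $100 direct loan

# Subsidy cost: loan amount minus the discounted value of collections.
npv_collections = sum(annual_collection / (1 + treasury_rate) ** y
                      for y in range(1, years + 1))
subsidy = 100.0 - npv_collections

# The $100 disbursement is funded by the subsidy payment from the program
# account plus borrowing from Treasury.
debt = 100.0 - subsidy
for _ in range(years):
    debt = debt * (1 + treasury_rate) - annual_collection  # accrue, then collect

print(f"remaining Treasury debt after {years} years: {debt:.6f}")
```

If the interest rate charged on the borrowing differed from the discount rate, the loop would leave a residual balance; that residual is exactly the imbalance FCRA's equal-rate requirement is designed to avoid.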
SBA estimated the current value to the government of holding the loans, also known as the “hold value,” in accordance with OMB Circular A-11. The hold value is the expected net cash flows from the loans, discounted at current Treasury rates. This differs from the net book value recorded on SBA’s books, which is the expected net cash flows from the loans discounted using Treasury rates in effect when the loans were disbursed. Therefore, the hold value takes into account changes in interest rates since the loans were disbursed, whereas the net book value does not. Our January 2003 report on SBA’s first five loan sales highlighted accounting anomalies related to its disaster loans and loan sales program. Specifically, SBA incorrectly calculated the accounting losses on the loan sales it disclosed in its financial statements and lacked reliable financial data to determine the overall financial impact of the sales. Further, because SBA did not analyze the effect of loan sales on its remaining portfolio, its reestimates of loan program costs for the budget and financial statements may have contained significant errors. In addition, SBA could not explain significant declines in its subsidy allowance for disaster loans. In response to our findings, SBA took immediate action to begin the process of identifying the deficiencies that contributed to the disaster loan accounting anomalies, including unexplained significant declines in its subsidy allowance. A team of financial experts, including contractors and staff from the Office of the Chief Financial Officer, was assembled to conduct detailed reviews of financial records and systems related to the disaster and loan sales programs. Several diagnostic-type analyses were performed, including detailed reconciliations of the subsidy allowance and testing of alternative versions of the cash flow model used to estimate the cost of the disaster loan program. 
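The distinction drawn above between the hold value (cash flows discounted at current Treasury rates) and the net book value (cash flows discounted at the rates in effect when the loans were disbursed) can be sketched as follows; the rates and cash flows are hypothetical:

```python
# Hypothetical illustration of hold value versus net book value for the
# same expected loan cash flows under different discount rates.

def present_value(annual_cash_flow, years, rate):
    """Discount a level annual cash flow stream (years 1..n)."""
    return sum(annual_cash_flow / (1 + rate) ** y for y in range(1, years + 1))

expected_cash_flow, years = 9.0, 15
rate_at_disbursement = 0.07     # Treasury rate when the loans were disbursed
current_rate = 0.05             # Treasury rate at the time of a sale

net_book_value = present_value(expected_cash_flow, years, rate_at_disbursement)
hold_value = present_value(expected_cash_flow, years, current_rate)

print(f"net book value: ${net_book_value:.2f}, hold value: ${hold_value:.2f}")
```

When interest rates have fallen since disbursement, the same expected cash flows are worth more today than the books reflect, which is why the hold value rather than the net book value is the relevant benchmark for a sale decision.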
In January 2003, SBA hired IBM Business Consulting Services (IBM) to help determine reasons for the abnormal balance in the disaster loan program’s subsidy allowance and to identify recommendations for correcting any deficiencies noted. IBM assisted SBA in a detailed review of SBA’s accounting and budgeting for the disaster loan program and its loan sale procedures. IBM summarized the results of this review in a March 2003 report. According to SBA officials, the core diagnosis of the problems was completed by April 2003 with the submission and analysis of the report from IBM. SBA and its contractors identified four key deficiencies related to the disaster loan program accounting anomalies. These were (1) major flaws in the cash flow model used to estimate the cost of the disaster loan program, (2) errors and inconsistencies in the model used to determine whether sales were beneficial, (3) incorrect loan values used to calculate the results of loan sales, and (4) inconsistencies between the interest rates used to estimate subsidy costs and the interest rates used to determine interest payments to Treasury. The methodology SBA’s cash flow model used to estimate costs for the disaster loan program assumed that a single illustrative loan with characteristics based on overall portfolio averages could serve as a proxy for all loans and reasonably estimate cash flows for an entire cohort of loans, which included sold and unsold loans. This methodology could have produced reasonable cash flow estimates, even considering loan sales, if all loans had similar characteristics. However, this was not the case, and flaws in this methodology became apparent once SBA began substantial loan sales and the loans sold had different characteristics than the loans not sold. For example, the sold loans tended to have longer borrower repayment periods, or loan terms, than the loans not sold. 
Therefore, the sold loans would have had subsidized interest for a longer period of time and would have cost more. Because the single illustrative loan did not take into consideration these differences when estimating cash flows, the model had problems reestimating the cost of the program. In addition to the basic flaw in the methodology, SBA incorrectly calculated the single illustrative loan’s average loan-term assumptions used in the cash flow model. SBA estimated costs of the disaster loan program using average loan-term assumptions of 16 years for business disaster loans and 17 years for home disaster loans. In our January 2003 report, we raised concerns about the validity of these assumptions, as our review of disaster loans sold indicated an average loan term of about 25 years. SBA’s loan-term assumptions were based on the number of loans, rather than the dollar value of loans disbursed, and therefore were straight averages rather than weighted averages. However, SBA found that, for all disaster loans, the average loan term, based on the dollar value of loans disbursed, was 23 years. Therefore, the model, which used loan terms of 16 and 17 years, did not consider cash flows for the full term of the loans. Given that the borrower interest rates on these loans were generally below market rates and less than SBA’s cost of borrowing from Treasury to finance its loans, understating the loan term also understated the program costs. According to SBA, this caused the model to underestimate the cost of the disaster loan program by 6 to 7 percent. Further, during the reestimate process, SBA did not update the estimated principal and interest with actual collection amounts, which resulted in inaccurate data being used to calculate costs. These flaws in the methodology and loan-term assumptions, the use of inaccurate data, and other problems resulted in unreliable subsidy cost estimates and reestimates for the disaster loan program. 
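The averaging error described above—a straight average over loan counts versus an average weighted by dollars disbursed—can be illustrated with a small hypothetical portfolio in which larger loans carry longer terms:

```python
# Hypothetical loans: (amount disbursed, term in years). Larger disaster
# loans tend to have longer repayment terms, so a count-based average
# understates the dollar-weighted average term.

loans = [
    (10_000, 10), (15_000, 10), (20_000, 15),
    (150_000, 25), (300_000, 30),
]

straight_avg = sum(term for _, term in loans) / len(loans)
weighted_avg = (sum(amount * term for amount, term in loans)
                / sum(amount for amount, _ in loans))

print(f"straight average term: {straight_avg:.1f} years")
print(f"dollar-weighted average term: {weighted_avg:.1f} years")
```

In this sketch the straight average is 18 years while the dollar-weighted average is nearly 27 years, mirroring (though not reproducing) the 16-to-17-year versus 23-year gap SBA found.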
Collectively, the problems with SBA’s cash flow model resulted in significant underestimates of the cost of the program, which was ultimately reflected in the negative balance in the disaster loan subsidy allowance. A negative balance would occur either for programs that are expected to be profitable, which the disaster loan program was not, or when the allowance is overspent, meaning that the program cost far more than estimated. When SBA sold loans, it used another model, called the hold model, to estimate the value to the government of holding the loans scheduled for sale until they were repaid, either at or before maturity. The hold model considered the same possible cash flows as the cash flow model used to estimate the cost of the program, including principal and interest collections, prepayments, delinquencies, defaults, and recoveries. However, the hold model differed from the cash flow model because it was constructed using a different methodology. The hold model measured loans individually whereas the cash flow model, as previously discussed, used a single-loan approach. In addition, expected defaults were determined differently, which caused the hold model to produce higher default rates than the cash flow model. Further, the hold model used economic variables and performance indicators, whereas the cash flow model did not. As a result of these differences, the two models produced different results. While the hold model’s conceptual design was superior to the cash flow model, it too contained serious flaws that produced misleading results. For example, the hold model misapplied the assumptions used to determine the amount of recoveries expected on defaulted loans. The recovery assumptions were taken from the cash flow model used to estimate the cost of the program. 
These assumptions were calculated based on actual recoveries as a percentage of the value of loans disbursed and, therefore, should have been applied based on the value of loans disbursed. The hold model, however, erroneously applied this percentage to estimated defaulted loan amounts, thereby calculating a far lower amount of estimated recoveries than was appropriate, which made the value of holding the loans seem much lower. Collectively, problems with SBA’s hold model caused it to undervalue the loans it sold by about 30 percent, according to disclosures in SBA’s fiscal year 2003 financial statements. As a result, at the time of the sales the hold model indicated that it was financially beneficial to sell the loans when in fact it was not. While SBA’s hold model indicated at the time of the sales that SBA had gains on loan sales, SBA concurrently disclosed losses on loan sales in its financial statements based on yet another set of flawed calculations. As we reported in January 2003 and as IBM reported in March 2003, when SBA calculated the results of loan sales for purposes of its financial statements, it incorrectly estimated the portion of the subsidy allowance to allocate to each loan sold in order to calculate the value on its books for the loans it had sold (net book value). For example, when calculating the net book value for the disaster loans that were sold, SBA did not allocate a portion of the subsidy allowance for financing costs associated with lending to borrowers at below-market interest rates. Given that a large portion of the subsidy cost was related to providing below-market borrower interest rates, this omission resulted in a significant overestimate of the net book value of the loans sold and, therefore, a significant overestimate of the losses SBA disclosed in its financial statements related to the sale of its disaster loans. 
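The base mismatch described above—a recovery rate computed as a percentage of dollars disbursed but applied to estimated defaults—can be shown with hypothetical figures:

```python
# Hypothetical illustration of applying a recovery rate to the wrong base.
# The rate is derived from dollars disbursed, so it must be applied to
# dollars disbursed, not to estimated default amounts.

disbursed = 1_000_000.0
historical_recoveries = 50_000.0
estimated_defaults = 120_000.0

recovery_rate = historical_recoveries / disbursed        # 5% of disbursements

correct_recoveries = recovery_rate * disbursed           # consistent base
erroneous_recoveries = recovery_rate * estimated_defaults  # mismatched base

print(f"correct estimate:   ${correct_recoveries:,.0f}")
print(f"erroneous estimate: ${erroneous_recoveries:,.0f}")
```

Because estimated defaults are a small fraction of dollars disbursed, applying the rate to the wrong base sharply understates expected recoveries, making held loans look far less valuable, consistent with the undervaluation SBA disclosed.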
Even though SBA calculated losses for the financial statements, it still operated on the premise that loans were sold at gains when considering changes in interest rates, which the hold model was purportedly designed to do. The final deficiency that SBA identified in its disaster loan program accounting related to inconsistencies in the interest rates it used to estimate its subsidy costs versus those used to calculate its interest payments to Treasury. A direct loan program, including the disaster loan program, funds its lending to borrowers with the subsidy cost it receives through appropriations and from borrowing from Treasury. Because the borrowing is expected to be repaid with collections from borrowers, borrowing is not a budgeted cost to the program and is accounted for in the program’s financing account. FCRA requires that the rate of interest charged to the financing account on the agency’s borrowing be the same as the interest rate used to discount cash flows (discount rate) when estimating the subsidy cost for a program. The equality of these rates is fundamental to achieving the proper balance in the financing account. If subsidy cost calculations are accurate and the proper interest rates used, the financing account will break even over time as it uses collections from borrowers to repay Treasury borrowings. SBA, in coordination with OMB, found that the tools provided to agencies to calculate interest for the financing account did not adjust the amount of interest paid by the financing account while the loans were disbursing. The discount rate used to estimate the subsidy cost is not final until the loans in a cohort are substantially disbursed (at least 90 percent), which, for the disaster program, generally may take at least 2 years. When the loans are substantially disbursed, and the final discount rate is fixed, the reestimate process retroactively adjusts the subsidy costs to reflect the final discount rate. 
SBA and other agencies must make annual interest payments to Treasury while the loans are disbursing, although the final interest rate has not yet been determined. Thus in the early years of a cohort, before the loans are substantially disbursed, an interim interest rate is used to calculate interest payments. However, the tools provided by OMB to calculate interest between the financing account and Treasury do not retroactively adjust prior interest earnings or payments to reflect the final interest rate. This failure to adjust prior interest payments to reflect the final interest rate resulted in excess payments to Treasury and an insufficient balance in SBA’s financing account and subsidy allowance, since the interest payments impact both. This omission in the tools OMB provides to all agencies that disburse or guarantee loans could result in a disconnect between the amounts required to be earned from or paid to Treasury to make the financing account whole, and the actual amounts earned or paid. Consequently, agencies’ financing account balances and subsidy allowance may be over- or understated. Following its analysis and identification of deficiencies, SBA developed a new cash flow model to estimate the cost of the disaster loan program. This improved the reliability of the disaster program cost estimates and corrected the abnormal balance in the subsidy allowance for the disaster loan program. In addition, SBA analyzed its prior interest payments to determine the effect of using inconsistent interest rates to calculate its estimated subsidy costs and interest payments to Treasury, and implemented a different approach to reestimate program costs. These corrective actions helped SBA achieve an improved audit opinion on its fiscal year 2004 and restated fiscal year 2003 financial statements. In fiscal year 2003, SBA’s contractor developed a new cash flow model to calculate subsidy cost estimates and reestimates for the disaster loan program. 
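The true-up gap described above—interest paid at an interim rate during disbursement with no retroactive adjustment once the final rate is fixed—can be sketched with simple interest on hypothetical figures (the balance, rates, and timing are illustrative assumptions):

```python
# Hypothetical illustration of excess interest paid to Treasury when
# interim-rate payments are never adjusted to the final discount rate.

borrowed = 100_000_000.0
interim_rate = 0.060          # rate used while the cohort was disbursing
final_rate = 0.052            # final rate, fixed once ~90% disbursed
years_at_interim = 2

paid = borrowed * interim_rate * years_at_interim   # interest actually paid
owed = borrowed * final_rate * years_at_interim     # interest the final rate implies
excess_payment = paid - owed

print(f"excess interest paid to Treasury: ${excess_payment:,.0f}")
```

When the interim rate exceeds the final rate, as in this sketch, the financing account overpays Treasury; when it is lower, the account underpays. Either way the account's balance and the subsidy allowance end up misstated until a retroactive adjustment is made.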
In contrast to the prior model’s flawed single loan approach to estimate the cash flows, the new model was designed to estimate cash flows individually for each loan. This design facilitates calculating loan values for loans sold to determine gains or losses on loan sales and, if SBA schedules additional loan sales, could also be used to calculate loan values for determining whether these sales would be financially beneficial to the government, thus negating the need for a separate hold model. Application of the model enabled SBA to retroactively determine the results of its prior loan sales and to correct the abnormal balance in its subsidy allowance. During the development of the new cash flow model, SBA analyzed the available disaster loan data, including loan performance information, loan terms, disaster type and magnitude, and regional information, as well as certain economic data, such as unemployment, gross domestic product, and interest rates. Based on these analyses, the data that best predicted default and prepayment behavior—two important cash flows for the disaster loan program—were selected to use as variables in the model. The model segments the loan portfolio into groups of loans based on the final variables selected, which were (1) the age of the loan, (2) the type of borrower (home or business), (3) the size of the loan, (4) the type of disaster loan (economic injury or physical damage), and (5) the length of the grace period. Based on these variables, there are a total of 162 groups of loans used to segment the disaster loan portfolio. On a loan-by-loan basis, the cash flow model estimates the expected principal and interest payments based on loan contract terms. Then the model estimates deviations from these expected payments for delinquencies, charge-offs, and prepayments. These deviations are calculated based on historical averages of loan performance for each group of loans. 
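One way to picture the segmentation described above is a key function over the model's five variables, with group-level historical averages supplying the expected deviations. The banding thresholds, field names, and rates below are hypothetical, since the report does not specify them:

```python
# Sketch of loan segmentation by the five model variables. Each loan maps
# to a group key; expected prepayment and charge-off rates come from that
# group's historical averages.

def segment_key(loan):
    """Assign a loan to its group (all thresholds are assumptions)."""
    return (
        min(loan["age_years"] // 5, 3),                   # loan age band
        loan["borrower_type"],                            # "home" or "business"
        "large" if loan["amount"] > 50_000 else "small",  # loan size band
        loan["disaster_type"],                            # "physical" or "economic_injury"
        "long" if loan["grace_months"] > 6 else "short",  # grace period band
    )

# Hypothetical historical averages: group key -> (prepayment, charge-off).
group_averages = {
    (0, "home", "small", "physical", "short"): (0.04, 0.02),
}

loan = {"age_years": 3, "borrower_type": "home", "amount": 25_000,
        "disaster_type": "physical", "grace_months": 3}
prepay_rate, chargeoff_rate = group_averages[segment_key(loan)]
print(f"expected prepayment {prepay_rate:.0%}, charge-off {chargeoff_rate:.0%}")
```

Crossing the bands for all five variables is what yields the report's 162 loan groups; each loan's contractual cash flows are then adjusted by its group's average deviation rates.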
Lastly, the model estimates recoveries on charged-off loans based on historical averages. The model’s methodology is based on the assumption that the behavior of loans in the future, taking several important loan characteristics into account, will be similar to loans in the past. However, as discussed later, if future loans are made to substantially different types of borrowers, such as those with better or worse financial strength, or have substantially different loan terms, changes to the model would be required to correctly consider these new characteristics in the cash flow estimates. Throughout the development of the model, SBA documented several analyses of the performance and characteristics of its disaster loans and the model’s ability to predict loan performance. In addition to its own analyses, SBA contracted with Ernst & Young LLP (E&Y) to conduct an independent review of the model. E&Y reviewed the model documentation and computer code, as well as SBA’s testing and validation analyses. E&Y summarized its observations and findings in two reports issued in November and December 2003. E&Y noted that the model can be expected to perform reasonably well for reestimates of existing loans and to produce stable estimates over time given the model’s emphasis on long-term averages. In addition, E&Y stated that given the limitation of what can be known about future loans, the model takes a reasonable approach to estimating costs of future loans for budget purposes. E&Y also noted that the model achieves SBA’s objective to consistently value individual loans for reestimates and loan sales. Based on our review of the model, its documentation, and the reports issued by E&Y, we concluded that the new model provides a sound basis to estimate costs and improved SBA’s ability to prepare more reliable and reasonable cost estimates for the disaster loan program. 
When SBA used this new model for the first time to reestimate the cost of the disaster loan program, it resulted in a reestimate indicating increased costs of over $1 billion as of the end of fiscal year 2003. As shown in figure 3, the adjustment to increase these costs on SBA’s books helped bring the disaster loan program’s subsidy allowance to a positive balance and more in line with expectations for this type of subsidized program. In SBA’s fiscal year 2004 financial statements, the subsidy allowance was reported to be about $613 million, or about 20 percent, of the $3 billion outstanding balance of the disaster loan program. Given that the estimated cost of the program generally ranges from $16 to $36 for every $100 that SBA lends, this balance is within the expected range. SBA also used the new model to recalculate the results of its prior disaster loan sales and determined that the sales resulted in losses, or increased budgetary costs, of over $900 million. To resolve the inconsistency in the interest rates used to calculate interest on its financing account borrowing from Treasury and the interest rates used to discount cash flows when estimating subsidy costs, SBA completed a detailed analysis of its interest transactions with Treasury. SBA recalculated what its interest payments would have been based on the final (rather than the interim) interest rates and determined that it overpaid Treasury by about $128.6 million and $5.6 million as of the end of fiscal years 2003 and 2004, respectively. These amounts were included in SBA’s reestimates for the disaster loan program and corrected SBA’s interest transactions for the fiscal years 1992 through 2003 cohorts. Also in fiscal year 2004, SBA implemented a new approach to reestimate costs, called the balances approach. 
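Under the balances approach, the reestimate is derived by comparing the resources in the financing account, including the present value of expected future cash flows, with what the account must ultimately pay Treasury. A stylized sketch, with entirely hypothetical figures:

```python
# Stylized sketch of the balances approach (all figures hypothetical):
# rather than adjusting individual past transactions, compare the account's
# resources with its obligation to Treasury and book the difference.

def present_value(cash_flows, rate):
    """Discount expected future net cash inflows at the cohort's rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

borrowing_outstanding = 450_000_000   # what the account owes Treasury
cash_on_hand = 220_000_000            # resources already in the account
expected_inflows = [80_000_000, 70_000_000, 60_000_000, 40_000_000]
discount_rate = 0.05

resources = cash_on_hand + present_value(expected_inflows, discount_rate)

# Positive -> upward reestimate (more subsidy needed); negative -> downward.
# Because cash_on_hand already reflects any over- or underpaid interest,
# the comparison self-corrects for the interest rate inconsistency.
reestimate = borrowing_outstanding - resources
print(f"reestimate: ${reestimate:,.0f}")
```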
Because the balances approach determines the amount of the reestimate based on a comparison of resources in the financing account and expected future cash flows, the approach automatically adjusts the financing account and subsidy allowance for any inconsistency in interest rates going forward. Other agencies that disburse or guarantee loans would also be affected by this interest rate inconsistency, which could result in misstatements in their accounts. However, the significance of this issue cannot be determined without extensive analyses similar to SBA’s analysis because a number of factors influence how the inconsistency would impact other agencies’ accounts, including balances in financing accounts and the length of time a program takes to substantially disburse its loans. OMB has notified agencies of the flaw in the tools and outlined plans to issue a comprehensive revised reestimate tool that resolves this problem. Until updated tools are provided, agencies will continue to make incorrect interest payments that could result in financing accounts having excess or insufficient funds and misstatements in financial statement reporting accounts for credit programs. SBA’s corrective actions helped it achieve an improved audit opinion for its fiscal year 2004 financial statements. Earlier, SBA’s auditor withdrew its unqualified opinions on SBA’s fiscal years 2000 and 2001 financial statements and issued a disclaimer of opinion for fiscal year 2002, in part, because of issues identified in our January 2003 report. While progress was made in addressing these issues during fiscal year 2003, the auditor also issued a disclaimer of opinion on SBA’s fiscal year 2003 financial statements. 
The auditor reported that because SBA was late in completing reestimates and preparing its financial statements, among other things, the auditor did not have adequate time to resolve reservations related to SBA’s disaster loan program, including abnormal balances in the subsidy allowance and a difference in interest rates SBA used to estimate subsidy costs and calculate interest payments to Treasury. Subsequently, SBA continued to implement its corrective actions, which the auditor assessed as part of the fiscal year 2004 financial statement audit. SBA received a mixed opinion—a combination of unqualified and qualified opinions—on its fiscal year 2004 financial statements, which represented an improvement over the disclaimer it received for fiscal year 2003. In addition, SBA received an unqualified opinion on its restated fiscal year 2003 balance sheet. SBA’s auditor did not cite any issues related to previously identified problems with the disaster program or the new disaster cash flow model in these audit opinions. In addition to implementing the corrective actions to resolve the accounting anomalies, SBA also implemented new policies and procedures to help ensure that future loan program cost estimates will be reasonable, including (1) the development and implementation of new standard operating procedures for calculating reestimates; (2) the preparation of documentation to support the rationale and basis for key aspects of the cash flow model; (3) a process to coordinate the preparation of cost estimates between budget, accounting, and program staff; and (4) a revised reestimate approach. However, additional documentation of the new cash flow model would help ensure proper operation and maintenance of the model. Further, over time it will be important for SBA to continue to assess the model’s ability to predict loan performance. 
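Such routine assessment of predictive ability amounts to backtesting: comparing predicted rates with actual outcomes group by group and investigating significant variances. A minimal sketch, with a hypothetical tolerance and made-up rates:

```python
# Simple backtesting sketch (hypothetical tolerance and rates): flag loan
# groups whose actual performance diverges from the model's prediction by
# more than a chosen tolerance, for follow-up analysis.

def flag_variances(predicted, actual, tolerance=0.02):
    """Return groups where |predicted - actual| exceeds the tolerance."""
    return {
        group: (predicted[group], actual[group])
        for group in predicted
        if abs(predicted[group] - actual[group]) > tolerance
    }

predicted_default = {"home/small": 0.04, "business/large": 0.08}
actual_default = {"home/small": 0.045, "business/large": 0.12}

flagged = flag_variances(predicted_default, actual_default)
print(flagged)
```

A flagged group might signal a data problem, a changed borrower population, or changed loan terms, each of which would call for a different model revision.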
In addition, there may be opportunities to improve the model, as well as simplify the estimation process, that warrant further consideration by SBA. Lastly, additional procedures to test the disaster data used in the model will help ensure their reliability. During fiscal years 2003 and 2004, SBA enhanced its policies and procedures by implementing several of the internal control practices identified in federal accounting guidance that will help it ensure that future cost estimates are reliable and reasonable. SBA developed and implemented standard operating procedures for calculating its reestimates based on federal accounting guidance. These procedures established an internal review process and standardized steps that must be performed and documented as part of the reestimate process. Steps included ensuring that the correct cash flow model files are used, verifying that the model appropriately reflects the program’s structure, documenting any technical changes to the model, updating actual data and estimated cash flows in the model, and reviewing the reasonableness of the estimated cash flows. The procedures also call for the cash flow model to be reviewed by an outside party. The supporting documentation for the reestimates was provided to SBA’s financial statement auditor during the fiscal year 2004 audit. The auditor noted in its report on internal controls that the adherence to a set of standard operating procedures for calculating reestimates, along with improved documentation and an effective internal review process, were critical to SBA’s success in meeting key milestone dates and completing the audit process within accelerated financial reporting deadlines. SBA also established other important internal control practices identified in the guidance. During the development of the new cash flow model, SBA documented key analyses and decisions regarding the model’s methodology. 
For example, SBA compared the characteristics of the loans sold to the loans kept and developed an approach within the new model to take those differences into account when estimating loan performance. It also documented the basis for selecting the model’s methodology and variables, the assumptions and calculations in the model, and results of testing the model’s ability to predict cash flow estimates. This documentation helps support the rationale and basis for key aspects of the model that provide important cost information for budgets and financial statements. SBA has also established procedures to coordinate the preparation of cost estimates among budget, accounting, and program offices. This will help ensure that the estimates are reviewed and prepared with the proper information. Further, as previously discussed, SBA implemented the balances approach for reestimates. This approach will help ensure that SBA’s account balances are in line with expected future cash flows. These practices and the other practices discussed above will help ensure that anomalies such as those we identified during our last review do not go undetected or uncorrected. While SBA resolved its accounting anomalies related to the disaster loan program and made important improvements to its policies and procedures, we found that additional enhancements to internal controls would help ensure the long-term reliability of future cost estimates. Further, strengthening internal controls will help SBA identify potential problems in the future and sustain the progress it has already made. Even though SBA completed substantial documentation for the new cash flow model, we found that this documentation was not sufficient to readily provide for knowledge transfer between staff and contractors to help ensure proper maintenance and updating of the model. 
For example, the documentation does not specify what is done to prepare the data for use in the model and does not always clearly indicate the data sources. In addition, SBA’s documentation to explain the files used to run the model and update the data used in the model was not complete. For example, in SBA’s documentation of the files and steps used to run the cash flow model, out of a total of 19 steps, 6 steps had no explanations of the process and another 2 were not complete and indicated that someone who was no longer employed at SBA was to provide the information. Improved documentation is particularly important because SBA relied on a contractor to help develop the new cash flow model for the disaster loan program and has recently experienced significant turnover in staff responsible for preparing cost estimates. Without complete and detailed documentation on how to maintain the model, update it with additional data, and run it, it will be more difficult for current SBA staff to fully understand the model, which could result in future errors in the cash flow estimates. Thorough documentation of the model is even more important given the complexity associated with its calculation process. E&Y noted in its review, and we agree, that the model’s complexity creates an ongoing challenge related to transparency and maintenance. Because complexity increases the risk of errors occurring, SBA could benefit from continuing to evaluate whether there are opportunities to simplify the estimation process with model revisions or alternative estimation methodologies. There may also be opportunities to improve the model with additional variables. When estimating loan performance, the new model does not use data related to the financial strength of borrowers. Because this kind of information has been shown to be useful in predicting loan performance, such as defaults and prepayments, incorporating this type of information could improve the model’s estimates. 
Further, additional detailed data on borrower financial strength and loan collateral, among other things, may improve the model’s effectiveness for supporting any future loan sales. According to SBA officials, beginning in fiscal year 2003 SBA began collecting credit scores for disaster loan borrowers. Once these newer loans have sufficient historical data, SBA will be able to evaluate the usefulness of these data as a potential variable to predict loan performance. In addition to opportunities to improve the model, SBA could also enhance its procedures to ensure that the model’s estimates reasonably predict future loan performance. While SBA has completed testing of the model’s ability to predict loan performance, it is important that SBA establish a process to help ensure this testing continues routinely and that causes of any significant variances are identified and addressed. For example, as stated earlier, if future loans are made to substantially different types of borrowers or have substantially different loan terms, changes to the model would be required to correctly consider these new characteristics in the cash flow estimates. Routine testing would help identify this type of change. While the new cash flow model provides SBA with a sound approach to estimate costs for the disaster loan program, additional verification procedures would provide better assurance that data used by the model are reliable. Federal accounting guidance requires agencies to accumulate sufficient, relevant, and reliable supporting data that provide a reliable basis for estimates of future loan performance. Because SBA’s old cash flow model used data from SBA’s Main On-Line System for Tracking Evaluation and Response (MONSTER) database which contains summary information, most of its detailed data reliability assessments and reconciliation practices revolved around MONSTER. 
For example, SBA maintains a reconciliation that tracks loans in MONSTER with its general ledger at a cohort level. However, the new model uses data from MONSTER and loan-level data from SBA’s Electronic Loan Information Processing System (ELIPS) database. SBA officials indicated that plans are to continue to move away from using MONSTER. While SBA routinely reconciles its ELIPS database at a high level, as it moves toward using ELIPS data for estimating its disaster program costs, it is important that SBA reconcile and test the data at the level used in the model. SBA took prompt action to identify the deficiencies related to its disaster and loan sale programs with a comprehensive review of its financial records. The corrective actions it then took established a basis for reliable and reasonable cost estimates. At the same time, the complexities associated with estimating costs for these programs will require continued attention. Without enhancements to the model’s documentation, additional procedures to test data reliability, and continued testing and analysis of the model, SBA may find it difficult to fully sustain the progress it has made. Further, improved tools from OMB would help SBA and other agencies ensure proper calculation of interest costs related to their credit programs. We are making five recommendations to SBA and two to OMB. To help ensure that future subsidy cost estimates are reliable, we recommend that the SBA Administrator take the following five actions. Develop additional documentation of the new disaster cash flow model to help facilitate proper operation, maintenance, and updating of the model. Study the value of incorporating additional variables in the new disaster cash flow model, such as detailed information on the financial strength of borrowers. 
Establish policies and procedures to routinely test the new disaster cash flow model’s ability to predict loan performance by comparing the model’s predictions to actual loan performance and to identify and address the causes of any significant variances. Consider possible revisions to the model and/or alternative methodologies that would simplify the estimation process. Establish additional procedures to test and document the reliability of the data used in the new cash flow model for the disaster loan program. To help ensure that agencies make correct interest calculations for financing accounts, we recommend that the OMB Director take the following two actions. Update the tools provided to agencies for adjusting financing account interest transactions once a final interest rate is determined for a cohort. Provide instructions to agencies on making retroactive corrections to financing account interest transactions based on final interest rates for a cohort. In written comments reprinted in appendix II, SBA stated that these were appropriate recommendations and that it already has work underway to address several of them. In written comments reprinted in appendix III, OMB agreed with our recommendations and stated that it would work with agencies to correct interest transactions with Treasury. SBA and OMB also provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Ranking Minority Member of the Senate Committee on Small Business and Entrepreneurship, other appropriate congressional committees, the Administrator of the Small Business Administration, and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9508 or [email protected]. Major contributors to this report are acknowledged in appendix IV. To describe the nature of the deficiencies SBA identified that contributed to the disaster loan program accounting anomalies, we reviewed the report prepared by IBM that summarized the detailed review of SBA’s accounting and budgeting for its disaster loan program and its loan sale procedures performed by SBA and IBM, and a variety of documents prepared by SBA that summarized the issues found. We also reviewed OMB Circular A-11 and FCRA to determine the criteria for calculating interest payments to Treasury on borrowing. We also interviewed SBA and OMB officials, SBA’s financial statement auditor, and a contractor with SBA. To identify the corrective actions taken and assess whether these actions resolved the identified deficiencies, we: obtained and assessed the new cash flow model used to estimate the cost of the disaster loan program and its various supporting documentation; obtained an understanding of how the model works by reviewing SBA’s summary of the behavior equations, the production system documentation, and assumptions made in the model; analyzed the model’s methodology, including choice of statistical technique and variables included in the analysis, and determined that they were appropriate and reasonably related to the prediction of cash flows of disaster loans; replicated certain components of the model, such as the process used to segment the portfolio into groups of loans and predict future loan performance; assessed SBA’s logistic regression used to determine and support the variables used in the model to verify that the variables selected were statistically significant; reviewed (1) SBA’s testing of the cash flow model for biases; (2) SBA’s comparison of the loans 
sold and the loans kept; (3) E&Y’s reports summarizing its independent reviews of the model, including procedures performed, findings, and observations; and (4) SBA’s financial statement auditor’s assessment of SBA’s model, its reestimates, and its revised loan sale loss calculation; obtained SBA’s analysis of its interest payments to Treasury and verified its accuracy; reviewed SBA’s fiscal year 2004 financial statements summarizing the implementation of the balances approach to reestimate costs; interviewed SBA officials, an SBA contractor, and SBA’s financial statement auditor; and interviewed OMB officials to obtain an understanding of their efforts to update the tools agencies use to calculate interest on financing account balances. To determine whether SBA’s new cash flow model and procedures for the disaster loan program provide a reasonable basis for future subsidy cost estimates, we interviewed SBA officials and a contractor to obtain an understanding of SBA policies and procedures for estimating subsidy costs. We reviewed supporting documentation related to its procedures, including the standard documentation template used to support its reestimates for its fiscal year 2004 financial statements, the various documentation prepared to support the model, and documentation supporting the reliability of data from SBA’s computer systems. Based on SBA’s procedures and documentation, we assessed the sufficiency of SBA’s estimation process based on federal accounting guidance that identifies internal control practices that help ensure that future cost estimates are reliable and reasonable. We also reviewed SBA’s financial statement auditor’s reports on internal controls for fiscal years 2003 and 2004. The checking of key components, along with our review of SBA’s documentation and E&Y’s evaluation of the model, provided a sufficient level of understanding to conclude on its approach and ability to produce more reliable and reasonable cost estimates for the disaster loan program. 
To identify any additional steps SBA could take to improve the long-term reliability of its model, we considered additional types of variables that might enhance SBA’s approach. As part of this analysis, we reviewed academic literature on default modeling and discussed alternative variables and modeling techniques with the contractor SBA used to develop the model. Based on these analyses, our assessment of the model, and E&Y’s findings and observations, we identified opportunities SBA could explore to enhance its procedures to improve the long-term reliability of its cost estimates. We provided SBA a draft of this report and OMB a draft of applicable sections of this report for review and comment. SBA and OMB provided written comments, which are reprinted in appendixes II and III, respectively. They also provided technical comments, which we have incorporated as appropriate. We performed our work in accordance with generally accepted government auditing standards in Washington, D.C., from April 2004 through March 2005. In addition to the above, Marcia Carlsen, Lisa Crye, Austin Kelly, Beverly Ross, Kara Scott, and Brooke Whittaker made key contributions to this report.
In response to a January 2003 GAO report that identified significant anomalies in the Small Business Administration's (SBA) disaster loan accounts and raised serious concerns about its ability to account for loan sales and estimate program costs, SBA conducted an extensive analysis to identify causes of the anomalies and implemented a number of corrective actions. In light of SBA's actions, GAO undertook a follow-up review to (1) describe the nature of the deficiencies SBA identified, (2) determine whether its corrective actions resolved the deficiencies, and (3) assess whether its procedures provide a reasonable basis for future credit estimates. SBA took prompt action with a comprehensive review of its financial records and systems to identify the deficiencies related to accounting for its disaster loans and loan sale program. SBA's review found (1) the cash flow model used to estimate the cost of the disaster loan program was unreliable and underestimated the cost, (2) the model used to determine whether sales were beneficial had errors and incorrectly indicated that loans were sold at gains, (3) incorrect loan values used to calculate the results of loan sales led to inaccurate reporting in SBA's financial statements, and (4) incomplete tools provided by OMB to calculate interest payments on borrowings from Treasury resulted in excess payments to Treasury and an insufficient balance in SBA's financing account and subsidy allowance. To resolve these deficiencies, SBA implemented a number of corrective actions during fiscal years 2003 and 2004. To address the first three, SBA developed a new cash flow model to estimate the costs and loan values for the disaster loan program. This improved the agency's ability to prepare more reliable cost estimates and determine the gain or loss on prior loan sales. To address the fourth deficiency, SBA analyzed its interest payments to Treasury and found that it had overpaid by about $134 million. 
SBA included this amount in its reestimates for the disaster loan program to correct prior interest payments and also implemented a different approach to update or "reestimate" its cost estimates, which will adjust its transactions with Treasury going forward. However, until OMB updates its tools for computing these interest payments, other credit agencies may also be over- or underpaying interest to Treasury. Further, SBA improved its policies and procedures to help ensure that future loan program cost estimates will be reasonable. For example, SBA implemented new standard operating procedures for calculating reestimates and prepared documentation to support the rationale and basis for key aspects of the cash flow model. However, because of the complexities associated with estimating loan program costs, additional actions by SBA would help improve the long-term reliability of cost estimates. These include (1) further documentation of the model and disaster data to readily provide for knowledge transfer between staff and contractors to help ensure proper maintenance, updating, and running of the model; (2) periodic assessments of the model's ability to predict loan performance; and (3) additional procedures to ensure the disaster data used in the model are tested to verify and document that they are reliable. In addition, there may be opportunities to improve the model with additional variables, such as financial strength of borrowers, as well as revisions to simplify the estimation process that warrant further consideration by SBA.
The Department of Defense Education Activity (DODEA) oversees all DOD schools in the United States and abroad. The Department of Defense Dependents School System (DODDS) is the entity within DODEA that manages DOD’s overseas schools. In school year 2001-02, DODDS operated 155 schools in 14 countries (see figs. 1 and 2) and employed roughly 6,200 educators, including both traditional classroom teachers and instructional staff, such as school psychologists, nurses, and counselors. Classroom teachers comprise over 90 percent of all DOD overseas educators. They are represented by two different teachers’ unions: the Federal Education Association (FEA) and the Overseas Federation of Teachers (OFT). Although classroom teachers and instructional staff are paid on different salary schedules, both groups are subject to the same salary determination and payment process. Legal requirements and union arbitration agreements form the basis for the DOD overseas teachers’ salary determination process. Prior to 1959, teachers in DOD overseas schools were paid according to the General Schedule, the standard pay schedule for many federal government employees. These salaries did not reflect teachers’ academic backgrounds or qualifications. As a result, DOD overseas teachers’ salaries were significantly lower than those paid to public school teachers in the United States. Congress attempted to remedy these inequities in 1959 by passing the Defense Department Overseas Teachers Pay and Personnel Practices Act (Pay and Personnel Practices Act). This law directed the heads of each military department in DOD to fix rates of basic compensation “in relation to the rates of basic compensation for similar positions in the United States.” However, these rates of compensation could not exceed the highest rate of basic compensation for similar positions of a comparable level of duties and responsibilities under the municipal government of the District of Columbia. 
Upon passage of the Pay and Personnel Practices Act, DOD officials met with representatives of the Overseas Education Association (OEA) and the National Education Association (NEA) to develop procedures governing its implementation. In 1960, these parties agreed to establish an annual review of compensation schedules as compared to the rates of compensation in urban school jurisdictions with populations of 100,000 or more. Although all parties agreed to this process, annual per-pupil spending limitations enacted by Congress effectively lowered the compensation paid to DOD overseas teachers below the salary schedule devised through the annual review. To correct this problem, Congress amended the Pay and Personnel Practices Act in 1966 and set into law the procedures that DOD and the teachers’ associations had agreed to in 1960. The amendment provides that DOD fix the basic compensation for overseas teachers at rates equal to the average of the range of rates of basic compensation for urban school jurisdictions with populations of 100,000 or more. Since 1966, the DOD overseas teachers’ salaries have been the subject of numerous legal actions. Among the most significant for their impact on DOD’s salary determination and payment process are a class action law suit in 1973 and an arbitration decision in the early 1980s. In 1973, seven DOD overseas teachers sued the U.S. government, claiming that DOD’s methods for determining teacher salaries were inconsistent with the Pay and Personnel Practices Act. Specifically, the teachers argued that DOD’s process of determining teacher salaries based on the previous year’s salaries in U.S. school jurisdictions resulted in salaries unequal to those paid to teachers in the United States. The court ruled that timing was an essential component of compensation and that, therefore, salaries used for comparison purposes should be from the same school year. 
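The statutory formula can be illustrated in highly simplified form. The sketch below assumes each cell of the salary schedule is set to the mean of the surveyed district rates for the comparable position; the district salaries are invented, and the actual survey and averaging mechanics are considerably more involved:

```python
# Highly simplified sketch of the salary-setting idea (hypothetical data):
# the real survey covers urban school jurisdictions with populations of
# 100,000 or more, cell by cell of the schedule, using same-school-year
# salary data collected through at least January 10.

def schedule_cell(surveyed_rates):
    """Average of surveyed district rates for one education/step cell."""
    return sum(surveyed_rates) / len(surveyed_rates)

# Surveyed salaries for one cell (say, MA plus 10 years of experience)
# in three made-up districts.
ma_step10 = [52_000, 48_500, 50_300]
print(f"DOD schedule rate: ${schedule_cell(ma_step10):,.0f}")
```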
The result of this court case was the establishment of the payment system that DOD currently uses to determine and distribute salary payments to DOD overseas teachers. In 1982, an arbitration decision was issued, which resolved a grievance the OEA filed relating to the salary schedule that had been set for school year 1979-80. In part, the OEA contested DOD’s use of an August 1, 1979, cut-off date for salary data because it excluded the salary increases that many U.S. school teachers received in the second half of the school year. The arbitrator held that by using the August 1 date, DOD did not meet the statutory requirement that it set salaries “equal to the average of the range of rates” of the group of teachers identified in the statute. Subsequently, DOD and OEA reached an arbitration agreement, which requires DOD to collect salary information for its annual survey through at least January 10 of each school year. The Department of Defense Civilian Personnel Management Service, Wage and Salary Division conducts this survey and generates the DOD overseas teachers’ salary schedule each year. The DOD overseas teachers’ compensation package, which includes salary, benefits, and allowances, is set by law and regulations and generally compares favorably with U.S. teachers’ compensation. Since 1966, DOD overseas teachers’ salary schedules have been set equal to average teacher salaries in school districts in incorporated places with 100,000 or more people. Their benefits are set by regulations published by the U.S. Office of Personnel Management (OPM). DOD overseas teachers also may receive allowances determined by the U.S. Department of State and additional services, such as access to on-base gyms and social clubs. The compensation package generally compares favorably with compensation for U.S. teachers. Starting and average salaries for DOD overseas teachers are higher than those of teachers in the United States. U.S. 
teachers typically do not receive the allowances and services that many DOD overseas teachers receive. While the compensation package generally compares favorably with that of U.S. teachers, it appears that many teachers are dissatisfied with access to health care in many overseas locations. The Defense Department Overseas Teachers Pay and Personnel Practices Act, as amended in 1966, requires that DOD overseas teachers’ salaries be equal to average salaries in U.S. urban school districts. DOD overseas teachers are paid on a salary schedule, which reflects both their level of education and years of experience. (See table 1 for the school year 2001-02 salary schedule.) As federal civilian employees, many DOD overseas teachers are eligible for a variety of other benefits in addition to basic compensation (salary). In general, federal civilian employees are eligible to participate in the Federal Employees Health Benefits (FEHB) program and the Federal Employees Group Life Insurance (FEGLI) program and are covered by the Federal Employees’ Retirement System (FERS), which includes the Thrift Savings Plan (TSP). However, not all DOD overseas educators are eligible for these benefits. The type of appointment a teacher holds can alter the benefit package he or she receives. For example, federal employees hired as temporary employees with appointments not to exceed 1 year are not eligible for health insurance. Although DOD overseas teachers hired in the United States are mostly permanent employees and therefore eligible for all benefits, local hires (teachers residing and hired abroad) are often employed under time-limited appointments. However, local hires who are on time-limited appointments can be converted to permanent appointments once they meet all requirements, which allows them to receive full benefits. In addition, almost all local hires are spouses of military and DOD civilian personnel and thus receive these benefits indirectly through their spouses. 
In addition to salary and benefits, some teachers are also eligible to receive allowances such as a living quarters allowance, a post (cost-of-living) allowance, and the cost of shipment of household goods and an automobile. These additional allowances are the same as those available to other DOD civilian employees stationed overseas and similar to those available to other federal employees stationed overseas. These allowances are primarily governed by regulations set by the Department of State. DOD has some flexibility to limit these allowances, but may not exceed the scope of the regulations set by State. For instance, although State allows civilian employees overseas to receive an education allowance, the wardrobe portion of Home Service Transfer Allowance, and the wardrobe portion of the Foreign Transfer Allowance, DOD overseas teachers do not receive them. See table 2 for an explanation of each allowance available to DOD civilian employees stationed overseas. Generally, these allowances are available only to teachers who are recruited in the United States. These allowances (except post allowance and danger pay, which all teachers are eligible for, regardless of where they are hired) are not considered salary supplements or entitlements. Rather, they are intended to be recruitment incentives for U.S. citizen employees living in the United States to accept employment in foreign areas. In each of the last 2 years, over 90 percent of locally hired teachers were spouses of active duty military or DOD civilian employees. Thus, though these teachers may not be eligible for these allowances in their own right, they do receive them through their spouses. Furthermore, locally hired teachers may become eligible for these allowances if transferred to a new post. DOD overseas teachers’ salaries compare favorably to U.S. teachers’ salaries. On average, salaries for teachers in DOD overseas schools are higher than the U.S. national average teacher salary. 
The average salary in DOD overseas schools for school year 2000-01 was $47,460, while the national average for the same year was $43,250. On a comparative basis, the average DOD overseas teacher’s salary ranked the twelfth highest among average teacher salaries in the 50 states and the District of Columbia for school year 2000-01. (See table 3.) In the same year, the starting salary for a DOD overseas teacher with a Bachelor of Arts (BA) degree ($30,700) was 6 percent higher than the average starting salary in the United States ($28,986) for a teacher with a BA. Furthermore, if starting salaries for DOD’s overseas teachers with a BA in school year 2000-01 are included in the ranking of average, starting salaries in each state and the District of Columbia, the DOD overseas school system ranked twelfth highest. (See table 4.) While U.S. teachers generally receive similar benefits to those of DOD overseas teachers, they do not receive the allowances that overseas educators generally receive, such as the living quarters allowance. In addition to these allowances, DOD overseas teachers often have access to military base stores, which sell discounted and duty-free goods, and to recreational facilities on base, such as gyms and social clubs. Although DOD overseas teachers receive the standard health care benefit for U.S. civilian government employees, employees stationed overseas face challenges with regard to health care access. Representatives of teachers’ unions told us that there is dissatisfaction among teachers with access to health care in many overseas locations. In addition, in July 2001, the Assistant Secretary of Defense for Force Management Policy reported that “the availability and cost of medical care for DOD educators employed overseas is a significant problem.” While civilian employees are often allowed to use military treatment facilities, access to these facilities for civilian employees is on a space-available basis. 
Civilian employees stationed overseas, like the DOD teachers, are limited to fee-for-service insurance plans because no health maintenance organizations are available in foreign posts. Whether care is provided at military or host nation facilities, civilian employees must pay when services are rendered and request reimbursement by their medical insurance. This can often mean large out-of-pocket expenses for doctor’s visits and treatments. In addition, health care providers at military medical treatment facilities are not recognized as authorized preferred providers by the health plans available to overseas employees, so reimbursement rates are often lower than for preferred providers in the United States. Furthermore, when civilian employees must use host nation medical facilities, they often face challenges, such as differences in language, culture, and health practices. For example, a teacher may have difficulty explaining his or her medical history to a doctor who does not speak English. DOD is unable to change the health insurance available to civilian DOD employees, including the DOD overseas teachers, because their health insurance package is set by a governmentwide policy for civil servants. In general, DOD has been successful in recruiting and retaining well-qualified teachers. In school year 2001-02, DOD recruiters filled almost all vacant teaching positions in overseas schools. The DOD overseas teacher workforce is highly qualified, with virtually all DOD overseas teachers certified in the subjects or grades they teach. DOD also does not appear to have difficulty retaining teachers, although some agency officials and a representative of a teachers’ union suggested retention difficulties exist in a few specific geographic areas. In school year 2001-02, DOD recruiters filled over 99 percent of vacant classroom teaching positions. More than one agency official we spoke with confirmed that DOD has little difficulty recruiting teachers for overseas schools. 
This year, DOD has received approximately 8,500 teaching applications, far more than the approximately 900 teaching positions available. DOD’s success in filling vacancies appears consistent across the 10 districts in which its overseas schools are located. The lowest success rate for filling classroom teaching vacancies in school year 2001-02 was 99.77 percent (for vacancies in the Heidelberg, Germany district), while 7 of the 10 districts filled all their vacancies for that school year. The availability of teachers and the attractiveness of the DOD overseas schools to potential hires may be factors that aid recruitment. DOD has a ready supply of potential teachers living abroad. Roughly one-third of DOD overseas teachers are hired locally. In school year 2001-02, spouses of military or DOD civilian employees made up 47 percent of new hires. It is DOD policy to give them preference over teaching candidates living in the United States when applying to the system, provided that they are qualified. DOD overseas schools also have qualities that make them attractive to teachers. Representatives of teachers’ unions indicated in interviews that the excitement of living abroad combined with the familiarity of working in an American school attracts many teachers to the DOD overseas school system. In addition, DOD’s recruitment video cites the system’s competitive pay and benefits as a reason for joining the system. DOD’s vigorous recruitment program may also contribute to DOD’s success attracting applicants. Recruitment activities include job fairs; a student teaching program; advertisements in professional, military, and on-line publications; participation in the Troops to Teachers program; and on-site recruitment at college campuses. In recent years, DOD recruitment personnel have focused on enhancing the diversity of their teacher workforce. 
To that end, they have established student teaching agreements with Historically Black Colleges and Universities and the Hispanic Association of Colleges and Universities to attract minority applicants. As part of its recruitment efforts, DOD has also developed an on-line application system for teaching candidates in order to facilitate the application process. Since this system was made available, the number of applicants to the system has more than doubled. Another important recruitment tool is the use of advance job offers, offers made to applicants before actual vacancies have been identified and that do not specify a job location. The advance offers program is used to help DOD overseas schools compete with U.S. school districts for exceptional educators because U.S. schools tend to make job offers well in advance of the DOD overseas schools. Advance offers are also used to recruit minority teachers and increase the diversity of the DOD overseas teacher workforce. While recruitment is generally successful, agency officials and representatives of teachers’ unions have indicated that DOD experiences some difficulties recruiting teachers for certain subjects, such as special education, math, and science. It is not surprising that DOD has some difficulty recruiting teachers for these subjects. According to a 1996 report by the National Center for Education Statistics (NCES), 20-29 percent of U.S. public schools with vacancies in the subject areas of bilingual and special education, math, science, and English-as-a-Second-Language report difficulty filling them. DOD officials also report challenges filling vacancies in some locations. According to DOD officials and representatives of the teachers’ unions, areas like Japan, Korea, and Bahrain are not as attractive to teachers because the culture and language are significantly different from their own. 
Of the 20 substitute teachers hired to fill full-time positions by DOD in school year 2001-02, 19 were located in schools in Japan. This figure suggests that while DOD may be able to fill virtually all of the vacancies in that country, it must use some nonpermanent teachers to do so. DOD can fill positions in less desired locations by sending teachers there from other schools in the system. All teachers sign mobility agreements upon accepting permanent employment with DOD, which allows the agency to send them wherever they are needed, though administrators seek to avoid compulsory reassignment. At the same time, DOD can pay teachers recruitment bonuses, a tool that could help the agency address any recruitment difficulties. DODEA recently received authority to pay these bonuses and has not yet offered any. While it may be more difficult to recruit teachers for some subject areas and locations, DOD’s success filling vacant positions with well-qualified teachers suggests that any recruitment difficulties are relatively minor. DOD overseas teachers are well-qualified, with virtually all teachers in DOD schools certified in the subjects or grades they teach. Almost two-thirds of DOD overseas teachers hold advanced degrees, compared to 46 percent of public school teachers in the United States. Further, 73 percent of DOD teachers have at least 10 years of teaching experience. These well-trained teachers could be a major factor behind the schools’ high student-achievement level, an indication of the strength and success of the DOD overseas school system. Research has linked teacher quality to student performance. Data show that students in DOD overseas schools perform above the national average on the National Assessment of Educational Progress (NAEP) and the Terra Nova Achievement Test. 
For example, in 1998, only two states had a higher percentage than the DOD overseas schools of eighth graders who performed at a proficient or higher level on the writing portion of the NAEP. Notably, DOD overseas schools have made significant progress in closing the performance gap between minority and white students. Compared to state-by-state rankings of minority eighth graders in 2000, DOD minority eighth graders ranked second on NAEP math scores. Agency officials and representatives of teachers’ unions told us that, in general, DOD overseas schools do not have a problem retaining teachers. While the agency does not have sufficient data to calculate retention rates by location, agency officials we spoke with said that any retention difficulties the agency has are limited to a few geographic areas, such as Korea, Japan, and Bahrain. In addition, union representatives told us that teachers who join DOD’s overseas school system generally tend to stay in the system for many years. Because DOD is consistently able to fill vacant positions with well-qualified teachers, any retention difficulties that exist do not appear to threaten the quality of the teacher workforce. DODEA recently obtained authorization to offer retention bonuses to teachers, a tool that could be used to address these difficulties. The agency has not yet offered any such bonuses. DOD has developed a process for determining and paying overseas teachers’ salaries to meet the requirements of the law and subsequent court cases and arbitrations. DOD’s process for collecting salary information and issuing a new salary schedule for DOD overseas teachers takes roughly 8 months. Once the new salary schedule is set, DOD must pay teachers their annual salary increases, and some allowance increases, retroactively. Teachers typically receive these retroactive payments near the end of the school year. 
The process for recalculating the teachers’ salaries and paying them retroactively causes some administrative burden for the agency, in terms of both workload and cost. Each year, in order to meet legal requirements, the DOD Wage and Salary Division surveys urban school districts for salary data through at least January 10. It identifies these urban school districts by using the Census Bureau’s list of incorporated places with populations of 100,000 or more. For school year 2001-02, the division surveyed 230 school districts. It began planning in August, mailed out surveys in October, and continued data collection—including follow-up calls—through March. The data collection includes information on the minimum and maximum salary paid to a teacher with a BA degree, the minimum and maximum salary paid to a teacher with a Ph.D. degree, the number of pay lanes, the number of regular and longevity steps, and the number of days in the school year. With these data, the Wage and Salary Division calculates a schedule of earnings for DOD overseas teachers. As part of the calculation for this schedule, the Wage and Salary Division reviews the number of steps and salary lanes in U.S. urban school jurisdictions to ensure comparability. The survey process takes 12 people a total of 1,680 hours (or 42 workweeks) to complete. The salary schedules for the current school year are usually completed in April or May. (See fig. 3.) Once the salary schedules are complete, Wage and Salary Division personnel meet with representatives from the FEA and agency officials to discuss the results of the survey. Once all parties agree on the results, the new salary schedule is issued. The courts have interpreted the Pay and Personnel Practices Act as requiring that DOD overseas teachers be paid the same salary that the U.S. teachers in DOD’s comparison group receive for the same year. 
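The "average of the range of rates" calculation at the heart of the survey can be sketched in a few lines. This is a simplified, hypothetical illustration: the district figures are invented, and the real calculation covers roughly 230 districts and also reconciles pay lanes, steps, and the number of days in the school year, which this sketch omits.

```python
# Hypothetical illustration of averaging surveyed salary ranges; the real
# survey covers roughly 230 urban districts and also reviews pay lanes
# and steps, which are omitted here.
from statistics import mean

# Each tuple: (BA min, BA max, PhD min, PhD max) for one invented district.
districts = [
    (29_000, 52_000, 33_000, 58_000),
    (31_500, 55_000, 35_500, 61_000),
    (27_800, 49_500, 31_900, 56_500),
]

def schedule_anchors(survey):
    """Average each salary category across districts; the averages anchor
    the corners of the overseas teachers' salary schedule."""
    ba_min, ba_max, phd_min, phd_max = (mean(col) for col in zip(*survey))
    return {"BA min": ba_min, "BA max": ba_max,
            "PhD min": phd_min, "PhD max": phd_max}

anchors = schedule_anchors(districts)
print({k: round(v) for k, v in anchors.items()})
```

The four averaged figures play the role of schedule anchors; interpolating lanes and steps between them is a further refinement not shown here.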
Because the salary schedule is typically issued near the end of the school year, overseas teachers receive their pay increases retroactively. Usually, the overseas teachers receive these increases just prior to the end of the school year. In addition, since some allowances, such as the post allowance, are based on salary, teachers may also receive retroactive payments for allowance increases. This retroactive pay process results in some administrative burden for the agency in terms of workload and cost. First, the process increases the agency’s workload. DOD spends additional time each year processing, reviewing and entering the pay and allowance increases. The Defense Finance and Accounting Service (DFAS) calculates the amount of each teacher’s new salary and retroactive payments, while the DODEA personnel office must correct the official personnel forms for all affected employees. In addition, field staff help recalculate adjustments to any extra duty pay teachers may have received during the year. Once this work is completed, the DODEA payroll office receives the data for record keeping purposes, reviews them, and corrects any coding errors. Second, the process can complicate DODEA’s management of its budget. Each year, DOD officials predict how large the retroactive pay increase will be in order to plan the budget. If this prediction is too low, DODEA personnel must find the necessary funds to pay for the difference. Because payroll comprises over 70 percent of DODEA’s budget, this task can be a difficult one. A large enough difference in the predicted and actual amounts of the pay increase can have an impact on DODEA’s budget. For instance, in school year 2001-02, DODEA officials expected the salary increase to be about 3.6 percent, but it was actually 5.2 percent. As a result, they had to ask the Office of the Secretary of Defense for the necessary funds to address this problem. Finally, the process results in some costs to the agency. 
DFAS charges DODEA an annual fee for determining and processing the retroactive pay increases. Last year, this fee totaled roughly $78,000. Alternative techniques exist, such as sampling and projection, that could make the salary determination and payment process less time-consuming and less burdensome; however, they cannot meet legal requirements. Given the moderately burdensome nature of the current system, we reviewed the current salary determination method and explored whether alternatives could take less time. While these alternatives might be more efficient, they would not be in compliance with the law. For instance, DOD could project overseas teachers’ salaries each year based on the degree to which salaries for U.S. urban teachers increased in past years. By projecting teacher salaries, the salary schedule could be completed prior to the beginning of the school year, rather than near the end. This would eliminate the need to pay teachers retroactively, thus saving time and money. However, because projections would not guarantee the same result as the survey, this method would not meet the law’s requirement that DOD overseas teachers’ salaries be “equal to” the salaries of U.S. urban teachers. Therefore, DOD would still have to survey the U.S. schools, and pay any difference between the projections and the survey results to the teachers retroactively. While alternative methods of salary determination exist, such as sampling, they would not reduce the workload or administrative burden. For more information on alternative salary determination techniques, see appendix I. DOD overseas schools play a critical role, educating more than 70,000 children of parents in the armed services and the federal civilian workforce. To date, agency officials have successfully recruited and maintained a well-qualified teacher workforce for these schools. These well-trained teachers could be a major factor behind the schools’ high student-achievement level. 
While the salary determination and payment process is time-consuming and involves some administrative burden, DOD’s success recruiting and retaining well-qualified teachers indicates that there is no immediate need to change the law. The Department of Defense provided oral comments on a draft of this report. DOD concurred with the content of the report. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7215. Other contacts and contributors to this report are listed in appendix II. The National Defense Authorization Act for fiscal year 2002 directed GAO to assess whether the Department of Defense (DOD) overseas teachers’ compensation package is adequate to recruit and retain qualified teachers and to recommend any necessary revisions to the law governing DOD overseas teachers’ salaries. To address the issues raised in the mandate, we developed three key questions:
1. What is the compensation package for teachers in DOD overseas schools, and how does it compare to compensation for teachers in the United States?
2. To what extent do DOD overseas schools experience difficulties recruiting and retaining well-qualified teachers?
3. What is the process for determining teacher salaries and paying teachers, and which aspects of the process, if any, could be improved?
To answer question one, we reviewed laws, regulations, and policies on salary, benefits, and allowances for DOD overseas teachers and other federal civilian employees overseas. We also analyzed salary data on DOD overseas teachers and U.S. 
teachers and conducted a literature review on teacher compensation in the United States. Finally, we interviewed DOD officials to confirm our understanding of the total compensation package and eligibility rules related to benefits and allowances. To answer question two, we analyzed data on DOD overseas teachers (such as the number of newly hired teachers in each of the past three years; the number of teachers in each school; the number of teachers hired from the United States; the number hired from overseas; and the number who are spouses of DOD military or civilian employees) and reviewed DOD promotional materials, planning documents, and information provided to teachers in the DOD overseas school system. We also interviewed DOD officials and representatives of the two teachers’ unions that represent DOD overseas teachers. Finally, we conducted a literature review on teacher quality and its relation to student performance. To answer question three, we reviewed laws, court cases, arbitration documents, regulations, and policies on the DOD overseas teacher salary determination and payment process. We also interviewed DOD officials about implementation of this process and its impact on the agency. Finally, we explored alternative ways to determine and pay teacher salaries that could potentially improve efficiency and reduce costs. Specifically, we considered the use of sampling and salary projection. We explored stratified sampling as one possible way to determine DOD overseas teachers’ salaries. Using a sample would allow DOD to contact fewer schools to obtain salary data, thus potentially saving time and money. Estimates derived from stratified random samples are typically more precise than estimates derived from simple random samples of the same size. Currently, DOD surveys 231 urban school districts. DOD provided us with data on four salary/education categories, the BA minimum salary (BA min), the BA maximum salary (BA max), the Ph.D. minimum salary (Ph.D. 
min), and the Ph.D. maximum salary (Ph.D. max), for each of the 231 urban school districts it surveyed for school year 2001-02. We defined strata by dividing the population, all 231 districts, into three groups, based on salary data. We defined the low stratum as those school districts with a BA min value of $28,533 or lower, the high stratum as those school districts with a Ph.D. max of $62,413 or greater, and the medium stratum as any district that did not fall into either of the other strata. This stratification resulted in 60 school districts for the low stratum and 70 districts for the high stratum; the remaining 101 districts were placed into the medium stratum. We examined four different sample sizes: a 20 percent sample, a 30 percent sample, a 40 percent sample, and a 50 percent sample. For instance, for the 20 percent sample we selected 20 percent of the districts in the low stratum, 20 percent of the districts in the medium stratum, and 20 percent of the districts in the high stratum. Table 5 shows the total number of sample districts and the number in each stratum for the four different sample sizes before any adjustment for nonresponse. For the four sample size options, we determined margins of error for the average salaries in each of the four education/salary categories. The margin of error is a measure of how precise the estimates of the average salary are and refers to the fact that these estimates will differ from the average salary calculated using the overall population. These margins of error are presented in table 6. For both the 20 percent sample and the 30 percent sample, at a confidence level of 95 percent, the margins of error in each of the four education/salary categories were all within +/- $1,900 for the average salary. For both the 40 percent sample and the 50 percent sample the margins of error were all within +/-$1,200, at a confidence level of 95 percent. 
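The stratified-sampling approach explored here can be illustrated with a short simulation. Everything below is synthetic and simplified: the salary population is randomly generated, the stratification uses a single salary variable (the report's high stratum is actually defined on the Ph.D. max), and the 1.96 z-score corresponds to the 95 percent confidence level used in the appendix.

```python
# Synthetic illustration of stratified sampling with a 95 percent margin
# of error. The salary population, the cut-offs, and the single-variable
# stratification are all simplifications for illustration only.
import math
import random
from statistics import mean, variance

random.seed(7)
# Invented BA-minimum salaries for a population of 231 districts.
population = [random.gauss(30_000, 3_000) for _ in range(231)]

def stratify(values, low_cut, high_cut):
    """Split districts into low/medium/high salary strata."""
    low = [v for v in values if v <= low_cut]
    high = [v for v in values if v >= high_cut]
    mid = [v for v in values if low_cut < v < high_cut]
    return [low, mid, high]

def stratified_estimate(strata, frac, z=1.96):
    """Sample `frac` of each stratum; return the weighted mean and its
    margin of error with a finite-population correction."""
    n_total = sum(len(s) for s in strata)
    est = var = 0.0
    for s in strata:
        n_h = max(2, round(frac * len(s)))   # at least 2 per stratum
        sample = random.sample(s, n_h)
        w = len(s) / n_total
        est += w * mean(sample)
        fpc = 1 - n_h / len(s)
        var += w**2 * fpc * variance(sample) / n_h
    return est, z * math.sqrt(var)

strata = stratify(population, low_cut=28_533, high_cut=35_000)
est, moe = stratified_estimate(strata, frac=0.5)
print(f"estimated mean ${est:,.0f} +/- ${moe:,.0f}")
```

As the appendix notes, estimates from a stratified sample like this are typically more precise than a simple random sample of the same size, but only while the strata definitions still match the current salary rankings.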
This means that DOD could reduce the size of the annual survey from roughly 230 districts to 141—in the case of the 50 percent sample—with estimated margins of error ranging from +/- $397 to +/- $916, depending on the salary variable. In other words, we would expect with a 95 percent level of confidence that the average BA min salary calculated from the sample would be within +/- $397 of the average salary calculated from the entire survey population. Initially, DOD would have to survey all districts to define the strata, but in subsequent years it would rely on this stratification to draw its sample. However, DOD’s efforts to sample would be affected by the stability of the salary strata used. If the school districts in the sample frequently changed strata, then over the course of several years of using the original stratification definitions, there would be increased variability in the estimation. We tested for stability by using DOD’s actual data for 3 years, and found that there was a substantial shift of schools across strata over time. To examine the stability of our strata, we used the salary data DOD provided us for each of the urban school districts it surveyed in school years 1999-00, 2000-01, and 2001-02. Taking the data from the first year, we grouped the school districts into three strata: low, medium, and high. We defined the low stratum as those school districts with a BA min value of $27,000 or lower, the high stratum as those with a Ph.D. max of $57,000 or greater, and the medium stratum as those that did not fall into either of the other strata. This same stratification scheme was used for 2 additional years of school district salary data. Thus, the strata definitions were based on the salary data from the first year. In subsequent years, some districts moved from one stratum into another. As they did so, the original stratification no longer reflected the most recent ranking of the school districts’ salaries. 
As a result, the margins of error for the average salary in each education/salary category increased. For example, the margin of error for the BA min average salary increased from +/- $440.60 in the base year to +/- $697.80 in the third year. In other words, there was a 26 percent deterioration over one year and a 60 percent deterioration over 2 years for the BA min category. Considering the four education/salary categories, the larger the percent deterioration, the greater the movement of districts across strata and the less stable the strata. Table 7 shows the increased margin of error over time, the percent deterioration over time, and the salary ranges for each of the four education/salary categories. The percent deterioration was calculated as follows:

% deterioration in one year = (margin of error in Year II - margin of error in Year I) / (margin of error in Year I)

% deterioration in 2 years = (margin of error in Year III - margin of error in Year I) / (margin of error in Year I)

Thus, the percent deterioration—increase in the margin of error—can be gauged from the base year, Year I (1999-2000), to Year III (2001-2002). As noted above, the estimated margin of error and the percent deterioration over time indicate that there was considerable shifting over time of districts across strata for all salary/education categories except the BA max. Consequently, if sampling were used, the strata would need to be redefined and new samples selected frequently in order to minimize the variability in the salary estimates. To do this, the entire population of urban school districts would need to be surveyed. We explored projection as a way for DOD to pay overseas teachers their current-year salaries from the beginning of the school year, rather than retroactively. Our projections and the associated margins of error are shown in table 8. We made our projections for 2001-02 based on DOD salary data from school years 1999-00 and 2000-01. 
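The deterioration arithmetic can be checked directly from the BA min margins of error quoted in this appendix; the reported 60 percent figure presumably reflects rounding of the underlying values.

```python
# Check the percent-deterioration figures using the BA min margins of
# error quoted in the text: +/- $440.60 in Year I and +/- $697.80 in
# Year III.
def pct_deterioration(moe_base, moe_later):
    """Relative growth in the margin of error from the base year."""
    return (moe_later - moe_base) / moe_base * 100

two_year = pct_deterioration(440.60, 697.80)
print(f"{two_year:.0f}%")  # about 58 percent; the report cites roughly 60
```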
We applied a rate-of-change model to the first two years of data to calculate estimates of the annual rate of change for each of our four education/salary categories. Our model took the form Y = a*X, where a is the estimated rate of change, X is the salary from school year 1999-00, and Y is the salary from school year 2000-01. Having calculated values for a, we then substituted in values for school year 2000-01 for X in order to calculate projected average salaries for school year 2001-02. As an example, to calculate the projected mean salary in school year 2001-02 for the BA min category, we used the equation in column two (Y=1.0557*X). For X we substituted 30,701, the value in column three, the actual mean salary for school year 2000-01. Multiplying this value times 1.0557 (the mean increase for BA min from school year 1999-00 to school year 2000-01) gave us the projected mean salary for school year 2001-02 displayed in column five. This projected mean salary will not be the same as the actual mean salary, because salary projections include an assumption about the annual rate of growth in earnings, and this assumed growth rate is likely to differ from the actual growth rate. In the particular examples shown, the mean salaries we projected were similar to the actual mean salaries. However, the projections could fall anywhere between the confidence limits, indicating the variability attached to these projections. Table 8 shows that the 95 percent confidence interval for the BA min salary would range from $29,518 to $35,304. In addition to those named above, Elizabeth Field, Barbara Smith, Kris Braaten, Emily Williamson, Jon Barker, Barbara Alsip, and Patrick DiBattista made key contributions to this report.
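The rate-of-change projection used in this appendix can be sketched as follows. The `rate_of_change` estimator here is a hypothetical illustration (the report does not spell out exactly how a was fit), while the worked figures, a = 1.0557 applied to the 2000-01 BA min mean of $30,701, come from the text above.

```python
# Sketch of the Y = a*X rate-of-change projection. `rate_of_change` is a
# hypothetical estimator; the figures a = 1.0557 and $30,701 come from
# the report's BA min example.
from statistics import mean

def rate_of_change(prev_year, curr_year):
    """Estimate a as the mean of district-level salary ratios."""
    return mean(c / p for p, c in zip(prev_year, curr_year))

def project(rate, salary):
    """Project next year's mean salary as Y = a * X."""
    return rate * salary

projected = project(1.0557, 30_701)
print(round(projected))  # about $32,411, inside the reported interval
```

As the appendix notes, such a projection carries the variability of the assumed growth rate, which is why DOD would still need the full survey to satisfy the "equal to" requirement.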
The Department of Defense (DOD) overseas schools educate more than 70,000 children of military service members and DOD civilian employees throughout the world. In order to ensure the continued success of this school system, the National Defense Authorization Act for Fiscal Year 2002 directed GAO to assess whether the DOD overseas teachers' compensation package is adequate to recruit and retain qualified teachers. The act also required GAO to determine whether any revisions to the law governing DOD overseas teachers' salaries were advisable. DOD overseas teachers' compensation compares favorably to that of U.S. teachers. In general, DOD overseas teachers receive a standard federal benefit package, including health and life insurance and coverage under the Federal Employees' Retirement System. Many DOD overseas teachers also receive allowances, such as a living quarters allowance, that U.S. teachers do not receive. On average, salaries for DOD overseas teachers are higher than U.S. teachers' salaries. Despite the generous compensation package, there is some dissatisfaction among overseas teachers regarding health care. DOD has little difficulty recruiting and retaining well-qualified teachers for overseas schools. In school year 2001-02, DOD recruiters filled over 99 percent of vacant teacher positions. Based on certification, experience, and education, the quality of DOD overseas teachers is high. Virtually all teachers in DOD schools are certified in the subjects or grades they teach. DOD may have some difficulties recruiting and retaining teachers in a few subject areas and geographic locations, but any such difficulties do not appear to threaten the quality of the overseas teacher workforce. DOD has developed a process for determining and paying teachers' salaries that meets statutory requirements. Although this system is time-consuming and burdensome, techniques that could address these difficulties do not meet legal requirements.
Given DOD's success recruiting and retaining well-qualified teachers, it is not advisable at this time to revise the law.
The Army awards and administers contracts much like the rest of the federal government, adhering in broad terms to a generic contracting life cycle and the Federal Acquisition Regulation. Figure 1 depicts the contracting life cycle. Throughout the contracting life cycle, three key groups have a role in meeting the Army’s needs: requirements generators, contracting professionals, and contractors themselves. Requirements generators are often located in PEOs or LCMCs. PEOs are responsible for large acquisition efforts involving major weapon systems, while LCMCs are responsible for sustaining many of these systems once they have been deployed. Within their respective areas of responsibility, both define requirements in documents such as statements of work, conduct market research, develop cost estimates, and produce written acquisition plans, as necessary. An Army official told us they can also participate in the source selection process and can serve as contracting officers’ representatives, monitoring contractors’ performance on behalf of the government. Contracting professionals such as contracting officers use information submitted by requirements generators to determine the type of contract the Army should award and how best to meet competition requirements. They also develop and publish solicitations requesting proposals from contractors, and, after receiving these proposals, they negotiate with contractors, as appropriate, and lead the source selection process. After contract award, contracting professionals ensure contractors comply with contractual quality assurance requirements and submit required reports in a timely manner, among other things. They also close out the contract, which involves verifying that products and services were provided and making final payments to contractors. Contractors are responsible for delivering products and services to the Army in accordance with the terms of the contract.
These products and services can include major weapon systems, complex research and development activities, and day-to-day administrative support, among other things. This report primarily focuses on management of contracting professionals, but it is important to note that they alone cannot ensure contractors provide the products and services the Army needs, and that all three groups, along with senior leadership, play key roles in the efficient and effective delivery of products and services. The Army’s contracting professionals are led by several senior executives who are responsible for overseeing the department’s contracting functions, delegating contracting authority, and minimizing the risk that contracting officers will perform improper acts. These contracting leaders include the following: The Assistant Secretary of the Army (Acquisition, Logistics and Technology) (ASA(ALT)) is the department’s Senior Procurement Executive. In this role, Army policy makes the ASA(ALT) responsible for overseeing the department’s contracting operations, designating contracting activities, and delegating contracting authority. The Deputy Assistant Secretary of the Army (Procurement) (DASA(P)) supports the ASA(ALT)’s efforts to meet Senior Procurement Executive responsibilities involving policies and procedures. The DASA(P) also oversees and evaluates the Army’s contracting operations. Heads of Contracting Activity (HCA) are responsible for establishing criteria and procedures to ensure that only contracting officers with adequate knowledge and experience award and administer contracts. Additionally, the Army’s HCAs personally approve contracting decisions that reach a certain dollar amount or are particularly complex. The ASA(ALT) has delegated HCA authority to four senior contracting professionals at the following commands: AMC, NGB, USACE, and MEDCOM.
Principal Assistants Responsible for Contracting (PARC) are the senior contracting officials within their respective contracting organizations. They are generally expected to report directly to their respective HCAs and are responsible for carrying out authorities HCAs delegate to them. Among other things, PARCs can select, appoint, train, and terminate contracting officers; approve acquisition plans; and waive certain contract requirements. They are also required to minimize the potential for contracting officers to be subjected to undue influence and to protect them from internal or external pressure to perform improper acts. The Army’s four HCAs have appointed a total of 24 PARCs. The Army’s contracting leaders, contracting professionals, and requirements generators reside in different Army organizations. Figure 2 depicts the relationship between some of these individuals within the larger context of the Army’s organizational structure. The Army’s PARCs often develop relationships with particular requirements generators over time. For example, one PARC at MEDCOM tends to work with requirements generators from the U.S. Army Medical Research and Materiel Command. However, the requirements generators do not always reside within the PARCs’ respective organizations. For example, requirements generators at PEO Aviation report to the ASA(ALT), but they are supported by the PARC at Redstone Arsenal—one of the 18 PARCs that reports to the ACC commander, who serves as the AMC HCA. In another example, in fiscal year 2015, 45 percent of the funds obligated by the Aberdeen Proving Ground contracting center were intended to meet the requirements of ASA(ALT) organizations, as opposed to 32 percent intended to meet AMC requirements. Nonetheless, the PARC at Aberdeen Proving Ground reports to the AMC HCA. In recent years, contracting professionals at AMC have executed the bulk of Army contracts, measured by both dollars obligated and actions executed.
Figure 3 depicts the dollars obligated and actions executed by each of the four HCA organizations from 2011 to 2015. Appropriated funds, including those appropriated to the Army, can be classified in various ways, including by duration of availability. The most common type of appropriation is available for obligation only during one specific fiscal year. Other types of appropriations are available for obligation for a definite period of time in excess of one fiscal year, while others are available for obligation for an indefinite period. At the end of the period of availability for an appropriation available for one fiscal year or for a definite period in excess of one fiscal year, the appropriation expires and is no longer available for incurring new obligations. The obligated and unobligated balances (if any) remain in an expired account for a period of 5 fiscal years, at which point any remaining unexpended balances are canceled and returned to the general fund of the Treasury. Since 2012, the ASA(ALT) and DASA(P) have used quarterly reviews in an effort to assess the overall health of Army contracting and drive improvements in contracting operations. However, they have not used these reviews to consistently evaluate the efficiency and effectiveness of the department’s contracting operations. Instead, several senior Army officials indicated that ensuring funds are obligated before they expire is the key measure for determining contracting success. In 2014, one of the Army’s key strategic planning documents established that contracting operations should adhere to schedule, cost, and performance objectives, but the ASA(ALT) and DASA(P) have not established the timeliness, cost savings, and contractor quality metrics needed to evaluate contracting operations against such objectives. Additionally, the ASA(ALT) has not established the metrics needed to effectively evaluate the size of the department’s contracting workforce.
Further, the DASA(P) office has not consistently implemented the program it established to improve the department’s compliance with acquisition policies and regulations. Since 2012, successive ASA(ALT)s have taken intermittent steps to improve evaluations of the Army’s contracting operations, but we found that they have not sustained these efforts. In 2012, recognizing that the Army’s contracting organizations lacked alignment on priorities and metrics, the ASA(ALT) and DASA(P) initiated quarterly Contracting Enterprise Reviews (CER) in an effort to increase enterprise-wide coordination, assess the overall health of Army contracting, and drive improvements in contracting operations. According to officials in the DASA(P) office, the intent of the CERs is to establish Army-wide contracting metrics and benchmarks, and provide opportunities for leadership to make recommendations to improve contracting procedures and increase efficiencies. The ASA(ALT) coordinated with the DASA(P) office to develop the CER metrics, which include the following information that DASA(P) officials collect about each HCA: obligation rates, contract actions, competition rates, and small business participation. The CERs also include self-reported information from the Army’s PARCs. This information is organized by the department’s four contracting organizations—ACC, MEDCOM, NGB, and USACE—and highlights the PARCs’ successes, such as important contract awards, and challenges, such as high attrition rates. In February 2017, 5 years after the ASA(ALT) and DASA(P) initiated the CERs, personnel in the DASA(P) office told us they now consider the CER the best mechanism to communicate contracting information to senior leadership. However, we found that the Army had not taken some basic steps that would help improve contracting procedures and contracting health assessments. 
For example, we found that DASA(P) lacked documentation of action items coming out of any of the CERs except the one chaired by the ASA(ALT) for the second quarter of fiscal year 2016. Now that the CER is recognized as a key communication tool, DASA(P) officials told us they are working to address such shortcomings. Specifically, they are working to formally define CER policies and procedures and intend to do so by the end of fiscal year 2017. Among other things, they plan to formalize the process for tracking CER action items. DASA(P) representatives acknowledged that there have been inconsistencies in how they detailed the results of prior CER briefings. In addition, they stated that CER briefings had been treated as individual events and not viewed as part of an ongoing process to improve contracting operations across the Army. They explained that the forthcoming CER policies are intended to address these issues. Army officials also plan to take steps in fiscal year 2017 to improve the reliability of the data presented in the CER briefings. We found several discrepancies with the data. For example, obligations and contract action data for certain organizations differed within the same CER briefings depending on whether the data came from the contracting organization or DASA(P). In the third quarter fiscal year 2016 CER briefing, USACE reported that it completed 39,600 contracting actions and obligated $9.01 billion. However, these figures were 14 and 16 percent higher, respectively, than figures DASA(P) officials reported for USACE in the same briefing. DASA(P) representatives told us that the CERs may contain internal discrepancies because different Army organizations pull data from source systems on different dates. However, none of the CER briefings contained caveats or amplifying information to explain any of the data discrepancies. 
DASA(P) representatives told us that the forthcoming CER policies are intended to improve the reliability of CER data and to clarify the causes of any discrepancies by providing as-of dates for data, among other things. Army officials from contracting centers, PEOs, LCMCs, major commands, and department headquarters all told us that Army leadership focuses on obligation rates more than any other metric to determine whether contracting efforts have been successful. Senior leaders within AMC and ASA(ALT) also told us that the Army has an “obligation culture,” and the primary focus is ensuring that the Army obligates all of its funds before they expire. This focus is partially out of concern that the Army could miss opportunities to use available appropriations to meet the department’s needs if they expire. Additionally, Army leaders are concerned that appropriations in future years may decrease if the Army does not obligate all of its appropriations before they expire because it could appear that the Army was appropriated more funding than it needed. This issue is not unique to the Army, as appropriations are commonly available for obligation only during a specific fiscal year. A recent DASA(P) stated, however, that this limited perspective increases the risk that contractors will not provide the government goods and services in an efficient or effective manner. Further, contracting professionals and requirements generators repeatedly told us that obligating funds simply to prevent them from expiring does not drive the optimal contracting behavior, particularly in terms of cost savings. The ASA(ALT) and DASA(P) have not established the metrics needed to evaluate the Army’s contracting operations in terms of timeliness, cost savings, or contractor quality. In the absence of these metrics, they have not used CERs to consistently evaluate the efficiency and effectiveness of the Army’s contracting operations. 
As a result, it is unclear whether the department’s contracting operations have improved over time or instead have gotten worse, as several groups of requirements generators, particularly officials from PEOs, have asserted. The Army’s 2014 Campaign Plan—a key strategic planning document—established that contracting operations should adhere to schedule, cost, and performance objectives. In practice, Army officials told us this involves awarding contracts in a timely manner, achieving cost savings through contracting activities, and acquiring quality products and services from contractors. Federal standards for internal control state that management should obtain information that links to an entity’s objectives. However, recent ASA(ALT)s and DASA(P)s have not established the metrics needed to evaluate the efficiency and effectiveness of contracting operations in terms of schedule, cost, or performance, in part because of methodological disagreements. For example, contracting professionals and requirements generators disagree about how the amount of time it takes to award a contract should be measured. Contracting professionals argue that requirements generators sometimes provide poorly defined requirements and, as such, may be responsible for delays that occur later in the contracting life cycle, when contracting professionals have lead responsibility. Similarly, contracting professionals stated that they have not come to a consensus on how to measure cost savings attributable to contracting efforts. These cost saving measures could include comparing contractors’ initial bids to the final negotiated prices and avoiding fees, such as licensing fees. As for performance metrics, DASA(P) officials told us that they would like to evaluate contractor performance on a systemic basis.
However, they have not identified an effective means to do so, largely due to a shortage of reliable information about contractors’ performance. Through our interviews with Army officials, we identified the following challenges surrounding efforts to assess (1) the timeliness of contract awards, (2) the cost savings attributable to contracting activities, and (3) the quality of contractors’ products and services. Many Army officials consider the timeliness of contract awards the most important factor when it comes to contracting operations. Seven of the eight groups of requirements generators we interviewed stated that their primary concern involving contracting operations is getting their contracts awarded as quickly as possible. In June 2016, the ASA(ALT) reflected this concern following a CER briefing, directing ACC to address inconsistencies in how its contracting centers measure Procurement Action Lead Time (PALT). In general terms, PALT is the amount of time it takes a contracting organization to award a contract after receiving a requirements package from another organization, such as a PEO or LCMC. In September 2016, ACC established specific PALT guidelines for different types of contract actions—for example, 365 days for competitive contracts valued between $50 million and $250 million. However, the DASA(P) determined that the guidelines should be considered only a draft because his office should perform a more comprehensive PALT review and determine those guidelines for contracting operations department-wide. DASA(P) representatives told us they planned to initiate a year-long PALT assessment in May 2017 and establish department-wide PALT guidelines by May 2018. However, neither the ASA(ALT) nor the DASA(P) has formalized this goal as a required deadline. Federal standards for internal control state that management should obtain information that links to an entity’s objectives.
Until the DASA(P) representatives complete their PALT assessment, they will not obtain the information needed to establish department-wide PALT guidelines, and top Army leaders will not have the information needed to determine whether the department’s contracting operations are adhering to schedule objectives. As important as timeliness is to requirements generators, timeliness can be at odds with other objectives, including cost savings. For example, one PARC stated that his contracting officers were able to get a better deal on a contract due to 3 weeks of extended negotiations with the contractor. He noted that if his contracting officers were assessed solely on timeliness, then perhaps they would not have taken the extra time needed to negotiate the better deal. As such, Army leaders may need multiple metrics that reflect these real-world tensions in order to develop a comprehensive understanding of the department’s contracting operations. A former DASA(P) told us that he directed the Army’s PARCs to report their organizations’ cost savings attributable to contracting in the fiscal years 2015 and 2016 CER briefings. However, the DASA(P) stated that he did not establish a standard methodology for calculating the savings, and Army contracting officials told us they can use several different methods to determine cost savings associated with contracting. For example, they can: (a) compare a contractor’s initial bid to the final negotiated price; (b) compare the price of a follow-on contract to the price of the predecessor contract; and (c) sum potential contract fees that were not realized, such as licensing fees. While these can all be legitimate methods for measuring savings, Army contracting officials also told us how these different methods could produce misleading results. 
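The three methods described above can be expressed as simple calculations. In the sketch below, the function names and dollar amounts are hypothetical illustrations, not Army figures.

```python
def savings_vs_initial_bid(initial_bid: float, final_price: float) -> float:
    """Method (a): initial bid minus final negotiated price."""
    return initial_bid - final_price

def savings_vs_predecessor(predecessor_price: float, follow_on_price: float) -> float:
    """Method (b): predecessor contract price minus follow-on contract price."""
    return predecessor_price - follow_on_price

def savings_from_avoided_fees(avoided_fees: list) -> float:
    """Method (c): sum of potential fees, such as licensing fees, not realized."""
    return sum(avoided_fees)

print(savings_vs_initial_bid(1_200_000, 1_050_000))  # 150000
print(savings_vs_predecessor(980_000, 940_000))      # 40000
print(savings_from_avoided_fees([25_000, 10_000]))   # 35000
```

Because each method measures savings against a different baseline, the same contract can yield different totals under each, which is one reason a standard methodology matters for comparisons over time or across organizations.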
For example, a contractor could submit an initial bid that is unrealistically high as part of a particular negotiation strategy, which could artificially inflate the savings attributable to contracting. The DASA(P) has not yet established a standard methodology for calculating savings, and, as a result, the CER briefings have not presented the information needed to effectively compare cost savings over time or across contracting organizations. Incomplete data sets have also hindered efforts to calculate cost savings attributable to contracting activities. Only the PARCs from ACC reported any cost savings in CER briefings from the third and fourth quarters of fiscal year 2016. As a result, the extent to which the PARCs at NGB, MEDCOM, and USACE realized any cost savings is unclear. As previously noted, federal standards for internal control state that management should obtain information that links to an entity’s objectives. Until PARCs from NGB, MEDCOM, and USACE report cost savings in CER briefings, the ASA(ALT) and DASA(P) will continue to lack information needed to determine whether the department’s contracting operations are adhering to cost objectives. The Army’s 2014 Campaign Plan established that contracting operations should adhere to performance objectives, and a recent DASA(P) told us that—when it comes to performance—requirements generators are primarily concerned with the performance of contractors, as opposed to the performance of contracting professionals. However, the CER briefings do not include any information on contractor performance—specifically the quality of their products and services. The Contractor Performance Assessment Reporting System (CPARS) is the government-wide database for collecting contractor performance information, and Army officials use information collected through CPARS to assist in source selections. 
However, Army officials do not use CPARS—or any other system—to provide Army leadership the information needed to compare contractors’ performance over time or across contracting organizations. The CER briefings identify the extent to which Army officials enter information into CPARS, but they do not present any information from CPARS on contractors’ actual performance. In theory, this information could help Army leadership understand the extent to which soldiers are receiving quality products and services, but, in practice, DASA(P) representatives do not feel the CPARS information is reliable for assessing contractor performance. They noted that the information contained in CPARS is subjective and that thousands of required entries are overdue. As a result, they have not yet identified an effective means to collect and report contractor performance data in aggregate. One alternative approach could involve surveys of requirements generators, as suggested in a 2013 report on Army contracting commissioned by the ASA(ALT), although this approach would likely include its own reliability challenges. Federal standards for internal control state that management should obtain information that links to an entity’s objectives. Until DASA(P) representatives identify an effective means to collect and report contractor performance data, top Army leaders will continue to lack information needed to determine whether the department’s contracting operations are adhering to performance objectives. Army officials at all levels we spoke with expressed concerns regarding the size or experience of the Army’s contracting workforce, but Army leaders responsible for contracting have not developed the metrics needed to effectively evaluate the scope of that workforce. For example, all eight groups of requirements generators that we spoke with identified either the size or experience levels of the contracting workforce as a main concern in being able to get their contracts awarded. 
Similarly, PARCs reporting to two of the Army’s four HCAs told us they faced significant workforce challenges. Our prior work has established that successful acquisition efforts depend on agency leadership investing in the acquisition workforce. According to the Office of the Secretary of Defense, as of September 30, 2016, the size of the Army’s contracting workforce had decreased by 17 percent from fiscal years 2007 to 2016, while the size of the contracting workforces at the Air Force and the Navy increased by 25 percent. Similarly, in December 2015, we found that most Department of Defense components sustained their acquisition workforce levels when faced with sequestration and other cost-cutting measures, but the Army did not. Table 1 shows how the size of the Army’s contracting workforce changed from fiscal years 2007 to 2016, compared to the contracting workforces at the Air Force and the Navy. The ASA(ALT) has struggled in recent years to identify how large the Army’s workforce should be based on its workload, and the responsibility for doing so has shifted from one deputy to another. The 2014 Army Campaign Plan assigned DASA(P) responsibility for determining the appropriate size of the contracting workforce based on the Army’s mission requirements and deployments around the world. However, a recent DASA(P) told us there is currently no mechanism in place to meet this requirement, and explained that the Deputy Assistant Secretary of the Army (Plans, Programs, and Resources) is leading an effort to develop a contracting workforce model. The Deputy Assistant Secretary of the Army (Plans, Programs, and Resources), who also reports to the ASA(ALT), previously established a workforce model for program management. 
The Deputy Assistant Secretary’s representatives told us they collected data on contracting activities spanning the contracting life cycle, from pre-award tasks through contract closeout, for their contracting workforce model in order to establish ratios between the amount of dollars the Army obligated in a given fiscal year and the contracting personnel on hand in the department at that time. They had expected the U.S. Army Manpower Analysis Agency would validate the contracting workforce model in fiscal year 2016, but, due to data collection issues, they now anticipate it will be validated in fiscal year 2018. Officials from the Office of the Deputy Assistant Secretary of the Army (Plans, Programs, and Resources) stated that they are working to convert their data into a format the U.S. Army Manpower Analysis Agency uses to get the model validated—an important first step in determining the Army’s contracting workforce requirements. However, current plans indicate that the model will be based on past workload data and will not account for future changes involving several workload drivers, such as the type of contracting action, whether the acquisition was a product or service, or the source selection process. Therefore, the model could produce misleading results and understate the number of contracting officers needed to meet the Army’s demands. The Deputy Assistant Secretary’s representatives stated that the model will not account for changes in these variables because it is intended to serve as an initial baseline. They acknowledged that the current model is not the final solution for determining the Army’s contracting workforce requirements and that it is a work in progress. Federal standards for internal control state that management should internally communicate quality information in order to achieve objectives.
Army leaders will need more robust information to effectively evaluate the department’s contracting workforce and determine whether requirements generators’ concerns are valid. Until the ASA(ALT) and DASA(P) obtain this information, they will not know what steps are needed to enhance the department’s contracting workforce. Since 2008, DASA(P)—responsible for overseeing the Army’s contracting operations—has been designated the lead Army official responsible for the department’s Procurement Management Review program, which is intended to ensure compliance with federal, defense, and Army acquisition policy and regulations. Individual reviews of contracting offices culminate in an overall organizational risk assessment of low, medium, or high and in corrective action plans to address findings and recommendations. This program is formally defined in the Army’s Federal Acquisition Regulation Supplement, which states that it shall be implemented in a tiered manner, with the PARCs, HCAs, and DASA(P) conducting complementary assessments. Specifically, the guidance states that reviews will be conducted for all contracting offices at least once every 36 months; that every HCA is to submit an Annual Summary Health Report to DASA(P) by October 31 each year; and that DASA(P) officials will produce a holistic assessment of Army contracting annually by January 31 each year. However, DASA(P) has not ensured reviews are conducted consistently and within a 36-month period, largely because contracting organizations told DASA(P) they did not have the staff needed to do so. For example, DASA(P) allowed NGB officials to put their reviews on hold from August 2015 to January 2016 in order to focus on training their contracting officers. Similarly, at the beginning of fiscal year 2017, DASA(P) approved an ACC request for a 12-month extension to complete its reviews due to personnel shortages. Four years earlier, in fiscal year 2013, ACC did not conduct a review due to budget constraints.
In another case, a DASA(P) representative told us that USACE did not submit an Annual Summary Health Report for fiscal year 2015 and did not provide DASA(P) a reason for not doing so. Further, when we reviewed 12 Annual Summary Health Reports provided by DASA(P)—spanning fiscal years 2013 through 2015—we found that some contracting offices within ACC, NGB, and USACE had not been sufficiently evaluated during the previous 36 months. Only MEDCOM had reviewed all of its contracting offices in the time frame suggested by the Army’s Federal Acquisition Regulation Supplement. In addition, DASA(P) had not produced the required holistic, independent assessment of Army contracting since 2012. DASA(P) officials stated that they could not produce holistic assessments of Army contracting during that time because of staffing shortages. Because DASA(P) did not implement the program in accordance with Army guidance during this time, the HCAs or PARCs self-reported their compliance ratings to the ASA(ALT) without the independent DASA(P) reviews the program intended. This may have increased the likelihood that their findings contained errors or bias, and that the PARCs’ corrective action plans did not fully reflect the DASA(P)’s perspective. Previously, in fiscal years 2010 and 2011, DASA(P) had identified that more than 30 percent of the contracting offices they independently reviewed were at risk of serious adverse impacts to contracting operations. The deficiencies causing the adverse impacts could include violations of policies and statutes, as well as missing documentation. To resume the independent reviews, at the beginning of 2017, DASA(P) representatives said that they hired a new staff member to produce the holistic assessments, and that this staff member will initially produce an assessment covering fiscal years 2014 through 2017 to account for years that DASA(P) previously missed.
The hiring of this staff member is a positive step; however, it is too early to tell if this action alone will improve the rigor of the Procurement Management Review program. In recent years, ASA(ALT)s have taken positive intermittent steps to improve the Army’s contracting operations, but they have not sustained these efforts. For example, in 2012, the ASA(ALT) initiated an effort intended to improve contracting operations by commissioning a study focused on courses of action that could strengthen the oversight, execution, and accountability of Army contracting. Among other things, this study noted that the metrics reported in the CERs focused on the administrative activities of the contracting organizations and did not measure the quality of their operations. This study included a series of recommendations intended to strengthen contracting oversight, and the ASA(ALT) issued a memorandum in October 2013 supporting the study’s recommendations. However, that ASA(ALT) did not issue guidance for implementing the recommendations until January 2016, just prior to leaving the Army. The January 2016 guidance directed Army officials to develop criteria for evaluating the efficiency and effectiveness of contracting operations, among other things, but DASA(P) officials noted that the ASA(ALT)’s successor did not implement the guidance and did not offer a reason for not doing so. Instead, the new ASA(ALT) issued new guidance intended to improve the Army’s contracting operations through different but similar actions. The new guidance focused on: (1) eliminating redundant layers of management and oversight; (2) improving the accountability and transparency between contracting operations and their customers; and (3) improving the contracting workforce and workload. However, like the preceding ASA(ALT), the new ASA(ALT) issued this guidance one day before leaving the Army.
This again left the responsibility for implementing newly stated contracting guidance to a successor ASA(ALT). Officials from the DASA(P) office told us that the current ASA(ALT) supports her predecessor’s efforts and intends to implement the guidance, but they also said that the next ASA(ALT) may not support the efforts, so they want to implement the guidance before the next transition. Those who serve in the ASA(ALT) position may have reasons for not implementing their predecessors’ contracting policies, but if that is so, they should document and disseminate those reasons. We have previously found that leadership must provide clear and consistent rationales to effectively drive organizational transformations, and federal standards for internal control state that management should internally communicate quality information to achieve the entity’s objectives. Given the rate of turnover in the ASA(ALT) position, it is critical that individuals in that role provide their successors and the Army’s contracting workforce their rationales for key decisions, particularly when the decisions differ from their predecessors’ guidance. Without this information, future ASA(ALT)s may be deprived of critical insights that could help them improve Army contracting operations going forward or, at a minimum, help them avoid missteps that could degrade the effectiveness and efficiency of Army contracting. In another example of inconsistent leadership, some ASA(ALT)s did not consistently chair CERs, even though the CER briefings are the main way for contracting leaders to convey contracting information to the ASA(ALT). Beginning with the implementation of the CERs in 2012, one ASA(ALT) chaired quarterly CERs regularly through fiscal year 2015. However, a new ASA(ALT) assumed the position in fiscal year 2016 and, according to DASA(P) officials, chaired only one CER due to competing management priorities, such as the acquisition of major weapon systems. 
Following the CER briefing that the ASA(ALT) attended, the ASA(ALT) directed ACC to take several specific actions that could help improve contracting operations, such as determining measures of contracting timeliness and methods to minimize workforce reductions. However, this was the only time that ASA(ALT) used a CER to direct improvements in contracting operations. We have previously found that leadership must set a tone at the top and demonstrate strong commitment to improve and address key issues. By not attending the CERs or otherwise providing feedback on the CER briefings, ASA(ALT)s may unintentionally send a signal to contracting staff that contracting issues are not a high priority. Additionally, ASA(ALT)s may miss opportunities to improve the Army’s contracting procedures and increase efficiencies. From 2008 through 2016, top Army leaders repeatedly changed reporting relationships across the department’s contracting organizations, but they did not establish the measurable objectives needed to determine whether these changes were successful or if the benefits of the changes outweighed the costs to implement them. For example, successive ASA(ALT)s made organizational changes to centralize contracting decision-making, while a Secretary of the Army and an AMC commanding general made organizational changes intended to improve support to field operations. However, we found the Army did not establish these goals in terms of measurable objectives, so the degree to which the Army has achieved them is unclear. Additionally, officials from eight different Army organizations told us that the changes led to disruptions in contracting operations and caused confusion. Senior officials responsible for the changes acknowledged the need for measurable objectives to evaluate how the changes have affected contracting operations. However, these officials have not yet agreed upon specific metrics.
From 2008 to 2016, successive ASA(ALT)s, a Secretary of the Army, and an AMC commanding general repeatedly changed reporting relationships across the department’s contracting organizations in efforts to improve both contracting and field operations. See figure 4 for a timeline identifying major changes these Army leaders made from 2008 to 2016. In fiscal year 2007, the Army had 15 HCAs, including 8 within AMC. Between 2008 and 2016, the ASA(ALT) rescinded, consolidated, and reassigned HCA authority seven times in efforts to improve contracting operations, consolidate or clarify contracting professionals’ roles, and streamline procedures. In 2012, 4 years after the Secretary of the Army created ACC, the ASA(ALT) delegated HCA authority to the commanding generals of the ACC’s two subordinate commands: the Expeditionary Contracting Command and the Mission and Installation Contracting Command, which were created to provide contracting support to Army installations around the world. In 2013, the ASA(ALT) reduced the number of HCAs at MEDCOM from two to one. MEDCOM officials explained the ASA(ALT) took this action in order to clarify the role of contracting within the organization, among other reasons. In 2014, the ASA(ALT) consolidated HCA authority at AMC, reducing the number of HCAs from nine to one in order to address unclear and overlapping contracting authority within the command. In February 2015, the ASA(ALT) transferred HCA authority from PEO Simulation, Training and Instrumentation—which acquires training and testing systems for the Army—to AMC when the head of the PEO’s contracting center started reporting to the ACC commander. In June 2015, the ASA(ALT) transferred HCA authority from the U.S. Central Command-Joint Theater Support Contracting Command—which was responsible for contracting operations in Iraq and Afghanistan—to AMC when AMC contracting units took over the theater support contracting mission.
In January 2016, the ASA(ALT) rescinded HCA authority from the Intelligence and Security Command due to poor communication between its HCA and PARC. In December 2016, the ASA(ALT) reassigned HCA authority from the three commanding generals at AMC, NGB, and USACE to contracting professionals within those organizations. According to a former DASA(P), with these delegations, the ASA(ALT) emulated the MEDCOM model—where the Deputy Chief of Staff, Procurement is the HCA—to help ensure that the HCAs are individuals with contracting experience who can focus on contracting issues, rather than commanding generals who have broader responsibilities. During the same period, 2008 through 2016, the Secretary of the Army and an AMC commanding general changed reporting relationships involving a total of 18 PARCs in efforts to improve the Army’s field operations. Specifically, these changes were intended to improve coordination between contracting centers, deployed forces, and logisticians within LCMCs, among others. Figure 5 identifies AMC’s contracting organizations at the beginning of fiscal year 2008 and at the end of calendar year 2016—before and after the reporting relationships changed. In 2008, the Secretary of the Army dissolved the Army Contracting Agency, which had reported to the ASA(ALT), and realigned its units under the newly created ACC—a subordinate command under AMC. The Secretary of the Army also gave operational control of other AMC contracting centers to ACC. These changes were intended to improve contracting centers’ relationships with Army units that conduct operations around the world. Eight years later, in February 2016, an AMC commanding general issued an Operation Order (OPORD) which reassigned operational control for 3 PARCs from ACC to 3 LCMCs, and tactical control for 6 other PARCs to the Army Sustainment Command. ACC retained administrative control over these PARCs.
According to AMC officials, leaders made this change in order to better integrate the efforts performed by AMC’s subordinate commands. For example, AMC leaders explained that some PARCs should report to LCMCs because the LCMC commanders have the experience and perspective necessary to prioritize the PARCs’ work effectively. The LCMC commanders can review contracting priorities with both requirements generators and PARCs to ensure that they are all in agreement. It is unclear whether the benefits of the major organizational changes Army leaders made from 2008 to 2016 have outweighed the costs because Army leaders did not establish the measurable objectives needed to assess the effectiveness of the changes. For the same reason, it is also unclear whether the changes have led to the desired outcomes. For example, when changing reporting relationships in 2016, including the OPORD changes and HCA reassignments, neither the AMC commanding general nor the ASA(ALT) established the measurable objectives needed to assess the costs and benefits of these changes. The AMC commanding general issued the OPORD in an effort to better integrate AMC’s subordinate commands, but the OPORD lacks the measures needed to evaluate progress toward better integration, making it difficult to assess the OPORD’s long-term effect. Similarly, the ASA(ALT) reassigned HCA authority in an effort to increase leadership focus on contracting but did not establish measures needed to evaluate the extent or benefits of increased leadership focus. Federal standards for internal control state that agency management should define objectives in measurable terms so that performance toward achieving objectives can be assessed.
In addition, the Office of Management and Budget’s guidelines for assessing an agency’s acquisition function state that agencies should assess their current organizational structure in response to organizational changes, and use outcome-oriented performance measures to assess the success of the acquisition function in order to support the agency’s missions and goals. Senior officials involved with the major organizational changes acknowledged the need for measurable objectives, but these officials have not yet agreed upon specific metrics. For instance, AMC officials told us that they are working to develop and implement metrics to assess contracting operations at regularly scheduled reviews but have not yet finalized the metrics. Additionally, Army officials have not linked the major organizational changes to metrics for measuring the timeliness of contract awards, cost savings attributable to contracting activities, and the quality of contractors’ products and services. Measurable objectives are particularly important for the Army because the major organizational changes have come with costs, causing confusion and leading to disruptions in contracting operations. We have previously found that productivity often decreases after major organizational changes. For example, employees may be unsure how to conduct day-to-day operations while transitioning to a new organizational structure. The changes Army leaders made to reporting relationships across the department’s contracting organizations were not exceptions to this rule. Officials from eight Army organizations said the changes led to disruptions in contracting operations and confusion among subordinate commands. For example, officials from six Army organizations told us that the OPORD delayed contract reviews or led to duplicative meetings while AMC personnel were determining how to operate within the new organizational structure.
Similarly, contracting officials from three Army organizations told us that when an ASA(ALT) reassigns HCA authority, as an ASA(ALT) did in December 2016, workloads increase at lower staff levels until the new HCA re-delegates certain decision-making authorities to the respective PARCs. This can prevent staff from focusing on their primary responsibilities and lead to less efficient operations. In the absence of measurable objectives and authoritative data to assess the effectiveness of organizational changes, disagreements over the risks and benefits of some of the most recent changes have increased tensions between officials in the ASA(ALT) office and at AMC. When the AMC commanding general issued the OPORD in February 2016, officials in the ASA(ALT) office and in units subordinate to the ASA(ALT) expressed a number of concerns with potential implications of the OPORD, including the potential for LCMC commanders to influence PARC decisions inappropriately. Subsequently, ASA(ALT) officials proposed centralizing all contracting authority across Army contracting under the Office of the ASA(ALT), specifically in DASA(P). DASA(P) argued that centralizing HCA authority across Army contracting would better delineate contracting and command authority, retain contracting decisions within the contracting chain, encourage responsiveness to customer needs, and promote the independence of the contracting function. In order to centralize HCA authority into a single office, the ASA(ALT) would have rescinded the HCA authority for all the HCAs, including the AMC commander. AMC officials said they strongly opposed this proposal and told us it would degrade communication and integration between its subordinate units, displace the mission command of the Army, and ultimately lead to the isolation of subordinate contracting units. 
Ultimately, in December 2016, the ASA(ALT) instead reassigned HCA authority from three commanders, including the AMC commander, to contracting professionals within their organizations. As noted earlier, the ASA(ALT) did so to help ensure that the HCAs are individuals with contracting experience who can focus on contracting issues, rather than commanding generals who have broader responsibilities and do not necessarily have contracting expertise. Officials at AMC were receptive to the change, but its benefits are not universally recognized. According to DASA(P) officials, one of the two other commanders that lost HCA authority was strongly opposed to the change, and the lack of measurable objectives deprives proponents of quantitative data that could be used to address such concerns. In such an environment—characterized by a lack of measurable objectives and authoritative data, and increased tensions between officials in the ASA(ALT) office and at AMC—some of the Army’s key contracting leaders and requirements generators are concerned about the effectiveness of the current organizational structures. We found that Army personnel at six organizations were concerned that the OPORD created the potential for LCMC commanders to sway contracting to favor their own requirements or goals when it gave the LCMC commanders operational control over three PARCs. Army officials at two of those six organizations expressed concern that LCMC commanders might prioritize their own contracting requirements ahead of requirements from other organizations that depend on the three PARCs to meet their contracting requirements—particularly PEOs. Further, we heard officials voice concerns that LCMC commanders might pressure the PARCs to act against their best judgment in order to meet LCMC goals. For instance, officials expressed concerns that LCMC commanders might pressure PARCs to rush contract negotiations in order to more quickly acquire spare parts for fielded systems. 
In addition, some PARCs were concerned about the changes in rating chains, as LCMC commanders are now responsible for rating the three PARCs’ performance but have a limited understanding of contracting issues and the work the PARCs do for customers other than LCMCs. Officials told us that these concerns have not been realized to date, but, given the relatively short amount of time that has passed since the AMC commander issued the OPORD, it is still possible that they may be realized in the future. It is important to note that such concerns are often inherent in organizational structures where contracting professionals report to requirements generators. The Army’s contracting professionals are critical to the department’s efforts to execute its missions. However, Army leadership has taken a relatively narrow view of the department’s contracting operations. By primarily focusing on ensuring that contracting officers are obligating funding before it expires, Army leadership has in effect promoted a “use or lose” perspective and deemphasized the efficiency and effectiveness of contracting operations. Senior leaders responsible for contracting are not systematically assessing the timeliness of contract awards, cost savings attributable to contracting activities, or the quality of contractors’ products and services. Additionally, they are not identifying whether they have a large enough workforce to meet the department’s contracting needs. As a result, Army leadership does not have the quantitative data necessary to determine whether the department’s contracting enterprise has the capacity needed to operate in an efficient and effective manner. Moreover, as the Army’s most senior contracting official, the ASA(ALT) must set a tone at the top that contracting issues are a priority.
When ASA(ALT)s do not attend or weigh in on CERs, they signal that contracting is not a priority and diminish the concerns raised by contracting organizations; they also miss opportunities to improve contracting. Some senior Army leaders have taken intermittent steps to improve the department’s contracting operations, but inconsistent oversight reviews and leadership turnover have stalled key efforts. Leadership turnover will continue in the future, and, for this reason, it is critical that senior leaders—particularly ASA(ALT)s—document rationales for key decisions, especially when these decisions run counter to their predecessors’ thinking. Finally, when senior leaders make organizational changes intended to improve operations, they must also establish measurable objectives that will allow them to assess progress toward their ultimate goals. In the absence of such measures, it is unclear whether issues that necessitated the organizational changes have been addressed, and whether contracting outcomes are improving or getting worse. Importantly, without measurable objectives, it is difficult to counter any dissent and confusion among affected parties and support the merits of undertaking the changes. To help Army leadership obtain the information needed to evaluate and improve contracting operations, we recommend the Secretary of the Army take the following eight actions:
1. Ensure the ASA(ALT) and DASA(P) establish and implement CER metrics to evaluate the timeliness of contract awards, cost savings attributable to contracting activities, and the quality of contractors’ products and services.
2. Ensure the ASA(ALT) and DASA(P) formally establish May 2018 as the required deadline for DASA(P) representatives to establish department-wide PALT guidelines.
3. Ensure the ASA(ALT) and DASA(P) establish a standard methodology for PARCs to calculate the cost savings they report in CER briefings; and ensure PARCs from NGB, MEDCOM, and USACE use the methodology to report their respective cost savings.
4. Ensure the ASA(ALT) and DASA(P) identify an effective means to collect and report contractor performance data.
5. Ensure the ASA(ALT) accurately determines the department’s contracting workforce requirements in accordance with the Army’s needs.
6. Ensure future ASA(ALT)s document their reasons for not implementing their predecessors’ contracting policies, as applicable.
7. Ensure ASA(ALT)s consistently chair or otherwise provide feedback on quarterly CERs in order to demonstrate commitment to improving contracting operations.
8. Ensure that Army leaders establish measurable objectives for organizational changes, such as (a) the February 2016 AMC OPORD, and (b) the December 2016 HCA delegations.
The Department of the Army provided written comments on a draft of this report. These comments are reprinted in appendix II and are summarized below. In the written comments, the Department of the Army generally concurred with our eight recommendations, although it did not concur with one part of our first recommendation. The Army concurred with two of the three parts of our first recommendation to establish CER metrics that evaluate (a) the timeliness of contract awards, and (b) cost savings attributable to contracting activities. The Army did not concur with the third part, in which we recommended that the ASA(ALT) and DASA(P) establish and implement a CER metric to evaluate the quality of contractors’ products and services. The Army stated concerns about using such a metric to measure the performance of the Army’s contracting organizations because contractor performance is related to multiple variables, including the quality of government oversight, many of which are not within the control of the Army’s contracting organizations.
We agree that many factors beyond the contracting organization’s control can impact contractor performance. Nonetheless, we continue to believe that, to fulfill their contracting oversight responsibilities, the ASA(ALT) and DASA(P) should establish and implement a CER metric to evaluate the quality of contractors’ products and services; and identify how the quality of contractors’ products and services changes over time, and how this quality varies across the Army’s contracting organizations. Further, this information should be included in the CERs because personnel in the DASA(P) office told us they consider the CERs the best mechanism to communicate contracting information to senior leadership. Regarding our second and third recommendations that the Army establish a deadline for department-wide PALT guidelines and establish a standard methodology for PARCs to calculate cost savings, the Army concurred; it plans to establish guidance by June 2018 and to work with the PARCs to develop a cost-savings methodology to incorporate into the CER briefings by the second quarter of fiscal year 2018. Regarding our fourth and fifth recommendations to collect and better report contractor performance data and accurately determine the department’s contracting workforce requirements, the Army concurred and said it plans to focus on ensuring all contractor performance assessment reports are timely and complete. The Army said it also plans to take a number of actions, including validating and implementing a new predictive resource staffing model for the contracting workforce. The Army concurred with our sixth recommendation to document ASA(ALT) reasons, as applicable, for not implementing predecessors’ contracting policies and stated that ASA(ALT) policies and procedures remain in effect until new direction is issued or the policy is formally rescinded with an explanatory rationale.
The Army concurred with our seventh recommendation to ensure ASA(ALT)s consistently chair or provide feedback on quarterly CER briefings and is developing a standard operating procedure that will explain how feedback, improvements, and recommendations will be communicated to the contracting workforce. The Army concurred with our eighth recommendation, which involves measurable objectives for organizational changes. The Army proposed several actions to address this recommendation, including developing and implementing a methodology to gauge the effectiveness of HCA delegations and future organizational changes. We believe the proposed actions may do so, but only if Army leaders measure the success of all major organizational changes affecting the department’s contracting operations, including those changes initiated by organizations outside the Office of the ASA(ALT), such as AMC. The Army also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of the Army and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this review were to examine the extent to which Army leaders have evaluated (1) the efficiency and effectiveness of contracting operations, and (2) the effects of organizational changes on contracting operations. To address our objectives, we examined several key documents, including the 2007 report of the Gansler Commission on Army Acquisition and Program Management in Expeditionary Operations, a seminal work on Army contracting. 
We also reviewed a 2013 report on Army contracting commissioned by the Assistant Secretary of the Army (Acquisition, Logistics and Technology) (ASA(ALT)), which included recommendations for transforming Army contracting’s organizational structure and operations. Additionally, we examined strategic guidance, such as the Army’s Campaign Plan for 2014, and regulatory documents, such as the Army’s Federal Acquisition Regulation Supplement. Further, we reviewed documents issued by senior Army leaders, including an Operation Order (OPORD) a commanding general of the Army Materiel Command (AMC) issued in February 2016, which changed the reporting relationships for three of its contracting centers; and a memo an ASA(ALT) issued in October 2016, which directed the implementation of new contracting initiatives. In addition, we obtained Army Audit Agency reports on various Army contracting issues. To address our objectives, we also interviewed personnel responsible for overseeing contracting operations throughout the Army. Specifically, we spoke to Army leaders in the Office of the ASA(ALT), including the ASA(ALT)’s Principal Military Deputy, a Deputy Assistant Secretary of the Army (Procurement) (DASA(P)), an acting DASA(P), and an acting Deputy Assistant Secretary of the Army (Plans, Programs, and Resources). We also interviewed three of the Army’s four Heads of Contracting Activity (HCA): the Director of Acquisitions at the National Guard Bureau (NGB), the Director of Contracting at the U.S. Army Corps of Engineers (USACE), and the Chief of Staff, Procurement at U.S. Army Medical Command (MEDCOM). We interviewed the deputy to the HCA at Army Materiel Command (AMC)—the Deputy Commander of the Army Contracting Command (ACC)—as well as the Deputy Commanding General of AMC. 
We also interviewed Army officials from seven other contracting organizations and eight organizations that generate contracting requirements: five Program Executive Offices (PEO), and three Life Cycle Management Commands (LCMC). We also interviewed officials from the Army Sustainment Command. Table 2 identifies the specific organizations. To assess the extent to which Army leaders have evaluated the efficiency and effectiveness of contracting operations, we reviewed key contracting oversight documents. Specifically, we obtained and examined 12 Annual Summary Health Reports produced by Army officials from fiscal years 2013 to 2015. The Army’s Federal Acquisition Regulation Supplement requires that HCAs produce these reports as yearly assessments of their respective contracting operations. We also obtained and examined all of the Army’s quarterly Contracting Enterprise Review (CER) briefings for fiscal years 2015 and 2016, and documentation of the action items the ASA(ALT) issued following the second quarter fiscal year 2016 briefing; these were the only CER action items Army officials had documented through fiscal year 2016. The DASA(P) office develops the CER briefings in order to provide the ASA(ALT) and DASA(P) quarterly assessments of the overall health of Army contracting and to allow management to drive improvements. The CER briefings contain data DASA(P) officials obtain directly from source systems, such as the Federal Procurement Data System – Next Generation, and self-reported information from the Army’s Principal Assistants Responsible for Contracting (PARC). In order to assess the reliability of the data in the CER briefings, we reviewed the data from each of the quarterly documents from fiscal years 2015 and 2016, made comparisons, and identified discrepancies. We also reviewed source documents, such as the Annual Summary Health Reports and the 2013 Army contracting study. 
Further, we interviewed DASA(P) officials responsible for coordinating, compiling, and reviewing the CER data, and officials from Army contracting organizations that provided data and information presented in the CER briefings; these interviews also provided contextual information about the Army’s contracting evaluations. We determined that the data contained in the CER briefings were not sufficiently reliable for examining the Army’s contracting operations, and our discussion in the report of CER data focuses on these limitations. To assess the extent to which Army leaders have evaluated the effects of organizational changes on contracting operations, we identified major changes Army leaders made from 2007 through 2016. We focused on this time period because it accounts for the changes Army leaders made following the Gansler Commission’s 2007 report on the Army’s contracting, including the creation of ACC. We reviewed key documents surrounding these changes, including the 2009 general order from the Secretary of the Army that created ACC. We also reviewed memos ASA(ALT)s issued between 2007 and 2016, including six that rescinded, consolidated, and reassigned HCA authority; and others that focused on the implementation of new processes and practices for contracting operations. In addition, we reviewed a memorandum assembled by ASA(ALT) representatives that provided perspectives on the OPORD AMC issued in February 2016. It included observations from 7 Deputy Assistant Secretaries of the Army and 12 PEOs. In order to obtain different perspectives on the Army’s organizational changes, we interviewed Army leaders from the Office of the ASA(ALT) and AMC, HCAs, PARCs, and requirements generators.
In particular, we interviewed Army officials from the three contracting centers affected by the February 2016 AMC OPORD, as well as LCMCs, PEOs, and Army Sustainment Command. We conducted this performance audit from May 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact listed above, W. William Russell (Assistant Director), Nathan Tranquilli (Assistant Director), Lauren A. Friedman, and Stephen V. Marchesani made significant contributions to this report. Peter Anderson, Kristine Hassinger, Julia Kennon, and Robin Wilson also contributed.
In recent years, GAO and other organizations have raised concerns about Army contracting operations, which directly affect a wide range of Army activities. In fiscal year 2016 alone, the Army obligated more than $74 billion through contract actions. GAO was asked to examine the Army's contracting operations. This report assesses the extent to which Army leaders have evaluated (1) the efficiency and effectiveness of contracting operations and (2) the effects of organizational changes on contracting operations. GAO reviewed reports on Army contracting commissioned by the Secretary of the Army and an ASA(ALT); ASA(ALT) memos; Army guidance reorganizing AMC; and Army-wide contracting oversight briefings from fiscal years 2015 and 2016. GAO also interviewed personnel in the Office of the ASA(ALT), at AMC, and other contracting organizations. Top Army leaders conduct department-wide contracting reviews, but they have not consistently evaluated the efficiency and effectiveness of the department's contracting operations. Instead, they have primarily focused on efforts to obligate funds before they expire, as well as competition rates and small business participation. In 2014, one of the Army's key strategic planning documents established that contracting operations should adhere to schedule, cost, and performance objectives, but Army leaders have not established the timeliness, cost savings, and contractor quality metrics needed to evaluate contracting operations against such objectives. Without adequate metrics, Army leaders will not have the information needed to determine whether Army contracting operations are meeting the department's objectives. 
Since 2012, Army leaders, including successive Assistant Secretaries of the Army (Acquisition, Logistics and Technology) (ASA(ALT)), have acknowledged a need for improvements in contracting and have taken intermittent positive steps, but GAO found that these leaders did not sustain the efforts or, alternatively, provide a rationale for not doing so. GAO has previously found that leadership must provide clear and consistent rationales to effectively drive organizational transformations. If Army leadership does not document its rationale for key decisions, the Army's contracting organizations may be missing critical information needed to improve operations going forward. Top Army leaders have not evaluated the effects of major organizational changes on contracting operations despite repeatedly changing reporting relationships across contracting organizations since 2008, when the Secretary of the Army created the Army Contracting Command. The number of changes has increased since 2012, with five major changes in 2016. Some Army leaders made organizational changes to centralize contracting decision-making, while others made changes intended to improve support to field operations. When Army leaders made these changes, they did not establish measurable objectives in accordance with federal standards for internal control, and officials from eight different Army organizations told GAO that the numerous changes disrupted contracting operations and caused confusion. Further, GAO found that disagreements over the associated risks and benefits have increased tensions among officials in the ASA(ALT) office and at the Army Materiel Command (AMC). In the absence of measurable objectives and authoritative data, it is unclear whether the benefits of the changes outweighed the costs of implementing them. 
GAO is making eight recommendations to improve the Army's contracting operations, including developing metrics to assess contracting operations for timeliness, cost savings, and contractor quality; documenting rationales for key decisions; and establishing measurable objectives to assess the effects of organizational changes on contracting operations. The Army generally concurred with GAO's recommendations but did not agree to establish a contractor quality metric because contracting organizations cannot control all of the variables that affect quality. GAO continues to believe this action is needed, as discussed in the report.
Internal control generally serves as a first line of defense for public companies in safeguarding assets and preventing and detecting errors and fraud. Internal control is defined as a process, effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of the following objectives: (1) effectiveness and efficiency of operations; (2) reliability of financial reporting; and (3) compliance with laws and regulations. Internal control over financial reporting is further defined in the SEC regulations implementing Section 404 of the Sarbanes-Oxley Act. The regulations define internal control over financial reporting as a means of providing reasonable assurance regarding the reliability of financial reporting and the preparation of financial statements, including those policies and procedures that (1) pertain to the maintenance of records that, in reasonable detail, accurately and fairly reflect the transactions and dispositions of the assets of the company; (2) provide reasonable assurance that transactions are recorded as necessary to permit preparation of financial statements in conformity with generally accepted accounting principles, and that receipts and expenditures of the company are being made only in accordance with authorizations of management and directors of the company; and (3) provide reasonable assurance regarding prevention or timely detection of unauthorized acquisition, use, or disposition of the company's assets that could have a material effect on the financial statements. Regulators regard an effective internal control system as a foundation for high-quality financial reporting by companies. Title IV, Section 404 of the Sarbanes-Oxley Act aims to help protect investors by, among other things, improving the accuracy, reliability, and transparency of corporate financial reporting and disclosures. 
Section 404 has the following two key sections: Section 404(a) requires company management to state its responsibility for establishing and maintaining an adequate internal control structure and procedures for financial reporting and assess the effectiveness of its internal control over financial reporting in each annual report filed with SEC. In 2007, SEC issued guidance for management regarding its report on internal control over financial reporting. Section 404(b) requires the firms that serve as external auditors for public companies to provide an opinion on the internal control assessment made by the companies’ management regarding the effectiveness of the company’s internal control over financial reporting as of year-end. In 2007, PCAOB issued Auditing Standard No. 5, which contains the requirements that apply when an auditor is engaged to perform an audit of management’s assessment of the effectiveness of internal control over financial reporting. While management is responsible for the implementation of an effective internal control process, the external auditor obtains reasonable assurance to provide an opinion on the effectiveness of a company’s internal control over financial reporting through an independent audit. Investors need to know that the financial statements on which they make investment decisions are reliable. The auditor attestation process involves the external auditor’s testing and evaluation of the company’s internal control over financial reporting and relevant documentation in order to provide an opinion on the effectiveness of the company’s internal control over financial reporting as of year-end; a company’s internal control over financial reporting cannot be considered effective if one or more material weaknesses exist. Auditor attestation of the effectiveness of internal control over financial reporting has been required for public companies with a public float of $75 million or more (accelerated filers) since 2004. 
However, SEC delayed implementing the auditor attestation requirement for public companies with less than $75 million in public float (nonaccelerated filers) several times, from the original compliance date of April 15, 2005, to June 15, 2010, in response to concerns about compliance costs and management and auditor preparedness. On July 21, 2010, the Dodd-Frank Act permanently exempted nonaccelerated filers from the auditor attestation requirement. The Dodd-Frank Act did not exempt nonaccelerated filers from Section 404(a) of the Sarbanes-Oxley Act (management's assessment of internal controls). See table 1 for final compliance dates for internal control over financial reporting by issuer filer status. The number of exempt companies exceeded the number of nonexempt companies in each year from 2005 through 2011 (see table 2). According to our analysis of Audit Analytics data, the number of exempt companies fluctuated and ultimately declined from 6,333 in 2005 to 5,459 in 2011 (a 13.8 percent decline during that period). The number of nonexempt companies also fluctuated and ultimately declined, from 4,256 in 2005 to 3,671 in 2011 (13.7 percent). SEC and PCAOB have issued regulations, standards, and guidance to implement the Sarbanes-Oxley Act. In 2007, in response to companies' concerns about implementation costs, SEC provided implementation guidance to company management, and PCAOB issued a new auditing standard for external auditors to make the internal controls audit process more efficient and more cost-effective. SEC's guidance for management in implementing Section 404(a) of the Sarbanes-Oxley Act and PCAOB's Auditing Standard No. 5 for external auditors in implementing Section 404(b) of the Sarbanes-Oxley Act endorsed a "top-down, risk-based approach" that emphasizes preventing or detecting material misstatements in financial statements by focusing on those risks that are more likely to contribute to such misstatements. 
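As a quick arithmetic check, the filer-count declines cited above (6,333 to 5,459 exempt companies and 4,256 to 3,671 nonexempt companies) can be reproduced with a short percentage-change calculation. The helper function below is illustrative only and is not part of GAO's methodology.

```python
def pct_change(start, end):
    """Percentage change from a starting value to an ending value, one decimal."""
    return round((end - start) / start * 100, 1)

# Exempt (nonaccelerated) filers, 2005 vs. 2011, per the Audit Analytics figures.
exempt = pct_change(6333, 5459)      # about a 13.8 percent decline
# Nonexempt (accelerated) filers over the same period.
nonexempt = pct_change(4256, 3671)   # about a 13.7 percent decline

print(exempt, nonexempt)  # -13.8 -13.7
```

Both declines round to within a tenth of a percentage point of the figures reported in the text.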
These changes were intended to create a more flexible environment in which company management and external auditors can scale their internal controls evaluation based on the particular characteristics of a company, to reduce costs, and to align SEC and PCAOB requirements for evaluating the effectiveness of internal controls. Both SEC regulations and PCAOB Auditing Standard No. 5 state that management is required to base its assessment of the effectiveness of the company's internal control over financial reporting on a suitable, recognized control framework established by a body of experts that followed due process procedures. Both the SEC guidance and PCAOB's auditing standard cite the Committee of Sponsoring Organizations of the Treadway Commission (COSO) framework as an example of a suitable framework for purposes of Section 404 compliance. In 1992, COSO issued its "Internal Control—Integrated Framework" (the COSO framework) to help businesses and other entities assess and enhance their internal controls. Since that time, the COSO framework has been recognized by regulatory standard setters and others as a comprehensive framework for evaluating internal control, including internal control over financial reporting. The framework consists of five interrelated components: control environment, risk assessment, control activities, information and communication, and monitoring. However, SEC and PCAOB do not mandate the use of any particular framework. Since the implementation of the Sarbanes-Oxley Act, the number and percentage of exempt companies restating their financial statements has generally exceeded the number and percentage of nonexempt companies restating. However, from 2005 through 2011, restatements by exempt companies were generally proportionate to their percentage of our total population. Specifically, on average, almost 64 percent of companies restating were exempt companies, and exempt companies made up, on average, 60 percent of our total population. 
Exempt and nonexempt companies restated their financial statements for similar reasons, and the majority of these restatements produced a negative effect on the companies' financial statements. The number of financial statement restatements by exempt and nonexempt companies has generally declined since 2005. As illustrated in figure 1, the number of financial restatements peaked in 2006 for exempt companies and declined gradually until 2011, despite a slight uptick in 2010. The number of restatements peaked in 2005 for nonexempt companies, declined gradually until 2009, and then trended upward for the remaining 2 years of the review period. As we have previously reported, some industry observers noted the financial reporting requirements of the Sarbanes-Oxley Act and PCAOB inspections may have led to a higher than average number of restatements in 2005 and 2006. A 2010 Audit Analytics report noted that some observers attributed the subsequent decline in restatements to a belief that SEC relaxed standards in 2008 relating to materiality of errors and the need to file restatements. The number of financial restatements by exempt companies exceeded the number of financial restatements by nonexempt companies each year from 2005 through 2011. However, although the overall number of financial restatements from 2009 through 2011 remained lower than in the prior period, the number of financial restatements by nonexempt companies increased about 23 percent from 2010 through 2011. The number of financial restatements by exempt companies declined almost 8 percent during the same period. SEC officials and one market expert with whom we spoke indicated that there is no clear explanation for these restatement trends. They also said that a review of each individual financial restatement would be necessary to determine the reasons for the restatement trends, but they offered a few factors to consider when assessing the trends. 
In particular, a recent Audit Analytics report found that approximately 57 percent of restatements disclosed in 2011 were defined as revision restatements, the highest level since 2005 (the first full year of the disclosure requirement). According to the report, revision restatements generally do not undermine reliance on past financials and are less disruptive to the market. SEC officials noted that although restatements by nonexempt companies have increased, as illustrated in the Audit Analytics report, they may be less severe as a result of higher numbers of revision restatements, fewer issues per restatement, and a lower cumulative impact on the company's net income. According to our analysis of Audit Analytics data, in 2011, the percentage of restatements that were revision restatements was approximately 62 percent for exempt companies compared to approximately 70 percent for nonexempt companies. SEC officials also suggested that the detection rate of financial restatements could affect restatement trends, especially when looking only at a one- or two-year period. The officials said that the lag time on detection and the likelihood of detection could be different between exempt and nonexempt companies. Finally, SEC officials said that it is important to consider the nature and severity of restatements. Except for 2005, the percentage of exempt companies restating their financial statements exceeded the percentage of nonexempt companies restating. From 2006 through 2009, there was a decline in the percentage of restatements for both exempt companies and nonexempt companies. The percentage of exempt companies restating their financial statements rose in 2010 to 7.6 percent and remained constant in 2011 (see fig. 2). At the same time, starting in 2010, the percentage of nonexempt companies restating has been increasing. 
In addition, from 2005 to 2011, on average, almost 64 percent of companies restating were exempt companies, which made up 60 percent of our total population. Our analysis is generally consistent with a number of studies that have found that exempt companies restate their financial statements at a higher rate than nonexempt companies. These studies suggest that having an auditor attest to the effectiveness of a company's internal control over financial reporting generally reduces the likelihood of financial restatements. For example, in 2009, Audit Analytics found that for companies that did not obtain an auditor attestation and stated that they had effective internal controls, their financial restatement rate was 46 percent higher than the restatement rate for companies that had obtained an auditor attestation and stated that they had effective internal controls. Exempt companies that voluntarily complied with the auditor attestation requirement constitute a small percentage of exempt companies (see table 3). Prior to the passage of the Dodd-Frank Act in July 2010, the number of exempt companies voluntarily complying with the auditor attestation requirement grew 70 percent from 2008 through 2009. Although SEC deferred the requirement for nonaccelerated filers to comply until June 15, 2010, some exempt companies likely voluntarily complied in anticipation of SEC's implementation of the requirement. Nonetheless, in 2009, during the peak compliance period for exempt companies that voluntarily complied, 6.9 percent (435) of a total population of 6,285 exempt companies voluntarily complied with the auditor attestation requirement. According to one academic study, exempt companies that voluntarily comply with the auditor's attestation requirement are more likely than companies that do not comply to have evidence of the superior quality of their internal control over financial reporting and fewer restatements, among other factors. 
As table 3 also shows, the percentage of financial restatements by exempt companies that voluntarily complied with the requirement is generally lower than that of exempt companies that did not voluntarily comply. From 2005 through 2011, on average, 7.5 percent of exempt companies that voluntarily complied restated their financial statements, compared to 8.9 percent of exempt companies that did not voluntarily comply. From 2005 through 2011, based on our analysis of Audit Analytics data, the majority of exempt and nonexempt companies that restated their financial statements did so as the result of an accounting rule misapplication. That is, a company revised previously issued public financial information that contained an accounting inaccuracy. To analyze the reasons for financial restatements, we used Audit Analytics' 69 classifications to group the types of financial restatements into six categories (see table 4): revenue recognition, core expenses, noncore expenses, reclassifications and disclosures, underlying events, and other. Based on our classification, core expenses (i.e., ongoing operating expenses) were the most frequently identified category of restatement for both exempt and nonexempt companies. Specifically, core expenses accounted for 30.2 percent of disclosures by exempt companies and 28.5 percent of disclosures by nonexempt companies from 2005 through 2011 (see fig. 3). Core expenses include cost of sales; compensation expenses; lease and depreciation costs; selling, general, and administrative expenses; and research and development costs. Noncore expenses (i.e., nonoperating expenses) were the second most frequently identified reason for restatement across exempt and nonexempt companies during this period. Each of the other reasons for restatements represented less than 20 percent of all restatements by exempt and nonexempt companies during the period. 
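The categorization step described above, which maps Audit Analytics' 69 restatement classifications down to six categories and computes each category's share of disclosures, can be sketched as follows. The reason codes and the crosswalk here are hypothetical placeholders for illustration; the actual 69-classification mapping is not reproduced here.

```python
from collections import Counter

# Hypothetical crosswalk from Audit Analytics reason codes to the report's
# six categories; the real mapping covers 69 classifications.
CATEGORY_MAP = {
    "revenue_recognition": "Revenue recognition",
    "cost_of_sales": "Core expenses",
    "compensation_expense": "Core expenses",
    "lease_depreciation": "Core expenses",
    "interest_expense": "Noncore expenses",
    "reclassification": "Reclassifications and disclosures",
    "acquisition_accounting": "Underlying events",
}

def category_shares(reason_codes):
    """Tally restatement reason codes by category and return percentage shares.

    Codes not in the crosswalk fall into the 'Other' category.
    """
    counts = Counter(CATEGORY_MAP.get(code, "Other") for code in reason_codes)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Illustrative sample of restatement disclosures (not actual GAO data).
sample = ["cost_of_sales", "compensation_expense", "revenue_recognition",
          "interest_expense", "uncategorized_code"]
print(category_shares(sample))
```

In this sample, core expenses account for the largest share, mirroring the report's finding that core expenses were the most frequently identified restatement category.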
From 2005 through 2011, the majority of financial restatements by exempt and nonexempt companies negatively affected the companies' financial statements. Specifically, 87.6 percent of financial restatements by exempt companies resulted in a negative net effect on the financial statements—the income statement, the balance sheet, the statement of cash flows, or the statement of shareholders' equity—of these companies. Similarly, 80.6 percent of financial restatements by nonexempt companies resulted in a negative net effect on the company's financial statements. The characteristics of exempt and nonexempt companies with financial restatements varied from 2005 through 2011. For example, in terms of industry characteristics, on average, most exempt companies restating were in the manufacturing sector (29.4 percent), followed by agriculture, construction, and mining (14.6 percent). On average, most of the nonexempt companies restating were in the manufacturing sector (29.3 percent), followed by the financial sector (16.6 percent). Further, in 2011, 91.4 percent of nonexempt companies restating, compared to 35.3 percent of exempt companies, were listed on an exchange. In addition, nonexempt companies had an average financial restatement period that was longer than that of exempt companies. Specifically, from 2005 through 2011, nonexempt companies had an average financial restatement period of 9 quarters, compared to an average of almost 6 quarters for exempt companies. Companies and others identified various costs of the auditor attestation requirement. A number of studies and surveys show that since the passage of the Sarbanes-Oxley Act, and especially since the 2007 reforms by SEC and PCAOB, audit costs have declined for companies of all sizes. These studies and surveys also show that these costs, as a percentage of revenues, affect smaller companies disproportionately compared to their larger counterparts. 
Companies and others also identified benefits of compliance, including stronger internal controls and more transparent and reliable financial reports. However, determining whether auditor attestation compliance costs outweigh the benefits is difficult because many costs and benefits cannot be readily quantified. A number of studies and surveys show that the estimated costs of obtaining an external auditor attestation on internal control over financial reporting are significant for companies of all sizes. According to one study, obtaining an auditor attestation entails both direct and indirect costs. Direct costs are expenses incurred to fulfill the auditor attestation requirement, such as the audit fees, external fees paid to outside contractors and vendors that help companies comply with the requirement, salaries of internal staff for hours spent preparing for auditor attestation compliance, and nonlabor expenses (e.g., technology, software, travel, and computers related to compliance). Indirect costs are those costs not directly linked to obtaining the auditor attestation. Two examples of indirect costs, cited by one interviewee and one study, are the time spent by management in preparing for and addressing auditors' inquiries, which diverts their attention from strategic planning, and the diversion of funds from capital investments to auditor attestation-related expenses. Audit fees are a significant direct cost of the auditor attestation requirement. The Sarbanes-Oxley Act and PCAOB standards require that the financial statement audit and the auditor attestation audit be conducted on an integrated basis. As a result, the auditor attestation is included in the total audit fees—that is, the total amount companies pay to their external auditors to conduct the integrated audit. 
Audit fees are based on several factors, including but not limited to the scope of an audit, which is a function of a company's complexity and risk; the total effort required by the external auditor to complete the audit; and the risk associated with performing the audit. However, according to SEC's 2011 study and one interviewee, the costs incurred by a company to comply with the auditor attestation requirement generally decline after the initial year. We analyzed total audit fees as a percentage of revenues from 2005 through 2011 for exempt and nonexempt companies. We found that exempt companies, which tend to be smaller, had higher average total audit costs, measured as a percentage of revenues, compared to nonexempt companies (see table 5). Among exempt companies, the data indicate that exempt companies that do not voluntarily comply with the auditor attestation requirement have (except for 2006) higher average total audit fees as a percentage of revenues than the exempt companies that voluntarily comply. While two academics we contacted about this trend could not provide a definitive explanation, there are many factors besides company size that can affect audit fees. Our data analysis results are consistent with our previous work on audit fees. Specifically, in 2006, we reported that smaller public companies paid disproportionately higher audit fees compared to larger public companies. Smaller public companies noted that they incur higher audit fees and other costs, such as hiring more staff or paying outside consultants, to comply with the internal control provisions of the Sarbanes-Oxley Act. One study noted that, historically, these higher audit fees and other costs increased regulatory costs for smaller public companies because regulatory compliance, in general, involves a significant number of fixed costs regardless of the size of a company. 
Thus, smaller companies are forced to bear these fixed costs over a smaller revenue base compared to larger companies. However, the auditor attestation is only one element of the total audit fees. To gauge the amount spent on the auditor attestation, we asked respondents to our survey to provide us with the amount of total audit fees and the approximate amount attributable to complying with the auditor attestation requirement. Based on our survey results, we estimate that all companies with a market capitalization of less than $10 billion that obtained an auditor attestation in 2012 spent, on average, about $350,000 for auditor attestation fees, representing about 29 percent of their average total audit fees. Although these costs remain significant for many companies, the cost of implementing the auditor attestation provision has been declining and varies by company size. For example, SEC's 2009 study on internal control over financial reporting found that, among other things, the mean auditor attestation cost declined from about $821,000 to about $584,000 (approximately 29 percent) pre- and post-2007 reforms for all companies that obtained an auditor attestation. Median costs declined from about $358,000 to $275,000 (approximately 23 percent) pre- and post-2007 reforms. According to the study and an academic we interviewed, costs have been declining for a variety of reasons, including companies and auditors gaining experience in the auditor attestation environment and the 2007 SEC and PCAOB guidance. The academic further stated that in the early years of implementation of Section 404(b), initial costs were high for all companies, in part, because they had not previously implemented effective internal controls. According to one study, there are two types of potential benefits or positive impacts—direct and indirect—that companies can receive from complying with the auditor attestation requirement. 
Direct benefits are those directly related to improvements in the company's financial reporting process, such as the quality of the internal control structure, the audit committee's confidence in the internal control structure, the quality of financial reporting, and the company's ability to prevent and detect fraud. Indirect benefits are other dimensions that may be affected by changes in the quality of the financial reporting process, such as a company's ability to raise capital, the liquidity of its common stock, and the confidence investors and other users of financial statements may have in the company. Respondents to our survey identified a number of benefits or positive impacts stemming from compliance with the auditor attestation requirement, although fewer of them perceived indirect benefits compared to direct benefits. Many survey respondents noted that they experienced a number of direct benefits. For example, we estimate that 80 percent of all companies view the quality of their company's internal control structure as benefiting from the auditor attestation; 73 percent view their audit committee's confidence in internal control over financial reporting as benefiting from the auditor attestation; 53 percent view their financial reporting as benefiting from the auditor attestation; and 46 percent view their ability to prevent and detect fraud as benefiting from the auditor attestation (see table 6). Our findings are consistent with other surveys. In particular, Protiviti's 2013 survey found that, among other things, 80 percent of respondents reported that their company's internal control over financial reporting structure had improved since they began complying with the auditor attestation requirement. Although respondents reported improved confidence in the financial reports of other Section 404(b)-compliant companies, fewer companies perceived indirect benefits of the requirement. 
Specifically, based on our survey results, no more than 30 percent of all companies with less than $10 billion in market capitalization perceived any of the identified indirect benefits (see table 6) as stemming from the auditor attestation requirement. Research suggests that auditor attestation generally has a positive effect on investor confidence. Although exempt companies are currently not required to disclose whether they voluntarily complied with the auditor attestation requirement in their annual reports, doing so would provide investors with important information that may influence their investment decisions. Recent empirical studies we reviewed found that auditor attestation of internal controls generally has a positive impact on investor confidence. Investor confidence is considered an indirect benefit to companies that comply with the auditor attestation requirement. Specifically, an auditor attestation of internal controls helps to reduce information asymmetries between a company’s management and investors. With increased transparency and better financial reporting due to reliable third-party attestation, investors face a lower risk of losses from fraud. This lowered risk has a number of positive consequences for companies, such as enabling them to pay less for the capital as more confident investors require a lower rate of return on their money. Because investor confidence is difficult to measure directly, empirical research has examined the impact of auditor attestation on other variables that are considered proxies for investor confidence, including the cost of equity and debt capital, stock performance, and liquidity. As described below, such research has found that the auditor attestation increases investor confidence. A 2012 study examined exempt and nonexempt companies with market capitalization between $25 million and $125 million. 
This study found that the market value of equity—as measured by the common stock price—is positively associated with the book value of equity—which is an element in financial statements—but that this relationship is stronger for nonexempt companies. In other words, investors appear to place greater trust in the book value of equity of companies that are subject to auditor attestation compared to those companies that are not. As a result, book value is more likely to have a positive effect on market value if the auditor attestation is present. These results are consistent with the notion that the auditor attestation provides useful and relevant information to investors. A 2013 study found that exempt companies that voluntarily comply with the auditor attestation enjoy a lower cost of capital (C. A. Cassell, L. A. Myers, and J. Zhou, "The Effects of Voluntary Internal Control Audits on the Cost of Capital," working paper, Feb. 13, 2013). Specifically, both the cost of equity and the cost of debt are significantly lower for companies that voluntarily comply with the requirement compared to those exempt companies that do not. Another study examined the market response to the exemption of companies with less than $75 million in public float from the auditor attestation requirement. The study found a negative market response to the exemption, but less so for those companies that voluntarily complied before 2009. It also found that, to reduce information asymmetry, companies that voluntarily comply use their compliance as a signal to the marketplace of the superior quality of their financial reporting—a signal that is credible because it is costly and difficult for companies with weak internal controls to imitate. Also, companies that voluntarily complied with auditor attestation had significant increases in liquidity. Other research supports the view that auditor attestation of internal control effectiveness matters for investors and other market participants insofar as adverse auditor reports have negative consequences for companies. 
Such consequences include higher cost of debt (and possibly higher cost of equity), lower probability that lenders will extend lines of credit, stricter loan terms, and unfavorable stock recommendations. While most research findings we reviewed suggest auditor attestation provides valuable information to investors and has a positive effect on confidence, a 2011 study questioned the value of the auditor attestation for small companies. Looking at exempt and small nonexempt companies with market capitalization of $300 million or less, the study found that small companies that became nonexempt, and therefore subject to the auditor attestation requirement, in 2004 experienced a statistically significant increase in their material weakness disclosure rate, but companies that remained exempt saw similar increases through their management reports under Section 404(a) of the Sarbanes-Oxley Act. The results suggest that auditor attestation provides little additional information to investors in terms of detecting material weaknesses because there was no statistically significant difference in the rate of disclosure of material weakness between the two types of companies. The majority of academics and market participants we interviewed suggested that auditor attestation positively affects investor confidence. Specifically, they told us that the involvement of auditors in attesting to the effectiveness of internal controls improves the reliability of the financial reporting and serves to protect investors. As a result, they said, the exemption granted to small companies is likely to reduce investor confidence because these companies already have greater informational asymmetry. They said that according to academic and other studies, small companies are also more likely than large ones to have serious internal control problems.
Furthermore, they commented that management’s report on internal controls alone is often uninformative because management often fails to detect internal control deficiencies or classifies them as less severe than they are. Some market participants also told us that any company accessing capital markets, regardless of size, should be required to comply with the auditor attestation requirement, as investors in any company, large or small, are entitled to the same protection. Our survey results also indicate that some companies view auditor attestation as contributing to investor confidence, a view consistent with findings from other studies and surveys. The majority of respondents reported being more confident in the financial reports of companies that comply with the auditor attestation requirement than in those of companies that do not. In addition, we estimate that 30 percent of responding nonexempt and exempt companies that voluntarily comply thought that the requirement increased investor confidence in their own company, while 20 percent were not sure and the remaining 50 percent reported no impact. This perspective is consistent with the results from an in-depth telephone survey that SEC conducted in 2009 with a small group of financial statement users—such as lenders, securities analysts, credit rating agencies, and other investors—regarding their views on the benefits of auditor attestation. These SEC survey respondents indicated that the auditor’s attestation report provides additional benefits to users and other investors beyond the management’s report under Section 404(a) and that the requirement generally has a positive impact on their confidence in companies’ financial reports.
Moreover, in response to a 2010 Center for Audit Quality (CAQ) survey of individual investors, almost two-thirds of investors said they were concerned about exempting companies with annual revenues of under $75 million from the independent auditor attestation requirement, suggesting that the requirement has a positive effect on individual investors’ confidence in the financial information generated by smaller companies. Similarly, in a 2012 survey of investors conducted by the PCAOB Investor Advisory Group on the role, relevance, and value of the audit, over 60 percent of respondents said that the auditor’s opinion on the effectiveness of internal controls is critical in making investment decisions. Further, in a 2012 survey of individual investors by CAQ, 70 percent of the respondents identified independent audits in general as the most effective means of protecting their interests. Explicit disclosure of auditor attestation status in exempt companies’ annual reports could quickly provide investors useful information that may influence their investment decisions. Currently, exempt companies are not required to disclose in their annual reports whether they have voluntarily obtained an auditor attestation on their internal controls. From 2005 through 2010, SEC granted small public companies multiple extensions from having to comply with the auditor attestation requirement. During this time of forbearance, SEC required exempt companies to include a general statement in their annual report that the company was not required to comply with the auditor attestation requirement because of SEC’s grant of temporary exemption status. According to SEC officials, the statement served to provide investors who may have been looking for the attestation an explanation of its absence. SEC granted its final temporary exemption to take effect on June 15, 2010, prior to the passage of the Dodd-Frank Act. 
SEC did not require exempt companies to include the disclosure statement when implementing the provision of the Dodd-Frank Act that created the permanent exemption. SEC officials said that it is not common for the agency to require a company to disclose compliance status for requirements that are not applicable to the company—a practice that, they noted, could potentially influence a company’s behavior. Further, SEC officials noted that information on the company’s filing status—and, therefore, exemption status—can be found in the company’s annual reports and other documents, which are available to all investors. These officials stated that such information allows investors to determine whether an attestation has been obtained. However, while this information is available, a company’s attestation status is not readily apparent without some knowledge or interpretation of the current reporting requirements. As noted earlier, SEC previously required companies to provide additional clarity on their compliance with the auditor attestation requirement. Thus, requiring companies to explicitly disclose their auditor attestation status would be consistent with SEC’s past actions. Further, federal securities laws require public companies to disclose relevant information to investors to aid them in their investment decisions. Many market participants we interviewed consider the external auditor’s assessment of the effectiveness of a company’s internal control over financial reporting to be important information for investors. Thus, many market participants we interviewed and companies we surveyed noted that exempt companies should be required to explicitly disclose whether or not they obtained an auditor attestation to make the information more transparent for investors.
In particular, according to the results of our survey, we estimate that 57 percent of all companies with less than $10 billion in market capitalization are in favor of requiring exempt companies to disclose whether they have voluntarily obtained an auditor attestation. A representative from one company said, “I believe there is an assumption that SEC-listed companies are in compliance with 404. If companies are not, they should disclose such.” A representative from another company said, “If investors value the independent audit, then they should be made aware of situations where such audit has not been performed. Investors should not have to interpret the regulations to know if the audit is required.” Some companies we surveyed that were not in favor of such disclosure generally believed that investors can get the information from the audit opinion in the annual report. As of year-end 2011, approximately 300 exempt companies had voluntarily complied with the auditor attestation requirement. Although information on voluntary compliance with the auditor attestation requirement is determinable, having the information explicitly disclosed could benefit investors. Such disclosure would increase transparency and investor protection by making investors more aware of this important investment information. Investors need accurate financial information with which to make informed investment decisions, and effective internal controls are necessary for accurate and reliable financial reporting. The attestation requirement is part of legislation aimed at helping to protect investors by, among other things, improving the quality of corporate financial reporting and disclosures. Perceptions of the costs and benefits of auditor attestation continue to vary among companies and others, but among other benefits, obtaining auditor attestation appears to have a positive impact on investor confidence.
In addition, our analysis found that companies (both exempt and nonexempt) that obtained an auditor attestation generally had fewer financial restatements than those that did not, which suggests that knowing whether a company has obtained the auditor attestation may be useful for investors in gauging the reliability of a company’s financial reporting. However, because SEC regulations currently do not require explicit statements regarding the voluntary attainment of auditor attestation, investors may have to interpret reporting requirements and filings to determine whether exempt companies have obtained an auditor attestation. Previously, when certain companies were temporarily exempt from the auditor attestation requirement, SEC required explicit disclosure of exemption status in companies’ annual reports. However, SEC eliminated this requirement in 2010 when companies of certain sizes were permanently exempted. Federal securities laws require public companies to disclose relevant information to investors to aid them in their investment decisions. Although information on a company’s exempt status is available to investors, explicit disclosure would increase transparency and investor protection by making investors readily aware of whether a company has obtained an auditor attestation on internal controls. The disclosure could serve as an important indicator of the reliability of a company’s financial reporting, which may influence investors’ decisions. To enhance transparency and investor protection, we recommend that SEC consider requiring public companies, where applicable, to explicitly disclose whether they obtained an auditor attestation of their internal controls. We provided a draft of the report to the SEC Chairman for her review and comment. SEC provided written comments that are summarized below and reprinted in appendix II. We also provided a draft of the report to PCAOB and relevant excerpts of the draft report to Audit Analytics for technical review. 
We received technical comments from SEC, PCAOB, and Audit Analytics that were incorporated as appropriate. In its written comments, SEC did not comment on our recommendation that it consider requiring public companies to explicitly disclose whether they have obtained an internal control attestation. Rather, SEC confirmed, as described in the draft report, that a nonaccelerated filer (referred to as an exempt company in our report) does not have to explicitly disclose whether it obtained an auditor attestation report on its internal controls in its annual report. However, SEC stated that this fact can be easily determined by investors from information that is already disclosed in the annual report. In addition, SEC stated that investors can also find information regarding the existence of an opinion on internal controls by looking at the audit report in the company’s filing. SEC also noted that PCAOB standards permit an auditor that is not engaged to opine on internal controls to include a statement in its report on the financial statements indicating that it is not opining on the internal controls. In our report, we acknowledge that information needed to determine a company’s auditor attestation status is available. However, because an explicit statement on the company’s status is not required, investors must deduce the company’s status from the available information. Explicit disclosure could significantly decrease the potential for investors to misinterpret the information regarding a company’s auditor attestation status. Such disclosure would increase transparency and investor protection by making investors readily aware of this important investment information. We therefore maintain that the disclosure warrants further consideration by SEC. We are sending copies of this report to appropriate congressional committees, SEC, PCAOB, Audit Analytics, and other interested parties.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report discusses: (1) how the number of financial statement restatements compares between exempt and nonexempt companies; (2) the costs and benefits for nonexempt companies as well as exempt companies that voluntarily comply with the auditor attestation requirement; and (3) what is known about the extent to which investor confidence in the integrity of financial statements is affected by whether or not companies comply with the auditor attestation requirement. For the purposes of this report, we define exempt companies as those with less than $75 million in public float (nonaccelerated filers) and nonexempt companies as those with $75 million or more in public float (accelerated filers). To address all three objectives, we reviewed and analyzed information from a variety of sources, including the Sarbanes-Oxley Act of 2002 (Sarbanes-Oxley Act), the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), relevant regulatory press releases and related public comment letters, and available research studies.
We also interviewed officials from the Securities and Exchange Commission (SEC) and the Public Company Accounting Oversight Board (PCAOB), and we interviewed chief financial officers of small public companies, representatives of relevant trade associations (representing individual and institutional investors, accounting companies, financial analysts and investment professionals, and financial executives), a large pension fund, a credit rating agency, academics knowledgeable about accounting issues, and industry experts. To determine the number of financial statement restatements (referred to as financial restatements) and trends, we analyzed data from the Audit Analytics database from 2005 through 2011. We used the Audit Analytics’ Auditor Opinion database to generate the population of exempt and nonexempt companies in each year from 2005 through 2011. Our analysis does not include 2012 data because 2012 small-company data was incomplete. According to Audit Analytics, the incomplete data was often due to the fact that small companies had not yet filed the relevant information with SEC. The sample we used to produce the population of exempt and nonexempt companies does not include subsidiaries of a public company, registered investment companies, or asset-backed securities issuers. Once we excluded these companies from the entire population, we grouped the remaining companies based on their filing status (i.e., nonaccelerated filer, smaller reporting company, accelerated filer, large accelerated filer, and filers that did not disclose their filing status). Exempt companies are nonaccelerated filers, including smaller reporting companies. For our purposes, we grouped companies that did not disclose their filing status but whose market capitalization was less than $75 million with exempt companies. We also identified for each year from 2005 through 2011 exempt companies that voluntarily complied with the integrated audit requirement as indicated in the data. 
Nonexempt companies are accelerated filers and large accelerated filers. For our purposes, we grouped companies that did not disclose their filing status but whose market capitalization was equal to or greater than $75 million with nonexempt companies. We excluded companies that did not disclose their filing status and did not have a reported market capitalization. We then used Audit Analytics’ Restatement database, which contains company information (e.g., assets, revenues, restatements, market capitalization, location, and industry classification code), to identify the number of financial restatements from 2005 through 2011 based on our population of exempt companies, exempt companies that voluntarily complied, and nonexempt companies. Using this database, we identified 6,436 financial restatements by 4,536 public companies, 2,834 of which were exempt companies. We used Audit Analytics’ 69 classifications to classify the type of financial restatements into six categories: core expenses (i.e., ongoing operating expenses), noncore expenses (i.e., nonoperating or nonrecurring expenses), revenue recognition (i.e., improperly recorded revenues), reclassifications and disclosures, underlying events (e.g., accounting for mergers and acquisitions), and other. The majority of restatements we classified were the result of an accounting rule misapplication. To identify audit costs of compliance, we analyzed data from Audit Analytics’ Auditor Opinion database, which contains auditors’ report information such as audit fees, nonaudit fees, auditor name, audit opinions, revenues, and company size, among other information from 2005 through 2011. Our analyses of audit costs do not include 2012 data because 2012 small-company data was incomplete, often because small companies had not yet filed the relevant information with SEC. We tested a sample of the Audit Analytics database information and found it to be reliable for our purposes.
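The grouping rules described above amount to a simple decision procedure. The following is a minimal sketch of that logic; the function name and status strings are illustrative, not actual Audit Analytics field values:

```python
def classify_company(filing_status, market_cap):
    """Group a company as exempt or nonexempt per the rules described above.

    Returns None for companies that are excluded from the population
    (no disclosed filing status and no reported market capitalization).
    """
    EXEMPT_STATUSES = {"nonaccelerated filer", "smaller reporting company"}
    NONEXEMPT_STATUSES = {"accelerated filer", "large accelerated filer"}

    if filing_status in EXEMPT_STATUSES:
        return "exempt"
    if filing_status in NONEXEMPT_STATUSES:
        return "nonexempt"
    # Filing status not disclosed: fall back on the reported market
    # capitalization, using the $75 million threshold.
    if market_cap is None:
        return None  # excluded from the population
    return "exempt" if market_cap < 75_000_000 else "nonexempt"
```

Treating the market-capitalization fallback as a separate branch mirrors the report's handling of filers that did not disclose their status.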
For example, we cross-checked random samples from each of Audit Analytics’ databases with information on financial restatements, filing status, and internal controls from SEC’s Electronic Data Gathering, Analysis, and Retrieval system. We also spoke with other users of Audit Analytics data as well as Audit Analytics officials. In addition, we reviewed relevant research studies and papers on the impact of compliance with the internal control audits on financial restatements. We consider the information to be reliable for our purpose of determining financial statement restatement trends and audit fee calculations. To examine the characteristics of publicly traded companies that complied, either voluntarily or because required, with the requirement to obtain an independent auditor attestation of their internal controls, we conducted a web-based survey of companies that had either voluntarily complied or were required to comply with the integrated audit requirement in any year between 2004 and 2011. Based on a list of publicly traded companies obtained from Audit Analytics, we identified 4,053 companies that had either voluntarily complied with the integrated audit requirement in any year from 2004 through 2011 or that were required to comply in 2011 as determined by their filing status. We stratified the population into three strata by first identifying the nonaccelerated voluntary filers. These are companies that voluntarily complied with the integrated audit requirement in any year from 2004 through 2011. Since our primary focus was on the nonaccelerated voluntary filers, we selected all 392 of these companies. From the remaining companies in the population, we created two additional strata based on 2011 filing status, and we took a random sample of companies from the remaining strata. 
The sample sizes for the remaining strata were determined to produce a proportion estimate within each stratum that would achieve a precision of plus or minus 10 percentage points or less, at the 95 percent confidence level. Finally, we increased the sample size based on the expected response rate of 40 percent. We submitted our survey to a total of 850 companies from the original population of 4,053. We identified 104 companies in our sample that were closed, merged with another company, or improperly included in the sampling frame. We received valid responses from 195 out of the remaining 746 sampled companies (see table 7). The weighted response rate, which accounts for the differential sampling fractions within strata, is 25 percent. We conducted this survey in a web-based format. The questionnaire was designed by a GAO survey specialist in collaboration with GAO staff with subject-matter expertise. The questionnaire was also reviewed by experts at SEC. We pretested drafts of our questionnaire with three public companies of different sizes to ensure that the questions and response categories were clear, that terminology was used correctly, and that the questions did not place an undue burden on the respondents. The pretests were conducted by telephone with company financial executives in Iowa, Virginia, and Washington, D.C. Pretests included GAO methodologists and GAO subject-matter experts. Based on the feedback received from the pretests, we made changes to the content and format of some survey questions. We directed our survey to the chief executive officer, chief financial officer, or chief accounting officer, whose names and email addresses we obtained from Nexis. We activated our web-based survey on December 17, 2012, and closed the survey on February 19, 2013. We sent follow-up emails on three occasions to remind respondents to complete the survey and conducted telephone follow-ups to increase the response rate.
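The stratum sample-size calculation described above (a proportion estimate within plus or minus 10 percentage points at the 95 percent confidence level, inflated for an expected 40 percent response rate) follows a standard formula. The sketch below uses the conservative assumption p = 0.5 and an illustrative stratum size; it is not GAO's actual computation:

```python
import math

def stratum_sample_size(population_size, margin=0.10, z=1.96,
                        expected_response_rate=0.40):
    """Sample size for a proportion estimate within +/- `margin` at roughly
    the 95 percent confidence level (z = 1.96), using the conservative
    p = 0.5 and a finite population correction, then inflated for the
    expected response rate."""
    n0 = (z ** 2) * 0.25 / margin ** 2           # infinite-population size
    n = n0 / (1 + (n0 - 1) / population_size)    # finite population correction
    n = math.ceil(n)
    return math.ceil(n / expected_response_rate)  # inflate for nonresponse
```

For an illustrative stratum of 1,800 companies, this yields a required sample of 92 respondents and a mailed sample of 230 after inflating for the 40 percent expected response rate.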
Because our survey was based on a random sample of the population, it is subject to sampling errors. In addition, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how respondents interpret a particular question or in the sources of information available to them may introduce errors. We took steps, such as those described above, to minimize such nonsampling errors during questionnaire development as well as during data collection and analysis. For example, because this was a web-based survey, respondents entered their responses directly into the database, reducing the possibility of data-entry error. Finally, when the data were analyzed, a second independent analyst reviewed all computer programs. We conducted an analysis of our survey results to identify potential sources of nonresponse bias using two methods. First, we examined the response propensity of the sampled companies by several demographic characteristics. These characteristics included market capitalization size categories, region, and sector. Our second method consisted of comparing weighted estimates from respondents and nonrespondents to known population values for total market capitalization. We conducted statistical tests of differences, at the 95 percent confidence level, between estimates and known population values, and between respondents and nonrespondents. We determined that there was significant bias induced by the largest companies (measured by market capitalization) not responding to the survey. In other words, we found that companies with market capitalization over $10 billion were underrepresented in our sample. However, we found no evidence of substantial nonresponse bias based on these characteristics when generalizing to the population of companies with market capitalization less than or equal to $10 billion.
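A comparison of respondents and nonrespondents on a demographic characteristic, as described above, can be sketched with a standard two-proportion z-test. The counts below are illustrative, not the survey's actual figures:

```python
import math

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two sample proportions,
    using the pooled standard error. |z| > 1.96 indicates a difference
    that is statistically significant at the 95 percent confidence level."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative: share of companies in some sector among 195 respondents
# versus 551 nonrespondents.
z = two_proportion_ztest(50, 195, 200, 551)
```

Here |z| exceeds 1.96, so the hypothetical difference would be flagged as statistically significant at the 95 percent confidence level.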
Therefore, we adjusted the scope of our survey to include only those companies with market capitalization of less than or equal to $10 billion (see table 8). Because we found no evidence of substantial nonresponse bias when generalizing to the adjusted target population, and in light of the weighted response rate of 25 percent, we determined that weighted estimates generated from these survey results are generalizable to the population of in-scope companies. We generated weighted estimates and generalized the results to the estimated in-scope population of 3,432 companies (plus or minus 42 companies). Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report includes the true values in the study population. All percentage estimates presented in this report have a margin of error of plus or minus 15 percentage points or fewer, and all estimates of averages have a relative margin of error of plus or minus 20 percent or less, unless otherwise noted. To obtain information on the impact of obtaining an auditor attestation on a company’s cost of capital, we included questions in our web-based survey to large and small public companies of various industries about this matter; interviewed trade associations, industry experts, a large pension fund, and academics; and reviewed relevant academic and SEC research studies.
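The weighted estimates and 95 percent confidence intervals described above can be sketched with the standard stratified-sampling formulas. The strata below are illustrative (three made-up strata whose population sizes happen to total 3,432), not the survey's actual design:

```python
import math

def stratified_proportion(strata, z=1.96):
    """Weighted proportion estimate and 95 percent margin of error from a
    stratified random sample. Each stratum is a tuple (N_h, n_h, x_h):
    population size, respondents, and respondents answering 'yes'."""
    N = sum(N_h for N_h, _, _ in strata)
    p = sum(N_h * (x_h / n_h) for N_h, n_h, x_h in strata) / N
    var = sum(
        (N_h / N) ** 2                              # squared stratum weight
        * (1 - n_h / N_h)                           # finite population correction
        * (x_h / n_h) * (1 - x_h / n_h) / (n_h - 1)  # within-stratum variance
        for N_h, n_h, x_h in strata
    )
    return p, z * math.sqrt(var)

# Illustrative strata: (population, respondents, 'yes' responses).
p, margin = stratified_proportion([(392, 100, 40), (1500, 60, 30), (1540, 35, 14)])
```

Weighting each stratum's sample proportion by its population share is what allows results from the disproportionate stratified design to generalize to the in-scope population.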
To examine the extent to which investor confidence in the integrity of financial statements is affected by companies’ compliance with the auditor attestation requirement, we reviewed relevant empirical literature written by academic researchers, as well as recent surveys, studies, reports, and articles by others. To identify these studies, we asked for recommendations from academics, SEC, PCAOB, and representatives of organizations that address issues related to the auditor attestation requirement. We reviewed bibliographies of papers we obtained to identify additional material. In addition, we conducted searches of online databases such as ProQuest and Nexis using keywords to link Section 404(b) of the Sarbanes-Oxley Act with investor confidence. We also conducted interviews with agencies and organizations, as well as academics and other knowledgeable individuals who focus on issues related to investor confidence and the auditor attestation requirement. Moreover, we interviewed small public companies that were exempt from the auditor attestation requirement but nonetheless complied with it. In addition, we reviewed surveys undertaken by various government agencies and organizations to gauge the impact of the auditor attestation on investor confidence. We conducted a focused review of the research related to Section 404(b) of the Sarbanes-Oxley Act and summarized the recent studies most relevant to our objective. The empirical research discussed may have limitations, such as accuracy of measures and proxies used. We reviewed published works by academic researchers, government agencies, and organizations with expertise in the field. We performed our searches from September 2012 through May 2013. We assessed the reliability of these studies for use as corroborating evidence and found them to be reliable for our purposes. We also included questions in our web-based survey to large and small public companies of various industries about this matter.
Lastly, we reviewed relevant federal securities laws, the Securities Act of 1933 and the Securities Exchange Act of 1934. We conducted this performance audit from May 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Karen Tremba, (Assistant Director), James Ashley, Bethany Benitez, William Chatlos, Janet Eackloff, Joe Hunter, Cathy Hurley, Stuart Kaufman, Marc Molino, Lauren Nunnally, Jennifer Schwartz, and Seyda Wentworth made key contributions to this report. Alexander, C. R., S. W. Bauguess, G. Bernile, Y. A. Lee, and J. Marietta-Westberg. “The Economic Effects of SOX Section 404 Compliance: A Corporate Insider Perspective.” Working paper. March 2010. Asare, S. K., and A. Wright. “The Effect of Type of Internal Control Report on Users’ Confidence in the Accompanying Financial Statement Audit Report.” Contemporary Accounting Research, vol. 29, no. 1 (2012). Ashbaugh-Skaife, H., D. Collins, W. Kinney, and R. LaFond. “The Effect of Internal Control Deficiencies on Firm Risk and Cost of Equity.” Journal of Accounting Research, vol. 47, no. 1 (2009). Audit Analytics. “2011 Financial Restatements: An Eleven Year Comparison.” Sutton, Mass.: 2012. Audit Analytics. “2009 Financial Restatements: A Nine Year Comparison.” Sutton, Mass.: February 2010. Audit Analytics. “Restatements Disclosed by the Two Types of SOX 404 Issuers: (1) Auditor Attestation Filers and (2) Management-Only Report Filers.” Sutton, Mass.: November 2009. Brown, K., P. Pacharn, J. Li, E. Mohammad, F. A. Elayan, and F. Chu.
“The Valuation Effect and Motivations of Voluntary Compliance with Auditor’s Attestation Under Sarbanes-Oxley Act Section 404 (B).” Working paper. January 15, 2012. Cassell, C. A., L. A. Myers, and J. Zhou. “The Effects of Voluntary Internal Control Audits on the Cost of Capital.” Working paper. February 13, 2013. Chief Financial Officers’ Council and the President’s Council on Integrity and Efficiency. Estimating the Costs and Benefits of Rendering an Opinion on Internal Control over Financial Reporting. Coates IV, J. C. “The Goals and Promise of the Sarbanes-Oxley Act.” Journal of Economic Perspectives, vol. 21, no. 1 (2007). Crabtree, A., and J. J. Mahler. “Credit Ratings, Cost of Debt, and Internal Control Disclosures: A Comparison of SOX 302 and SOX 404.” The Journal of Applied Business Research, vol. 28, no. 5 (2012). Dhaliwal, D., C. Hogan, R. Trezevant, and M. Wilkins. “Internal Control Disclosures, Monitoring, and the Cost of Debt.” The Accounting Review, vol. 86, no. 4 (2011). GAO. Community Banks and Credit Unions: Impact of the Dodd-Frank Act Depends Largely on Future Rule Makings. GAO-12-881. Washington, D.C.: September 13, 2012. GAO. Financial Restatements: Update of Public Company Trends, Market Impacts, and Regulatory Enforcement Activities. GAO-06-678. Washington, D.C.: March 5, 2007. GAO. Sarbanes-Oxley Act: Consideration of Key Principles Needed in Addressing Implementation for Smaller Public Companies. GAO-06-361. Washington, D.C.: April 13, 2006. Holder, A. D., K. E. Karim, and A. Robin. “Was Dodd-Frank Justified in Exempting Small Firms from Section 404b Compliance?” Accounting Horizons, vol. 27, no. 1 (2013). Iliev, P. “The Effect of SOX Section 404: Costs, Earnings Quality, and Stock Prices.” Journal of Finance, vol. 65, no. 3 (2010). Kim, J. B., B. Y. Song, and L. Zhang. “The Internal Control Weakness and Bank Loan Contracting: Evidence from SOX Section 404 Disclosures.” The Accounting Review, vol. 86, no. 4 (2011). Kinney, W. R., and M. L.
Shepardson. “Do Control Effectiveness Disclosures Require SOX 404(b) Internal Control Audits?: A Natural Experiment with Small U.S. Public Companies.” Journal of Accounting Research, vol. 49, no. 2 (2011). Krishnan, G.V., and W. Yu. “Do Small Firms Benefit from Auditor Attestation of Internal Control Effectiveness?” Auditing: A Journal of Practice and Theory, vol. 34, no. 4 (2012). Nagy, A. L. “Section 404 Compliance and Financial Reporting Quality.” Accounting Horizons, vol. 24, no. 3 (2010). Orcutt, J. L. “The Case Against Exempting Smaller Reporting Companies from Sarbanes-Oxley Section 404: Why Market-Based Solutions are Likely to Harm Ordinary Investors.” Fordham Journal of Corporate and Financial Law, vol. 14, no. 2 (2009). Schneider, A., A. Gramling, D. R. Hermanson, and Z. Ye. “A Review of Academic Literature on Internal Control Reporting Under SOX.” Journal of Accounting Literature, vol. 28 (2009). Schneider, A., and B. K. Church. “The Effect of Auditors’ Internal Control Opinions on Loan Decisions.” Journal of Accounting and Public Policy, vol. 27, no.1 (2008). Scholz, Susan. The Changing Nature and Consequences of Public Company Financial Restatements: 1997-2006. A special report prepared at the request of the Department of the Treasury. April 2008. U.S. Securities and Exchange Commission. Study and Recommendations on Section 404(b) of the Sarbanes-Oxley Act of 2002 For Issuers with Public Float Between $75 and $250 Million. Washington, D.C.: 2011. U.S. Securities and Exchange Commission. Study of the Sarbanes-Oxley Act of 2002 Section 404 Internal Control over Financial Reporting Requirements. Washington, D.C.: 2009. Center for Audit Quality. The CAQ’s Sixth Annual Main Street Investor Survey, September 2012. Center for Audit Quality. The CAQ’s Fourth Annual Individual Investor, September 2010. Financial Executives International and Financial Executives Research Foundation, 2012 Audit Fee Survey. Morristown, N.J.: 2012. 
Financial Executives International and Financial Executives Research Foundation, Special Survey on Sarbanes-Oxley Section 404 Implementation. Morristown, N.J.: 2005. PCAOB. 2012 SOX Compliance Survey: Role, Relevancy and Value of the Audit. 2012. Protiviti, 2013 Sarbanes-Oxley Compliance Survey: Building Value in Your SOX Compliance Program. 2013. Protiviti, 2012 Sarbanes-Oxley Compliance Survey: Where U.S.-Listed Companies Stand – Reviewing Cost, Time, Effort and Process. 2012.
|
Section 404(b) of the Sarbanes-Oxley Act requires a public company to have its independent auditor attest to and report on management's internal control over financial reporting; this is known as the auditor attestation requirement. In July 2010, the Dodd-Frank Wall Street Reform and Consumer Protection Act exempted companies with less than $75 million in public float from the auditor attestation requirement. The act mandated that GAO examine the impact of the permanent exemption on the quality of financial reporting by small public companies and on investors. This report discusses (1) how the number of financial statement restatements compares between exempt and nonexempt companies (i.e., those with $75 million or more in public float), (2) the costs and benefits of complying with the attestation requirement, and (3) what is known about the extent to which investor confidence is affected by compliance with the auditor attestation requirement. GAO analyzed financial restatements and audit fees data; surveyed 746 public companies with a response rate of 25 percent; interviewed regulatory officials and others; and reviewed laws, surveys, and studies. Since the implementation of the auditor attestation requirement of the Sarbanes-Oxley Act of 2002 (Sarbanes-Oxley Act), companies exempt from the requirement have had more financial restatements (a company's revision of publicly reported financial information) than nonexempt companies, and the percentage of exempt companies restating generally has exceeded that of nonexempt companies. Exempt and nonexempt companies restated their financial statements for similar reasons (e.g., revenue recognition and expenses), and the majority of these restatements produced a negative effect on the companies' financial statements. Views on the costs and benefits of auditor attestation vary among companies and others. 
Although companies and others reported that the costs associated with compliance can be significant, especially for smaller companies, GAO's and others' analyses show that these costs have declined for companies of all sizes since 2004. Companies and others reported benefits of compliance, such as improved internal controls and reliability of financial reports. However, measuring whether auditor attestation compliance costs outweigh the benefits is difficult and views among companies and others were mixed as to whether the costs exceeded the benefits of compliance. A majority of empirical studies GAO reviewed suggest that compliance with the auditor attestation requirement has a positive impact on investor confidence in the quality of financial reports. Some interviewees said the independent scrutiny of a company's internal controls is an important investor protection safeguard. The Securities and Exchange Commission (SEC) does not require exempt companies to disclose in their annual report whether they voluntarily obtained an auditor attestation. SEC officials said it is not common for SEC to require a company to disclose voluntary compliance with requirements from which it is exempt. However, federal securities laws require companies to disclose relevant information to investors to aid in their investment decisions. Although information on auditor attestation status is available to investors, requiring a company to explicitly state whether it has obtained an auditor attestation on internal controls could increase transparency and investor protection. GAO recommends that SEC consider requiring public companies, where applicable, to explicitly disclose whether they obtained an auditor attestation of their internal controls. SEC responded that investors could determine attestation status from available information. But without clear disclosure, investors may misinterpret a company's status; therefore, this warrants SEC's further consideration.
|
Federal agencies rely on a mix of public and private sector sources to perform a wide variety of commercial activities, such as information technology, building maintenance, property management, and logistics. Competitive sourcing is the term used to describe the strategy under which agencies use competitions between public and private sector organizations to identify the most cost-effective provider of commercial activities. In 2001 and 2002, the Comptroller General convened a Commercial Activities Panel to study the policies and procedures governing competitive sourcing. This panel included officials from federal agencies, federal labor unions, and private industry. The panel unanimously approved a set of 10 principles (see sidebar), and a supermajority of two-thirds of the panel members adopted an additional set of recommendations that they believed would significantly improve the government’s policies and procedures for making competitive sourcing decisions. [Sidebar: The panel’s recommendations addressed work that may be performed by either the public or the private sector and would permit public and private sources to participate in competitions for work currently performed in-house, work currently contracted to the private sector, and new work, consistent with these principles, saving dollars by taking advantage of competitive forces.] Further, in July 2008, OMB issued a memorandum on commercial services management recognizing that agencies should improve the operation of their commercial functions using a variety of techniques—such as business process re-engineering efforts and strengthened oversight of contractors—in addition to competitive sourcing. [Sidebar: Inherently governmental activities are functions that are so intimately related to the public interest that they require performance by federal government employees. These functions normally fall into two categories: the exercise of sovereign government authority or the establishment of procedures and processes related to the oversight of monetary transactions or entitlements.]
The first step in the competitive sourcing process is for agencies to determine which activities are suitable for competition. In accordance with the Federal Activities Inventory Reform Act of 1998 (FAIR Act) and OMB Circular No. A-76, federal agencies categorize all of the activities performed by their employees as either inherently governmental (not subject to competitive sourcing) or commercial (potentially subject to competitive sourcing). OMB Circular No. A-76 then directs agencies to further categorize their commercial activities according to six “reason codes,” with only one code signifying suitability for competitive sourcing that year. Agencies are allowed considerable discretion in how they categorize their activities, subject to review by OMB. Once the annual inventory is complete, agencies then select which activities will be competed and begin planning the associated competitions. In this stage, agencies separate the selected activities into groups and develop a full description of each group—called a “statement of work”—that will serve as a guide to potential bidders on what will be required by the final contracts or letters of obligation. Agencies also develop quality assurance plans and cost estimates to be used as standards against which to evaluate the performance of the winning service provider and the cost savings achieved by the competition. In addition, agencies develop an in-house bid, or “tender,” under which agency employees will perform the work if the in-house government bid wins the competition. The staffing plan identified in the in-house agency bid is referred to as a “most efficient organization” (MEO). The MEO is not usually a representation of the incumbent organizational structure “as is,” but more commonly, it reflects a smaller, restructured version of the incumbent government organization doing the work.
[Sidebar: A streamlined competition is a simplified competition process that may be used with activities of 65 or fewer FTEs, requiring less analysis and documentation than a standard competition. In-house bids for streamlined competitions are based on the incumbent “as is” organization, but agencies are encouraged to develop a more efficient organization.] Generally, the lowest cost provider that is technically acceptable is awarded the contract, but factors other than cost may be considered in some circumstances. If a contractor (private sector service provider) wins the competition, certain federal worker protections are required, such as the right to “first refusal,” in which the private sector service provider winning the competition generally must first offer any new employment openings under the contract to qualified government employees who were (or who will be) adversely affected as a result of the awarding of the contract. If the in-house government service provider wins the competition, other federal worker protections apply, such as those governing grade and salary retention rights. [Sidebar: A standard competition is a competitive process to be used when more than 65 FTEs are involved (but it may be used when fewer than 65 are involved). Bids are required to include quality control plans, and agencies are required to develop plans to measure the winning service provider’s performance. In-house bids for standard competitions are to include a most efficient organization (MEO) and more detailed analysis and documentation than for streamlined competitions.] Once the competition is complete and the letter of obligation or contract is awarded, agencies are required to monitor the performance of the winning service provider on an ongoing basis and must report findings to both Congress and OMB, regardless of whether the winner is the in-house government service provider or a private sector service provider. For example, federal law requires agencies to submit annual reports to Congress on competitions announced and completed. In addition, OMB Circular No.
A-76 and other guidance directs agencies to monitor postcompetition performance of the winning service provider and to track the actual costs of the performance. (See appendix III for more details.) Within DOL, the Office of Asset and Resource Management is responsible for planning and conducting the FAIR Act inventories of commercial and inherently governmental activities. It is also responsible for managing DOL’s competitive sourcing program, including the planning, monitoring, and evaluation of potential opportunities to improve effective and efficient program delivery at DOL. For example, this office coordinates the PCAR for each competition. According to DOL policy and procedures, an initial PCAR is normally conducted by an independent review official after the first full year of performance following a competition, with annual PCARs thereafter for the duration of the contract, in order to meet formal review and inspection requirements in OMB Circular No. A-76 and the Federal Acquisition Regulation. The competition process at DOL is illustrated in figure 1. Beginning in fiscal year 2004, DOL’s strategy for identifying and selecting work activities for competitive sourcing competitions involved starting out small in scope and gradually expanding its efforts over time. DOL’s first competitions in fiscal year 2004 involved mostly small groups of FTEs within a single DOL office (see table 1). By fiscal year 2007, DOL had expanded its public-private competitions to include functions involving a greater number of FTEs across multiple DOL offices. In addition, the number of private sector bids decreased over time. For a complete listing of DOL’s fiscal year 2004 through 2007 competitive sourcing competitions, see appendix V. DOL has made progress developing a system to assess the performance of winning service providers in its competitive sourcing program. 
DOL’s system, as outlined in its policy and procedures issued in 2005, directs DOL offices to ensure that (1) records are maintained for independent review of the competition, (2) all assessments contain criteria to measure performance, and (3) lessons learned are reported. In our review of all assessments conducted as of July 2008, we found that these policies and procedures generally were followed and that these assessments provide key information for DOL policymakers to evaluate the effectiveness of its competitive sourcing program. However, we found that DOL lacks a departmentwide process for tracking and addressing deficiencies and recommendations for improvement. In 2005, DOL issued policy and procedures for conducting PCARs—DOL’s system for monitoring performance in accordance with OMB Circular No. A-76 and the Federal Acquisition Regulation. We examined all of DOL’s initial PCARs completed as of July 2008 (18 total), and we found that DOL’s policy and procedures generally were followed in conducting the reviews. Most initial PCARs were completed in a timely manner and most records were maintained for review. Criteria to measure performance were established for half of the competitions, and the majority of initial PCARs included lessons learned. According to DOL policy and procedures and DOL officials, initial PCARs are normally conducted approximately 1 year after the first full year of performance for each competition. As of July 2008, we found that DOL had completed 18 of these reviews, based on the 21 competitions that were completed during fiscal years 2004 through 2006. Of the 3 competitions that did not have an initial PCAR, one case involved a fiscal year 2005 competition that, according to DOL officials, had been delayed in implementation. The initial PCAR for this competition was later completed in September 2008. 
The second case was DOL’s very first fiscal year 2004 competition to be won by a private sector service provider, and at the time, DOL had not yet issued the policy and procedures for conducting PCARs. In the last case, the contract was terminated a few months before the initial PCAR was expected to be completed. In addition to calling for initial PCARs, DOL’s policy also calls for annual PCARs thereafter. As of August 2008, we found that DOL had completed two annual follow-up reviews of the 14 cases where 1 year or more had elapsed since the initial PCAR. In 4 cases, implementation of the competitions had been terminated. In the remaining 10 cases, the follow-up reviews were still pending; in 6 cases, 2 or 3 years had elapsed since the initial PCAR was completed. A senior DOL official explained that DOL interpreted OMB guidance as calling for follow-up reviews only for certain standard competitions. However, as noted by OMB officials, OMB’s guidance states that all competitions should still be reviewed as part of the agency’s management oversight activities (unless otherwise exempted by law). Thus, in all of these cases, follow-up PCARs should be completed annually in accordance with DOL’s policy for performance monitoring. Following issuance of DOL’s policy and procedures in 2005, DOL officials generally maintained the records needed for conducting PCARs, but this was not the case at the outset of DOL’s competitive sourcing program. Independent review officials noted that they were unable to fully assess four competitions completed in fiscal year 2004 because of missing documentation. For example, these reviewers noted that records such as the initial solicitation and public announcement of the competition, backup cost information, and the performance decision were missing. DOL officials explained that these fiscal year 2004 competitions were the department’s first under its competitive sourcing program and that they experienced a learning curve.
They said that the missing files for all of these competitions have been recovered and corrective actions such as recreating the files have been taken. The independent review officials for all four PCARs also noted that the files had been recreated for each of their competitions. Subsequent reviews of other competitions completed in fiscal years 2005 through 2007 did not cite similar problems. According to the PCARs, criteria to measure performance had been established for half of the competitions reviewed. OMB Circular No. A-76 calls for quality assurance and quality control plans to be established to assist agencies with monitoring the performance of winning service providers for standard competitions. Although OMB Circular No. A-76 does not specify a requirement for streamlined competitions, DOL’s policy and procedures call for streamlined competitions to establish quality assurance plans or, at a minimum, abbreviated work requirements, with quality control plans optional in some cases. Of the 18 initial PCARs completed as of July 2008, we found that independent reviewers identified a lack of quality assurance plans in nine cases and a lack of quality control plans in seven cases (all of which were streamlined competitions). In three of the nine cases lacking quality assurance plans, reviewers noted the difficulty in assessing the performance of a winning service provider without any kind of general standards or requirements that may be used to measure performance. In addition, in one case that had established a quality control plan, the independent review official commented that the service provider who had won the competition was not utilizing the quality control plan. The majority of the 18 initial PCARs completed as of July 2008 reported information on lessons learned: 13 provided this information, but the remaining 5 did not. OMB Circular No. 
A-76 calls for agencies to allocate resources to effectively apply a clear, transparent, and consistent competition process based on lessons learned and best practices. DOL policy and procedures also state that reporting lessons learned in a competition should be documented in each PCAR. Yet, a senior DOL management official stated that DOL considers providing lessons learned in a PCAR to be a best practice, rather than a requirement, and that the “lessons learned” often can be found elsewhere in the body of the review. However, in three initial PCARs, reviewers noted specifically that there were no lessons learned identified or reported in any part of the reviews. In one other follow-up PCAR, the reviewer noted that lessons learned were not formally documented, but the in-house organization has effectively applied lessons learned after the competition decision. DOL does not ensure that deficiencies identified and recommendations made in initial PCARs are tracked and followed up on at a departmentwide level. Instead, DOL relies on an ad hoc process. As a result, DOL is hindered in its ability to systematically monitor performance trends and determine if the winning service providers are performing more efficiently than the prior service providers. OMB Circular No. A-76 directs agencies to maintain a database to track the execution of competitions through completion of the last performance period (or cancellation of the competition), and to post best practices and lessons learned. In addition, guidance on internal controls from OMB, GAO, and others typically points out that taking a more systematic approach to identifying weaknesses and needed improvements enhances the accountability and effectiveness of an agency’s programs. For example, OMB Circular No. 
A-123 directs agencies and individual federal managers to take systematic and proactive measures to identify needed improvements and to take corresponding corrective action to improve the accountability and effectiveness of their programs. OMB Circular No. A- 123 also directs agencies to carefully consider whether systemic weaknesses exist that adversely affect internal control across organizational or program lines. With respect to internal controls, GAO has issued standards which state that assessing the quality of performance over time is a key aspect of internal control monitoring in a government agency and that managers need to compare actual performance to planned or expected results throughout the organization and analyze significant differences. In addition, GAO’s Commercial Activities Panel report states that methods to track success or deviation from objectives are required to ensure accountability. All of DOL’s 18 initial PCARs completed as of July 2008 contained recommendations for improvements for each of their competitions. The recommendations included suggestions such as modifying the performance work statement to more accurately reflect the workload of the winning service provider, developing educational briefings, and providing an example of a completed PCAR in DOL’s policy and procedures for conducting performance reviews. But DOL does not track such recommendations at the departmentwide level. According to a senior DOL official in the Office of Asset and Resource Management, it is the responsibility of each individual DOL office—such as the Mine Safety and Health Administration or the Office of Administrative Law Judges—to document and respond to deficiencies and recommendations noted in the initial PCARs. Information about whether any deficiencies have or have not been addressed is maintained only at the individual office level. 
At our request, DOL officials from individual offices were able to provide information for a sample of six competitions that described how they had followed up on some of the issues reported in the initial PCARs. For example, DOL officials stated that after tracking the findings from one PCAR, they decided to conduct a follow-up work management study that provided a blueprint for undertaking a series of programmatic and quality assurance surveillance improvements. Senior DOL officials also told us that they have an executive steering committee, with members from its competitive sourcing, human resources, and labor management relations offices, that meets weekly to discuss items that need to be adjusted in competitive sourcing competitions. However, as one DOL senior management official acknowledged, they do not always follow up on all of the problems that they keep on file. Thus, DOL’s ad hoc system does not currently take a systematic approach to identifying weaknesses and needed improvements to enhance the effectiveness and accountability of its competitive sourcing program across the organization, as called for by OMB guidance and GAO internal control standards. DOL’s savings reports for competitive sourcing, while adhering to OMB guidance, exclude a number of substantial costs and also are unreliable. OMB’s guidance directs agencies to exclude certain costs associated with the competitions, such as some staff costs and costs incurred before the competition’s announcement. These costs can be substantial. In addition, DOL’s savings reports are unreliable for a number of reasons. For example, we found cases of inflated savings reports due to calculation errors, the use of projections rather than actual costs, and the use of baseline costs that were inaccurate and misrepresented actual savings. DOL’s savings reports to Congress are not comprehensive because they exclude substantial costs associated with the competition process. 
Although these reporting practices conform to OMB’s guidance for competitive sourcing, reporting costs in this way does not comprehensively assess competitive sourcing as a tool to manage a particular commercial activity, compared with other possible management tools. The Consolidated Appropriations Act, 2004, established a requirement for all executive agencies to report on their competitive sourcing efforts for the prior fiscal year. As part of this law, Congress requires agencies to report the incremental cost directly attributable to conducting the competitions, including costs attributable to paying outside consultants and contractors; an estimate of the total anticipated savings, or a quantifiable description of improvements in service or performance, derived from completed competitions; and actual savings, or a quantifiable description of improvements in service or performance, derived from the implementation of competitions completed after May 29, 2003. In its oversight role for competitive sourcing, OMB issues a yearly memorandum providing guidance to agency heads on how to develop this report. From 2004 through 2007, these memos have directed agencies to exclude certain costs that are associated with the competition process (see table 2). OMB officials told us that their policy directs agencies to exclude certain costs because these costs reflect what would be incurred as part of an agency’s typical management responsibilities. For example, OMB directs agencies to exclude the costs of precompetition planning and agency staff time spent on competition activities, as these activities can help the agency identify and correct performance gaps and improve efficiency and should be taking place whether or not the agency is conducting any competitions. 
Additionally, OMB officials explained that transition costs associated with competitions should be excluded because such costs also occur with other management processes, such as new program re- engineering and separation payments provided to employees who are displaced by a downsizing. Similarly, they explained that the costs associated with conducting PCARs should be excluded because these reviews help organizations identify and correct performance gaps in their work groups and should be considered as part of normal business operations. OMB officials commented that they do not believe that excluding these costs has a major impact on an agency’s ability to determine the cost-effectiveness of competitive sourcing as a management tool. However, because OMB’s guidance directs agencies to exclude certain costs, the full cost associated with DOL’s competitive sourcing program is not transparent. Since 1990, we have reported that improvements in the completeness and accuracy of savings reports of competitive sourcing could help present a more comprehensive picture of program costs and benefits and help determine the most cost-effective use of resources. For example, in our reviews of the competitive sourcing programs at DOD and USDA’s Forest Service, we recommended that these agencies improve the way they account for and report costs associated with their competitive sourcing programs. In this review, we found similar issues with the comprehensiveness of DOL’s savings reports. Specifically, DOL reported a total of $15.7 million in savings and $4.3 million in competition costs for all of its completed competitions for fiscal years 2004 through 2007. While DOL reported these savings in conformance with OMB guidance, we found that the excluded costs attributable to competitive sourcing over this period were substantial, and—importantly—it is not clear that these costs would be incurred when using a commercial management tool other than competitive sourcing. 
For example, consistent with OMB guidance, DOL excluded costs attributable to the time in-house staff spent on assisting with competition activities (staff not dedicated to central oversight of DOL’s competitive sourcing program). While these staff are already paid by the government, their time spent away from regular work duties represents a cost that is attributable to the competition process. We were not able to obtain specific estimates on the number of hours that such staff members spent on competition activities, since DOL does not require its offices to record this information. However, employees in one office who were at the GS-12 level or higher estimated that they worked a total of 2,263 hours on one competition. Including these staff costs would have doubled the costs reported by DOL for this competition. Employee responses in our interviews suggest that the amount of time employee staff spent assisting on competitions varied greatly, with some staff members spending little, if any, time on competition activities and others reporting that they spent one-quarter to one-half of their total working time over the course of a year. According to OMB’s guidance, agencies should also exclude costs incurred during the preliminary planning phase of a competition, such as the use of contractors, as well as other costs that are directly related to the conduct of the competition (see table 2). DOL employed private sector consultants to conduct precompetition planning, including feasibility studies before the competition phase, and to conduct PCARs following the competition. DOL also employed private sector consultants for other activities related to competitive sourcing, such as to conduct business and industry analyses to determine the likelihood of generating private sector offers and to review the government positions open for bidding to determine if they had been appropriately designated during the FAIR Act inventory process.
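The effect of excluding in-house staff time can be sketched with a short calculation. This is a minimal illustration, not DOL’s actual accounting: the 2,263-hour staff estimate comes from the report, but the reported competition cost and the fully loaded hourly rate below are hypothetical values chosen so that the staff cost roughly equals the reported cost, matching the "would have doubled" pattern described above.

```python
# Sketch: how excluding in-house staff time understates reported competition costs.
# Only the 2,263-hour figure comes from the report; the reported cost and
# hourly rate are illustrative assumptions.

def competition_costs(reported_cost, staff_hours, loaded_hourly_rate,
                      other_excluded=0.0):
    """Return (OMB-reportable cost, excluded staff cost, comprehensive cost)."""
    staff_cost = staff_hours * loaded_hourly_rate
    comprehensive = reported_cost + staff_cost + other_excluded
    return reported_cost, staff_cost, comprehensive

reported, staff, total = competition_costs(
    reported_cost=100_000,     # hypothetical OMB-reportable figure
    staff_hours=2_263,         # GS-12+ hours estimated by one DOL office
    loaded_hourly_rate=44.19,  # assumed fully loaded rate (illustrative)
)
# With these assumptions the excluded staff cost roughly equals the reported
# cost, so including it about doubles the total.
print(f"reported: ${reported:,.0f}  staff: ${staff:,.0f}  total: ${total:,.0f}")
```

The same pattern extends to the other exclusions discussed above (separation payments, re-engineering, consultant-run PCARs) via the `other_excluded` parameter.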
In addition, OMB guidance does not require agencies to include many of the transition costs directly associated with generating savings from competitive sourcing activities, such as the costs of voluntary separation payments and system re-engineering costs. For example, according to DOL data, in calendar year 2006, 14 employees were provided voluntary incentive payments due to competitive sourcing that totaled $350,000. Including these costs would have increased total completed competition costs by 32 percent for the year. In addition to these costs, one competition utilized a newly re-engineered process to decrease total staff hours and to help generate a reported $3.3 million in savings from fiscal year 2005 to fiscal year 2007. The costs of full-time staff hours spent on the re-engineering process were not shown as costs of competitive sourcing, and DOL did not have information on the amount of staff time used. Finally, OMB guidance does not require agencies’ savings reports to include the costs of monitoring performance after the winning service provider begins its activities. However, we found that the PCARs are often conducted by contractors or consultants to monitor competitive sourcing performance for DOL and that they represent a cost in addition to normal federal employee and contractor oversight costs. DOL spent a total of $126,614 on PCARs conducted by consultants as of July 2008. In addition to excluding costs, DOL’s savings reports are unreliable. We reviewed the process DOL uses to compile its reports for a sample of competitions. We found a number of calculation errors in the sample; we also found cases where DOL used projections rather than actual costs to estimate savings or used a baseline that was inaccurate and overstated savings. We randomly selected three competitions for review to determine the accuracy and reliability of DOL’s savings reports. 
While not necessarily representative of all DOL competitions, these three savings reports contained inaccuracies—with two of the three containing significant errors that inflated the reported savings achieved through those competitions. For example, in the first competition, won by a private sector service provider, DOL reported $2.7 million in savings from fiscal year 2005 through fiscal year 2007. This savings figure did not include contract administration costs that are directed to be included according to OMB guidance. By excluding these costs, DOL overstated its savings by about $185,000 per year, or 25 percent. In the second competition, DOL used an incorrect cost value that excluded some employee wage costs when calculating savings. This error inflated the reported savings by almost $169,000, or 22 percent, for fiscal year 2006. In the third competition, though the inaccuracy was less significant, DOL reported a full year’s worth of savings for fiscal year 2006, even though the new provider was not phased in until 7 months into the new fiscal year. DOL officials stated that the savings estimate was an interim figure that was used before the actual costs were updated for fiscal year 2008. DOL used projections—rather than actual costs—to report $9.3 million of its $15.7 million in savings to Congress, even though OMB guidance specifies that calculating savings based on actual costs rather than projections is preferred. OMB guidance states that agencies may use projections as an interim estimate but that the actual numbers should be used as soon as they are available. Savings reports based on projections can be less accurate than reports using actual numbers because projections use average salaries for employees estimated during the competition, as well as projected staffing and hours that do not always reflect true personnel costs. 
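The first example's overstatement follows directly from how savings is computed: the baseline cost of the incumbent organization minus the full cost of the winning provider, which under OMB guidance must include contract administration. The baseline and contract-price figures below are assumptions chosen only to reproduce the roughly $185,000-per-year, 25 percent overstatement described above:

```python
# Illustrative annual savings calculation; only contract_admin is drawn
# from the report, the other figures are hypothetical.
baseline_cost = 3_000_000    # assumed annual cost of the incumbent organization
contract_price = 2_075_000   # assumed annual price of the winning contractor
contract_admin = 185_000     # contract administration cost cited above

reported_savings = baseline_cost - contract_price            # admin excluded
corrected_savings = baseline_cost - (contract_price + contract_admin)
overstatement_pct = (reported_savings - corrected_savings) / corrected_savings * 100
# With these figures, savings is overstated by $185,000, or 25 percent.
```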
Projections also exclude "retained pay," which is pay to employees who receive grade demotions but keep their original pay due to worker protections. By using projections, these costs are excluded from competition savings reports but would be included if actual costs were used. For example, in one competition won by the in-house agency (MEO), 8 of the 9.3 full-time employees in the MEO received retained pay after the competition. In total, a DOL review noted that retained pay for these employees caused the actual personnel costs to be 45 percent higher than the original estimated costs for the MEO. By using projections rather than actual values in estimating savings, DOL excluded the higher actual personnel costs for fiscal years 2004 and 2005, even though actual numbers were available. Of the 18 initial PCARs that we reviewed, 8 noted that organizational or workload changes had occurred. For example, in one instance, a DOL office lost 45 FTE positions in fiscal year 2006 due to budgetary reasons; 8 of these lost positions were designated for an activity that was being competed and, at the time, was in the source selection phase. This reduced the designated FTEs for this competition from 32 to 24. As noted in the PCAR, the private sector service provider who won this competition was chosen on the basis of the smaller 24-employee demand. However, the final savings figure, the difference between the original government provider and the winning private sector service provider, was calculated using the baseline cost of the original 32 FTE service provider. A senior DOL official with whom we spoke stated that the original baseline was used because the private sector service provider was doing the same level of work that the government service provider had been doing before. However, as noted in the PCAR, actual workload data were not available for this competition. Because of this, it cannot be known for certain whether the same level of work was being performed.
Using the baseline of 32 FTEs, rather than 24 FTEs, increased reported savings by almost $2.7 million over the 5-year performance period. Vacancies within agency workgroups also increased reported savings, though it is unclear if these savings should be attributed to competitive sourcing efforts. For example, in five competitions, the in-house government bid won the competition by maintaining its original "as-is" work group organization, and no anticipated future savings were reported because, according to DOL officials, the staff structure did not change. However, in two of the competitions, savings were later reported. In one of these competitions, employee retirements and a decrease in organization workload resulted in vacancies that caused staff wages to be 46 percent lower than those originally projected, and DOL reported savings of $64,000 for fiscal year 2005 and $86,000 for fiscal year 2006. In the second competition, savings of $26,000 were recorded for 1 year, partly due to a vacant position. In addition to staffing vacancies after the competition, vacancies that occur before a competition can inflate reported savings, as the baseline used to calculate savings is determined by the government service provider's budgeted staffing levels for the year of the planned competition. For example, the competition discussed previously, which reported $3.3 million in savings because of a re-engineered process that decreased workload and required staffing hours, did not change its actual staffing levels at the time of the competition due to pre-existing, budgeted vacancies. Thus, the savings figures based on full staffing levels were inflated. In three separate PCARs, as well as in our interviews with DOL employees, DOL staff were identified as inappropriately contributing to the work assigned to the winning bids, and in some cases, this resulted in overestimating the savings achieved by the competitions.
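The baseline effect described above can also be sketched numerically. Here the 32 and 24 FTE counts come from the report; the per-FTE cost and winning-bid price are hypothetical values chosen so the difference matches the $2.7 million figure:

```python
# Illustrative sketch: using an outdated 32-FTE baseline instead of the
# actual 24-FTE organization inflates reported savings.
cost_per_fte = 67_500          # assumed annual fully loaded cost per FTE
winning_bid = 1_300_000        # assumed annual cost of the winning provider

savings_old_baseline = 32 * cost_per_fte - winning_bid   # baseline as reported
savings_true_baseline = 24 * cost_per_fte - winning_bid  # post-reduction baseline
inflation_5yr = (savings_old_baseline - savings_true_baseline) * 5
# The 8 phantom FTEs add 8 * $67,500 * 5 = $2.7 million over 5 years.
```

Note that the inflation is independent of the winning bid's price: it is entirely determined by the cost of the FTEs that no longer existed at the time of the competition.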
In one case, the bid was won by a private sector contractor, yet the independent reviewer determined that a DOL employee was performing some of the work, noting that "a government employee is assisting a [private sector service provider] employee for up to 1-2 hours per day and could total up to 500 hours on an annual basis. The service provider is being supplemented with the government workforce at this time." DOL officials stated that the government employee was no longer assisting the private sector service provider and that a workload study was being conducted to help address this problem. Additionally, in two cases, staff who were not part of the MEO were found to be contributing to the work assigned to the MEO. For example, in one case, the PCAR indicated that non-MEO workers were found contributing to work within the MEO and that DOL had not included these costs. DOL savings estimates were based on the cost of only the MEO staff; thus, though the non-MEO staff costs were not available, including these costs would have decreased the reported savings of the competition. DOL's competitions reportedly had negative impacts on morale, even though they rarely resulted in lost jobs or salary reductions for DOL workers. Of the 28 competitions DOL held during fiscal years 2004 through 2007, 23 resulted in formal job changes (that is, changes reflected in personnel actions) for DOL employees, most often reassignments to different positions at the same or higher salary levels. Many of the workers experiencing personnel actions have been minorities and women. DOL management stated that they made their best efforts to treat well those employees who were involved in the process, in adherence with the Commercial Activities Panel principles; nevertheless, employees we spoke with reported negative impacts on morale.
In 23 of the 28 competitive sourcing competitions conducted by DOL during fiscal years 2004 through 2007, personnel actions affected the jobs of a total of 314 DOL employees. Most often, the affected employees were reassigned to new DOL positions. About 79 percent (248 workers) were reassigned to new positions at the same federal grade and salary level (see fig. 2). For example, a worker who was a GS-13 safety and occupational health specialist before the competition was reassigned as a GS-13 safety and occupational health specialist after the competition, but was placed under a different agency management structure designed as part of the winning in-house bid (MEO). Another 15 workers were promoted to a higher federal grade with entitlement to any associated salary increases. Of the 16 workers who were demoted to a lower federal grade, 5 retained their same grade and 9 retained their same salaries that they had before the competition, due to grade and pay retention provisions. All the remaining workers left DOL, including 29 who left voluntarily, either through retirement or through a nonretirement separation with an incentive payment, and 6 who were laid off from the agency through a reduction in force. Of the 314 DOL workers who were affected by personnel actions due to competitive sourcing during fiscal years 2004 through 2007, 47 percent were African-Americans, 60 percent were women, and 89 percent were 40 years old or older—much higher proportions than their representation in the general DOL working population overall. It may be that these population groups more frequently hold the commercial positions eligible for competition compared with the general DOL population. However, DOL does not tabulate demographic data by OMB’s list of FAIR Act function code categories for commercial activities, and the data were not readily available. Thus, we were unable to determine if this could be an underlying cause. 
Although DOL does not routinely track the demographic characteristics of those affected by competitive sourcing decisions, agency officials were able to gather these data from various sources in response to our request. A comparison of these data with the demographic profile of DOL personnel overall shows that African-Americans comprised about 47 percent of affected workers, compared with 23 percent of the overall DOL working population. Similarly, Native Americans comprised a greater proportion of the affected workers compared with the overall DOL working population. In contrast, the proportions of affected Caucasian, Hispanic/Latino, and Asian/Pacific Islander workers were lower than their representation in the general DOL working population (see fig. 3). Moreover, among those affected, African-Americans experienced more negative impacts. All 16 workers who were demoted, and all 6 workers who were laid off, were African-American. In contrast, of the 15 workers who were promoted, 10 were Caucasian, 3 were African-American, 1 was Hispanic/Latino, and 1 was Asian/Pacific Islander. Similarly, 60 percent of affected workers were women, compared with 50 percent of the DOL working population. Likewise, 89 percent of affected workers were age 40 or over, compared with 75 percent of DOL workers overall, mostly due to the impact of those over age 50. (See fig. 4.) Although DOL management stated that they made their best efforts to treat well those employees whose positions were competed in the competitive sourcing process, almost all DOL employees we spoke with who assisted with competition activities, and whose positions were affected by the competitions, reported that the competitive sourcing process has had a negative impact on morale. 
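One way to make such comparisons concrete is a representation ratio: a group's share of affected workers divided by its share of the overall workforce, where a ratio above 1 indicates overrepresentation among those affected. The sketch below uses the percentages cited in this report:

```python
# Representation ratios of affected workers vs. the overall DOL workforce,
# using the percentages reported above (ratio > 1 = overrepresented).
shares = {                      # group: (affected %, overall DOL %)
    "African-American": (47, 23),
    "Women": (60, 50),
    "Age 40 or older": (89, 75),
}
ratios = {group: affected / overall for group, (affected, overall) in shares.items()}
# African-Americans were roughly twice as prevalent among affected workers
# as in the overall DOL workforce.
```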
The Commercial Activities Panel principles include, among other things, that agencies should base their competitions on a “clear, transparent, and consistently applied process” and ensure that, when competitions are held, they are conducted as “fairly, effectively, and efficiently as possible.” According to DOL management officials, extensive efforts were made to adhere to these principles. DOL issued a guidebook describing how the process worked, and management officials said they made every effort to find a job for all those affected by a competition. Employees were offered reassignments, voluntary early retirement options, separation incentives, and other services for career transition. However, most employees we interviewed said that they were either dissatisfied or very dissatisfied with how DOL has implemented the competitive sourcing process (see fig. 5). Though not a representative sample of all those involved in the process, these interviews included employees who were responsible for assisting with competition activities, as well as employees whose positions had been competed. In general, the employees we interviewed said that the process has harmed morale. For example, some noted that it has led to a lack of trust between staff and management and that these effects appear to be long-lasting. They told us that competitive sourcing had taken away the job security that federal employment used to provide and that this change has harmed the morale of current employees, as well as the agency’s ability to recruit future employees. Some said that even though employees may have ended up benefiting from competitive sourcing, many were still unhappy about having been subjected to the process. They noted that they felt they must have done something wrong for their jobs to have been selected for the competition. Others said that they were no longer in positions that they had been trained for or wanted. 
Several said that if their jobs were competed again, they would leave the agency. Among those reporting that they were satisfied or very satisfied with the process, several commented on the improved efficiency and effectiveness of the organization after the competition. For example, the employees responsible for overseeing the competition in one location noted that the competition had an overall positive impact on the organization because the winning MEO incorporated a better balance of personnel and resources than what had existed before. According to these employees, they met their goals, saved some money, and came up with a better organization afterward. However, others we spoke with commented that employees are not as willing to put in the extra effort needed to provide high-quality work. For example, some DOL employees told us that the job was still being done but that loyalty and effort have decreased. Moreover, even employees who said they were satisfied with the process noted that there were negative impacts on morale. As the Commercial Activities Panel report describes, the government's goal for competitive sourcing is to obtain high-quality services at a reasonable cost and to achieve outcomes that represent the best deal for the taxpayer. DOL has conducted public-private competitions under its competitive sourcing program for 4 years and has set up performance and cost reporting systems to track its progress in meeting such goals. Yet these systems have a number of weaknesses, and unless these weaknesses are addressed, they will continue to inhibit DOL's ability to reliably and comprehensively assess whether the work performed by the winning service providers, whether in-house government service providers or contracted private sector service providers, achieves the savings promised through the competitive sourcing process.
Under OMB’s new Commercial Services Management guidance, agencies are encouraged to use the tool that provides the best value and the most efficient process to manage its commercial activities. However, without a better system to track deficiencies and improvements departmentwide and identify all the costs associated with competitive sourcing, it will be difficult to assess whether competitive sourcing truly provides the best deal for the taxpayer. To accurately determine which management tool is most cost-effective in performing a certain activity, agencies need a full accounting of the costs and performance. Previous GAO reports have cited problems at other federal agencies— DOD and USDA’s Forest Service, in particular—because they did not develop comprehensive estimates for the costs associated with competitive sourcing. This report identifies similar problems at DOL. To enhance the transparency surrounding their estimates of savings from competitive sourcing, federal agencies need to track all costs—including planning costs, transition costs, postcompetition monitoring, and the labor costs of all staff who participate in competitions. We found that DOL does not ensure that identified deficiencies and recommendations are tracked and followed up on at a departmentwide level. Without such departmentwide tracking, DOL is hindered in identifying and monitoring agencywide competitive sourcing performance trends, reliably determining whether all deficiencies or recommendations for improvement have been addressed, or determining whether the new organization is working more efficiently. Moreover, if DOL continues to conduct more competitions that involve multiple DOL offices, the ability to track competitions departmentwide will become increasingly important. We also found that in a sample of three of DOL’s savings reports to Congress, all three contained errors that overstated the savings achieved through competitive sourcing, two of which were significant. 
Without reliable savings assessments, policymakers do not have the information that they need to determine the effectiveness of competitive sourcing. We are making four recommendations. In the interest of providing agency decision makers and policymakers with more complete information on the total costs associated with competitive sourcing, we recommend that, in addition to the current cost reports that OMB requires agencies to prepare, the Director of OMB should require agencies to systematically report all costs associated with competitive sourcing, including regular FTE staff wages for time spent on planning and conducting competitions, as well as all other precompetition, transition, and implementation costs, including postcompetition monitoring or accountability reviews. To improve the reliability and comprehensiveness of DOL's performance assessments and savings estimates in its competitive sourcing program, we recommend that the Secretary of Labor take the following three actions: implement a consistently applied, departmentwide system to track identified deficiencies and recommendations for improvement in each of the competitions and the program overall; implement a system to track the full costs associated with managing DOL's commercial management activities, including, but not limited to, all costs associated with competitive sourcing; and develop and implement a review process to ensure the accuracy of competitive sourcing savings reports to Congress. We provided a draft of this report to the Office of Management and Budget and the Department of Labor for review and comment. Both agencies provided written comments on a draft of our report, which are reprinted in appendixes VI and VII, respectively. These agencies also provided us with technical comments that we incorporated, as appropriate.
OMB concurred with our conclusion that all agencies can maximize savings and performance benefits by ensuring appropriate internal controls are in place to monitor results but questioned the need for reporting all costs associated with competitive sourcing. OMB stated that certain costs are not necessarily unique to competitive sourcing and would not have a significant impact on the amount of savings. However, as the examples in this report and past GAO reports demonstrate, the lack of complete and accurate savings reports for the competitive sourcing program results in agencies not having a comprehensive, transparent picture of all the costs and benefits associated with the program. Moreover, while not unique to competitive sourcing, some costs could nevertheless vary across the myriad of management tools for improving the delivery of commercial services. Even if OMB does not expect a significantly different result in savings achieved by including these additional costs, agencies need a full accounting of all of the costs associated with competitive sourcing in order to enhance the transparency of their savings estimates and accurately determine if competitive sourcing truly provides the best deal for the taxpayer. OMB stated that it would work with the transition team so the next administration will be fully informed about the costing policies for competitive sourcing as it considers our recommendation. DOL acknowledged that improving cost assessments and performance tracking can provide better tools for managing their competitive sourcing program and agreed with our recommendation to implement a departmentwide system to track identified deficiencies and recommendations for improvement, as well as our recommendation to develop and implement an internal review process to ensure the accuracy of savings reports. 
However, DOL expressed concerns about our recommendation to implement a system to track full costs related to competitive sourcing in the absence of governmentwide guidance from OMB to do so. While we recognize that DOL is subject to OMB guidance on reporting costs, we continue to maintain that federal agencies like DOL need to track all costs associated with competitive sourcing, whether they are reported or not, so they can accurately determine if competitive sourcing is the most cost-effective tool for managing certain commercial activities. This is a separate issue from our recommendation that the Director of OMB require federal agencies to report all costs associated with competitive sourcing. We maintain that tracking all costs would enhance the transparency surrounding the estimates of savings from competitive sourcing and provide for accountability in connection with sourcing decisions, one of the principles of the Commercial Activities Panel. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the Secretary of Labor, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. As required under the Consolidated Appropriations Act, 2008, and as directed by the House Committee on Appropriations, this report examines the use of competitive sourcing at the Department of Labor (DOL).
The House Committee on Appropriations directed that we review the extent to which DOL has established a reliable and comprehensive system to track costs, savings, and the quality of work performed by contractors, as well as DOL's adherence to the principles adopted in 2002 by the Commercial Activities Panel chaired by the Comptroller General. In response, GAO's review focused on the following:
1. The extent to which DOL has established a reliable and comprehensive system to assess the quality of work performed as a result of competitive sourcing.
2. The comprehensiveness and reliability of DOL's assessments of the savings and costs associated with competitive sourcing.
3. The implications of competitive sourcing for certain DOL worker populations, such as women and minorities.
To address these issues, we examined relevant statutes, regulations, and guidance on competitive sourcing, Office of Management and Budget (OMB) guidance, DOL internal policies and guidance on competitive sourcing, annual reports to Congress, DOL Inspector General reports, GAO reports, and related documents. We interviewed DOL officials and employees, OMB officials, two private sector companies, and the president of a leading national private sector trade association representing over 300 companies. We also met with representatives from the American Federation of Government Employees (AFGE), a union representing 600,000 federal and D.C. government workers nationwide and overseas, and employee representatives to AFGE from DOL. We focused on DOL's competitive sourcing activities from fiscal years 2004 through 2007. There were no DOL competitive sourcing activities in 2008 for us to review because the Consolidated Appropriations Act, 2008 prohibited funds from that act from being used for carrying out competitions under OMB Circular No.
A-76 until 60 days after this report is provided to the Committees on Appropriations for the House of Representatives and the Senate. To address the first issue, we examined all 18 of DOL's postcompetition accountability reviews (PCARs) completed as of July 2008 for the 28 competitions conducted from fiscal years 2004 through 2007. We interviewed DOL officials, together with one performance review contractor selected by DOL, about their processes for conducting the PCARs, and we evaluated the structure and content of the reviews according to DOL and OMB policy. In addition, we selected a group of six PCARs to evaluate the extent to which DOL management had followed up on and addressed the deficiencies and suggestions for improvement that we identified in each of these reviews. The six PCARs were a simple random sample of 3 PCARs from among the 13 initial PCARs completed between 2006 and 2007, plus a nonrandom sample of three additional PCARs completed between 2004 and 2005 that had examples of significant findings and recommendations that were not present in the three random PCARs selected. To address the second issue, we interviewed DOL officials responsible for completing these assessments and reviewed the process they used to complete the savings and cost reports. To assess the accuracy of DOL's reports, we reviewed DOL's annual reports to Congress, all 18 of DOL's PCARs completed as of July 2008, and the cost records for the private contractors involved in assisting with the competition process and completing the PCARs. We reviewed the documents from all 18 initial PCARs and conducted more detailed analyses of the calculations provided in the cost reports for the same 3 randomly selected initial PCARs completed between 2006 and 2007, as described above. We chose to focus our sample on PCARs completed during 2006 and 2007 to ensure that the most recent, full records were available for analysis. We examined the savings and costs for these three competitions, including the contract billing for the private sector consultants employed by DOL during the competition, and compared the results to the amounts reported in DOL's annual reports to Congress. We did not examine the accuracy of the reports for the remaining DOL competitions. Due to this limited sample size, our findings should not be used to make inferences about all of DOL's competitions. Finally, we obtained anecdotal evidence of the number of hours that some staff spent on competitive sourcing activities during our group interviews with staff members involved in assisting with the competitions (see below). To address the third issue, we analyzed DOL's demographic data on total personnel departmentwide and on personnel who experienced personnel actions as a result of the competitive sourcing process. We did not assess the demographic characteristics of DOL personnel by OMB's list of Federal Activities Inventory Reform (FAIR) Act function codes because DOL does not tabulate demographic data in that way and the data were not readily available. To obtain employee views on the process and impacts on morale, we conducted group interviews with 60 DOL employees affected by competitive sourcing competitions in four locations: Arlington Heights, Illinois; Beckley, West Virginia; San Francisco, California; and Washington, D.C. We selected the four group interview locations in order to obtain perspectives from a range of geographic locations and from competitions of different sizes. Once the four locations were determined, we selected five competitions as our focus: one large competition that affected personnel at all four sites (though mostly personnel in D.C.); one smaller competition in D.C.; and three additional competitions, one that affected a large number of personnel at each of the three sites outside of D.C. (see table 3). Once agencies have designated all their activities as either inherently governmental or commercial, OMB Circular No.
A-76 requires agencies to further categorize their commercial activities according to six "reason codes" labeled A through F. Only one category, Reason B, signifies suitability for competitive sourcing that year. For example, in fiscal year 2006, DOL categorized about 20 percent of its total full-time employees (FTE) as Reason B: suitable for a streamlined or standard competition (see table 5). Section 647(b) of Division F of the Consolidated Appropriations Act, 2004 requires agencies to report their competitive sourcing activities to Congress at the end of each calendar year. These reports are to include the total number of competitions announced and completed; the incremental costs directly attributable to conducting those competitions; and the savings, both actual and anticipated, derived from such competitions. In addition, OMB Circular No. A-76 outlines the requirements for monitoring the performance and costs of the winning service provider following a competitive sourcing competition, whether the winner is the in-house government service provider or a service provider from the private sector. (See table 6.) Table 6 summarizes these two sets of requirements: the annual reports to Congress, required by Division F of the Consolidated Appropriations Act, 2004 (Pub. L. No. 108-199 (2004)) and OMB Memorandum M-08-02 (October 31, 2007), are completed at the end of each calendar year and include a general description of the competitive sourcing process and the competitions; the postcompetition monitoring requirements of OMB Circular No. A-76 (Revised May 29, 2003) and the OMB Memorandum for the President's Management Council, Validating the Results of Public-Private Competition (April 13, 2007), apply regardless of the selected service provider for each performance period, as determined by the agency. Under the latter, the agency must
monitor performance for all stated performance periods; implement the quality assurance surveillance plan; retain the solicitation and any other documentation from the streamlined or standard competition as part of the competition file; maintain the currency of the contract file, consistent with the Federal Acquisition Regulation; record the actual cost of performance by performance period; and monitor, collect, and report performance information, consistent with the Federal Acquisition Regulation. The April 13, 2007 OMB memorandum also directs agencies to have a plan in place to independently validate results on a reasonable sampling of covered competitions. DOL's Office of Asset and Resource Management is responsible for coordinating the PCARs of the winning service providers, in accordance with OMB guidance and the Federal Acquisition Regulation. The following checklist specifies preaudit or review actions that DOL policy and procedures direct officials to document as part of the PCAR. (Note that not all items included in the checklist are applicable for all competitions.)
[Appendix table: for each of DOL's 28 competitions, the table lists the location (state), the number of FTEs in the study, whether personnel actions resulted, whether a PCAR was completed as of July 2008, and whether implementation was ongoing, suspended, or terminated in May 2007.]
ILAB: Bureau of International Labor Affairs; MSHA: Mine Safety and Health Administration; OALJ: Office of Administrative Law Judges; OASAM: Office of the Assistant Secretary for Administration and Management; OASP: Office of the Assistant Secretary for Policy; OCFO: Office of the Chief Financial Officer; OLMS: Office of Labor-Management Standards; OSBP: Office of Small Business Programs; OSHA: Occupational Safety and Health Administration; OWCP: Office of Workers' Compensation Programs; VETS: Veterans' Employment and Training Service; WB: Women's Bureau.
PCARs are conducted after the first full year of performance following a competition. Thus, for competitions completed in fiscal year 2007, PCARs normally would be conducted no later than the end of September 2008; however, as noted, several have been suspended. In six instances, the implementation of completed competitions involving MSHA FTEs was terminated to comply with Pub. L. No. 110-2, §6602 (2007).
In addition to the contact named above, Bill J. Keller (Assistant Director), Kristy L. Kennedy, Margie K. Shields, Nicholas L. Weeks, Jeffrey W. Weinstein, Doreen S. Feldman, Alexander G. Galuten, William T. Woods, and Jessica S.
Orr made significant contributions to this report.

Department of Defense: Department of Defense Pilot Authority for Acquiring Information Technology Services under OMB Circular A-76. GAO-08-753R. Washington, D.C.: May 29, 2008.

Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight. GAO-08-572T. Washington, D.C.: March 11, 2008.

Forest Service: Better Planning, Guidance, and Data Are Needed to Improve Management of the Competitive Sourcing Program. GAO-08-195. Washington, D.C.: January 22, 2008.

Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials. GAO-08-198. Washington, D.C.: January 8, 2008.

Department of Homeland Security: Risk Assessment and Enhanced Oversight Needed to Manage Reliance on Contractors. GAO-08-142T. Washington, D.C.: October 17, 2007.

Defense Budget: Trends in Operation and Maintenance Costs and Support Services Contracting. GAO-07-631. Washington, D.C.: May 18, 2007.

Implementation of OMB Circular No. A-76 at Science Agencies. GAO-07-434R. Washington, D.C.: March 16, 2007.

Competitive Sourcing: Greater Emphasis Needed on Increasing Efficiency and Improving Performance. GAO-04-367. Washington, D.C.: February 27, 2004.

Competitive Sourcing: Implementation Will Be Challenging for Federal Agencies. GAO-03-1022T. Washington, D.C.: July 24, 2003.

Competitive Sourcing: Implementation Will Be Key to Success of New Circular A-76. GAO-03-943T. Washington, D.C.: June 26, 2003.

Commercial Activities Panel: Improving the Sourcing Decisions of the Government; Final Report. GAO/A03209. Washington, D.C.: April 30, 2002.

Competitive Sourcing: Challenges in Expanding A-76 Governmentwide. GAO-02-498T. Washington, D.C.: March 6, 2002.

DOD Competitive Sourcing: Effects of A-76 Studies on Federal Employees' Employment, Pay, and Benefits Vary. GAO-01-388. Washington, D.C.: March 16, 2001.

DOD Competitive Sourcing: Some Progress, but Continuing Challenges Remain in Meeting Program Goals. GAO/NSIAD-00-106. Washington, D.C.: August 8, 2000.

DOD Competitive Sourcing: Savings Are Occurring, but Actions Are Needed to Improve Accuracy of Savings Estimates. GAO/NSIAD-00-107. Washington, D.C.: August 8, 2000.

OMB Circular A-76: DOD's Reported Savings Figures Incomplete and Inaccurate. GAO/GGD-90-58. Washington, D.C.: March 15, 1990.
|
Competition between federal and private organizations to provide services--referred to as "competitive sourcing"--can be one way to help achieve greater efficiency in government. Under guidance from the Office of Management and Budget (OMB), competitive sourcing has been implemented at various executive branch agencies over the years. As required under the Consolidated Appropriations Act, 2008 and directed by House Report 110-231, this report examines the use of competitive sourcing at the Department of Labor (DOL). Specifically, GAO examined the comprehensiveness and reliability of DOL's performance and cost assessments in accordance with OMB and DOL guidance as well as the impact of competitive sourcing on certain DOL workers. To address these issues, GAO reviewed relevant statutes, guidance, reports and personnel actions; and interviewed OMB and DOL officials and 60 DOL staff, grouped by role, in four locations. DOL first began conducting public-private competitions as part of its competitive sourcing program in fiscal year 2004, and since that time, it has set up performance and cost reporting systems to monitor progress in meeting the goals of competitive sourcing--that is, to obtain high-quality services at a reasonable cost and to achieve outcomes that represent the best deal for the taxpayer. For the most part, we found that DOL's policies and procedures were followed in conducting competitive sourcing activities; however, a number of weaknesses inhibit DOL's ability to reliably and comprehensively assess whether competitive sourcing achieves the outcomes promised. DOL lacks a departmentwide process for tracking and addressing deficiencies and recommendations for improvements that are identified in postcompetition accountability reviews. 
Though consistent with OMB guidance, DOL excluded a number of substantial costs in its reports to Congress--such as the costs for precompetition planning, certain transition costs and staff time, and postcompetition review activities--thereby understating the full costs of this contracting approach. DOL's savings reports are not reliable: a sample of three reports contained inaccuracies, and others used projections when actual numbers were available, which sometimes resulted in overstated savings. Because of these and other weaknesses, DOL is hindered in its ability to determine if services are being provided more efficiently as a result of competitive sourcing. Moreover, though not a representative sample of DOL personnel, in GAO's interviews with 60 employees involved with five competitions (including employees who assisted with competition activities, as well as employees whose positions were affected by the competitions), most said that they were dissatisfied with how the competitive sourcing process was implemented and that it had a negative impact on morale. Overall, DOL's competitions have resulted in few job losses or salary reductions. Among the 314 workers who experienced a personnel action, 263 were reassigned to new positions with the same title and pay or were promoted. In addition, of the 16 workers who were demoted, 14 were able to retain their same grade or pay. At the same time, certain groups have been impacted more than others. For example, though small in numbers, all 22 of those who were either demoted or laid off were African-American, while 10 of the 15 workers who were promoted were Caucasian. OMB recently issued new guidance that directs agencies to use a variety of tools to manage their commercial activities, including--but not limited to-- competitive sourcing. 
However, unless agencies are required to comprehensively track all the costs associated with competitive sourcing, it will be difficult to assess which tool may provide the best outcome in terms of efficiency in the management of commercial activities.
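The cost-accounting weakness described above can be made concrete with a small sketch. All figures and function names here are hypothetical and purely illustrative, not DOL data; the excluded cost categories mirror those the report identifies (precompetition planning, transition costs, and postcompetition review):

```python
# Illustrative sketch: why excluding certain cost categories overstates
# competitive sourcing savings. All numbers are hypothetical.

def reported_savings(baseline_cost, winning_bid):
    """Savings as typically reported: baseline cost minus the winning bid."""
    return baseline_cost - winning_bid

def full_cost_savings(baseline_cost, winning_bid, precompetition_planning,
                      transition_costs, postcompetition_review):
    """Savings after counting the cost categories the report found excluded."""
    omitted = precompetition_planning + transition_costs + postcompetition_review
    return baseline_cost - (winning_bid + omitted)

baseline = 1_000_000                        # annual cost of in-house operation
bid = 850_000                               # winning provider's annual price
planning, transition, review = 60_000, 50_000, 30_000

print(reported_savings(baseline, bid))      # 150000 -- the figure as reported
print(full_cost_savings(baseline, bid, planning, transition, review))  # 10000
```

With the omitted categories counted, the apparent savings shrink from $150,000 to $10,000, which is the kind of overstatement the report cautions against.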
|
The purpose of the current redress system, which grew out of the Civil Service Reform Act of 1978 (CSRA) and related legal and regulatory decisions that have occurred over the past 16 years, is to uphold the merit system by ensuring that federal employees are protected against arbitrary agency actions and prohibited personnel practices, such as discrimination or retaliation for whistleblowing. While one of the purposes of CSRA was to streamline the previous redress system, the scheme that has emerged is far from simple. Today, four independent adjudicatory agencies can handle employee complaints or appeals: the Merit Systems Protection Board (MSPB), the Equal Employment Opportunity Commission (EEOC), the Office of Special Counsel (OSC), and the Federal Labor Relations Authority (FLRA). While these agencies' boundaries may appear to have been neatly drawn, in practice the redress system forms a tangled scheme. To begin with, a given case may be brought before more than one of these agencies—a circumstance that adds time-consuming steps to the redress process and may result in the adjudicatory agencies reviewing each other's decisions. Moreover, each of the adjudicatory agencies has its own procedures and its own body of case law. Each varies from the next in its authority to order corrective actions and enforce its decisions. Further, the law provides for additional review of the adjudicatory agencies' decisions—or, in the case of discrimination claims, even de novo trials—in the federal courts. Beginning in the employing agency, proceeding through one or more of the adjudicatory bodies, and then carried to its conclusion in court, a single case can take years. Even the typical case can take a long time to resolve—especially if it involves a claim of discrimination. 
Among discrimination cases closed during fiscal year 1994 for which there was a hearing before an EEOC administrative judge and an appeal of an agency final decision to the Commission itself, the average time from the filing of the complaint with the employing agency to the Commission's decision on the appeal was over 800 days. The system also imposes costs, such as legal fees and court costs. All these costs either go unreported or are impossible to clearly define and measure. Moreover, many of the real implications of this system cannot be measured in dollars. The redress system's protracted processes and requirements can also divert federal managers from more productive activities and inhibit some of them from taking legitimate actions in response to performance or conduct problems. It is also important to observe that under this system, federal workers have substantially greater employment protections than do private sector employees. Federal employees file workplace discrimination complaints at roughly 6 times the per capita rate of private sector workers. And while some 47 percent of discrimination complaints in the private sector involve the most serious adverse action—termination—only 18 percent of discrimination complaints among federal workers are related to firings. The most frequently cited example of jurisdictional overlap in the redress system is the so-called "mixed case," under which a career employee who has experienced an adverse action appealable to MSPB, and who feels that the action was based on discrimination, can essentially appeal to both MSPB and EEOC. The employee would first appeal to MSPB, with hearing results further appealable to MSPB's three-member Board. If the appellant is still unsatisfied, he or she can then appeal MSPB's decision to EEOC. If EEOC finds discrimination where MSPB did not, the two agencies try to reach an accommodation. 
If they cannot do so—an event that has occurred only three times in 16 years—a three-member Special Panel is convened to reach a determination. At this point, the employee who is still unsatisfied with the outcome can file a civil action in U.S. district court, where the case can begin again with a de novo trial. The proposed legislation would eliminate the mixed case scenario. This would appear to make good sense, especially in light of the record regarding mixed cases. First, few mixed cases coming before MSPB result in a finding of discrimination. In fiscal year 1994, for example, MSPB decided roughly 2,000 mixed case appeals. It found that discrimination had occurred in just eight. Second, when EEOC reviews MSPB’s decisions in mixed cases, it almost always agrees with them. Again during 1994, EEOC ruled on appellants’ appeals of MSPB’s findings of nondiscrimination in 200 cases. EEOC disagreed with MSPB’s findings in just three. In each instance, MSPB adopted EEOC’s determination. Under the mixed case scenario, an appellant can—at no additional risk to his or her case—have two agencies review the appeal rather than one. MSPB and EEOC rarely differ in their determinations, but an employee has little to lose in asking both agencies to review the issue. Eliminating the possibility of mixed cases would eliminate both the jurisdictional overlap and the inefficiency that accompanies it. When a private sector worker complains of discrimination to EEOC, EEOC investigates the complaint and, if it finds that it has merit, will argue the case on behalf of the complainant in U.S. district court. This treatment is less comprehensive than the treatment afforded executive branch federal workers. The fundamental difference is in EEOC’s role. First, under EEOC’s authority to mandate agency discrimination complaint procedures, the federal employee’s agency must investigate the employee’s assertion. Second, the complainant is entitled to have EEOC adjudicate the case. 
A federal employee who is unsatisfied with the outcome is still entitled to seek a trial in U.S. district court. The proposed legislation, which would bring discrimination complaint processes more in line with the private sector model, would fundamentally change EEOC’s role. Today, cases involving both an adverse action appealable to MSPB and a claim of discrimination become “mixed cases” in which MSPB’s determination can be opposed by EEOC, and even brought before the Special Panel at EEOC’s insistence. Under the proposed legislation, EEOC would not review MSPB decisions. Instead, it would have the authority to petition the Court of Appeals for the Federal Circuit to review MSPB decisions in which EEOC believed that MSPB misinterpreted EEO case law. EEOC’s role, then, would essentially shift from adjudicator to watchdog. Similarly, in cases involving only a claim of discrimination, EEOC’s role would also change. Today, EEOC mandates that agencies perform investigations of their employee’s discrimination claims, while EEOC itself adjudicates formal complaints. Under the proposed legislation, EEOC would no longer mandate agencies’ discrimination complaint procedures. EEOC would investigate complaints itself, and then determine if the cases had sufficient merit to prosecute before MSPB. EEOC’s role, therefore, would change from adjudicator to investigator and prosecutor. MSPB’s role would also change. For the first time, it would adjudicate discrimination complaints that were not necessarily associated with adverse actions. The redress rights of federal employees would also change dramatically. The most significant changes would involve complainants’ access to formal adjudication, both by an adjudicatory agency and in court. Today, no gatekeeper exists to determine which discrimination cases go to an adjudicatory agency. 
Under the proposed legislation, EEOC would become that gatekeeper, investigating and determining the merits of individual EEO complaints and deciding whether to argue these cases before the new adjudicator of EEO matters, MSPB. Today, discrimination complainants who remain unsatisfied after exhausting their administrative redress opportunities at EEOC can initiate an entirely new case in U.S. district court. Under the proposed legislation, any administrative redress opportunities would have been exhausted at MSPB, with recourse only to the U.S. Court of Appeals for the Federal Circuit. That would mean a review in court of the administrative process, not a de novo trial on the merits of the case itself. The proposed legislation would give federal employee discrimination complainants the same opportunity as private-sector employees to take their case to U.S. district court. But it would deny them the right to first pursue formal adjudication within the federal redress apparatus, and then, if still dissatisfied, to start a new case from scratch. The intention of the proposed legislation would be to eliminate what is commonly called the "two bites of the apple." One significant effect of these proposed changes might be to dampen the number of discrimination complaints reaching the formal adjudicative stage. In earlier testimony, we pointed out that one reason it takes so long to adjudicate discrimination cases is that there are so many of them. From fiscal years 1991 to 1994, for example, the number of discrimination complaints filed increased by 39 percent; the number of requests for a hearing before an EEOC administrative judge increased by about 86 percent; and the number of appeals to EEOC of agency final decisions increased by 42 percent. Meanwhile, the backlog of requests for EEOC hearings increased by 65 percent, and the inventory of appeals to EEOC of agency final decisions tripled. Reducing this caseload is certainly a worthwhile goal. 
However, any major change in the roles of EEOC or MSPB—or in other aspects of the discrimination complaint process—will have broad implications and require careful examination. For example, changes in the adjudicatory responsibilities of EEOC and MSPB would require major organizational change in both agencies. Further, the staffing requirements and skill mix at both agencies would change with their new responsibilities; EEOC, for instance, might need more investigators and fewer administrative judges than it does today. In addition, a basic change in adjudicatory redress procedures would have repercussions in the individual federal agencies, which would likely need to develop new processes to handle discrimination complaints. Moreover, cases already in process would need to be accommodated; a transition period to ensure an orderly changeover from the old system to the new would need to be provided and carefully planned. All these issues would need Congress’s close attention if fundamental redress system reform were to be successful. One way of avoiding formal adjudicative procedures is through the use of alternative dispute resolution (ADR). Many private sector firms have adopted ADR as a means of avoiding the time and expense of employee litigation. A number of federal agencies have explored ADR as well, and for the similar purpose of avoiding the costly and time-consuming formalities of the employee redress system. At your request, Mr. Chairman, we have been examining the extent to which federal agencies have been using ADR to settle workplace disputes, as well as the variety of ADR methods they have tried. The particular approaches vary, but include the use of mediation, dispute resolution boards, and ombudsmen. The use of ADR methods was called for under CSRA and underscored by the Administrative Dispute Resolution Act of 1990, the Civil Rights Act of 1991, and regulatory changes made at EEOC. 
Based not only on the fact that Congress has endorsed ADR in the past, but also that individual agencies have taken ADR initiatives and that MSPB and EEOC have explored their own initiatives, it is clear that the need for finding effective ADR methods is widely recognized in government. Our preliminary study of government ADR efforts, however, indicates that ADR is not yet widely practiced and that the ADR programs in place are, by and large, in their early stages. Most of these involve mediation, particularly to resolve allegations of discrimination before formal complaints are filed. Because ADR programs generally have not been around very long, the results of these efforts are sketchy; however, some agencies claim that these programs have saved time and reduced costs. One example is the Walter Reed Army Medical Center’s Early Dispute Resolution Program, which provides mediation services. From fiscal year 1993 to fiscal year 1995, the number of discrimination complaints at the medical center dropped from 50 to 22—a decrease that Walter Reed officials attribute to the Early Dispute Resolution Program. Moreover, data from the medical center show that, since the program began in October 1994, 63 percent of the cases submitted for mediation have been resolved. Walter Reed officials said that the costs of investigating and adjudicating complaints have been lessened, as well as the amount of productive time lost on the part of complainants and others involved in the cases. This example is an encouraging one, and at your request, Mr. Chairman, we are continuing to study ADR usage in both private and public sector workplaces, to identify lessons that can be applied more widely in the federal government. Based on work we have done so far in the ADR area, we feel that support for ADR is justified. 
The strength of ADR, some agencies have told us, is in getting beyond charges and countercharges among the parties involved and getting at the underlying personal interests—many of which may have nothing to do with discrimination—that are often the real cause of conflicts in the workplace. But we would caution that, at this point, ADR is in its preliminary stages of development, that good data on its effectiveness are hard to come by, and that the factors necessary for its success have yet to be fully identified. The redress system for federal employees is an area with great promise for change—and not just for improving efficiency, saving money, and improving the timeliness of redress. We feel that effective improvements in the redress system would also improve the fairness and accessibility of the system to employees, and make it easier for managers to manage effectively. Of course, any sweeping change in the redress system would need to be closely examined to ensure that the legitimate rights of federal employees were still protected. Where the balance should be struck is a critical matter for Congress to decide. This concludes my prepared statement, Mr. Chairman. I would be pleased to take any questions that you or other Members of the Subcommittee may have.
|
GAO discussed the implications of the Omnibus Civil Service Reform Act of 1996 on the redress system for federal employees. GAO noted that: (1) the proposed legislation would eliminate jurisdictional overlap between the Merit Systems Protection Board (MSPB) and the Equal Employment Opportunity Commission (EEOC); (2) EEOC would be in charge of investigating the merits of individual EEO complaints and deciding whether to argue these complaints before MSPB; (3) MSPB would adjudicate discrimination complaints that are not associated with adverse actions; (4) the proposed legislation would give complainants the opportunity to take their case before the U.S. district court, but it would deny them the right to pursue formal adjudication within the federal redress system; (5) the number of discrimination complaints reaching the formal adjudicative stage would be lessened; (6) changes in EEOC and MSPB adjudicatory responsibilities would require major organizational changes in both agencies; (7) basic changes in the adjudicatory redress system would have repercussions for individual federal agencies; (8) a transition period would be needed to ensure an orderly changeover from the old redress system to the new system; and (9) alternative dispute resolution is a good way to avoid the time and expense of employee litigation, but this procedure is in its preliminary stages of development.
|
Wireless networks extend the range of traditional wired networks by using radio waves to transmit data to wireless-enabled devices such as laptops and personal digital assistants. Wireless networks are generally composed of two basic elements: access points and other wireless-enabled devices, such as laptops. Both of these elements rely on radio transmitters and receivers to communicate or “connect” with each other. Access points are physically wired to a conventional network, and they broadcast signals with which a wireless device can connect. The signal broadcast by the access point at regular intervals—several times per second—includes the service set identifier, as well as other information. Typically, this identifier is the name of the network. Wireless devices within range of the signal automatically receive the service set identifier, associate themselves with the wireless network, and request access to the local wired network. Wireless networks are characterized by one of two basic topologies, referred to as infrastructure mode and ad hoc mode. Infrastructure mode—By deploying multiple access points that broadcast overlapping signals, organizations can achieve broad wireless network coverage. Commonly used on campuses or in office buildings, infrastructure mode enables a laptop or other mobile device to be moved about freely while maintaining access to the resources of the wired network (see fig. 1). Ad hoc mode—This type of wireless topology allows wireless devices that are near one another to easily interconnect. In ad hoc mode laptops, desktops, and other wireless-enabled devices can share network functionality without the use of an access point or a wired network connection (see fig. 2). The increased speed of wireless networks has helped to fuel their growth and popularity. The growing popularity of wireless networks can be easily witnessed in urban environments. 
For example, during a recent test in Washington, D.C., we drove around 15 square blocks and, using a commonly available wireless network scanner, we detected over a thousand wireless networks. Figure 3 depicts a sample of the saturation of wireless networks we detected during our brief test. Wireless networks offer connectivity without the physical restrictions associated with building wired networks. Though generally developed as an extension to an existing wired infrastructure, a wireless network may be stand-alone as well. The key reason for the growth in the use of wireless networks is the increased bandwidth made possible by the 802.11 standard and its successors. The implementation of the 802.11 family of standards increased the data transfer rates offered by wireless networks, making them comparable to those available in the wired environment. The 802.11 standard was first approved by the Institute of Electrical and Electronics Engineers (IEEE) in 1997. IEEE's goal was to develop and establish a technology standard that ensured global interoperability among wireless products, regardless of their manufacturers. This initial wireless standard was useful for certain applications, but the data transfer rate it specified was far slower than that of wired networks. Responding to the data transfer rate limitations set by the initial standard, the IEEE released several additional standards with the intent of increasing the transfer rates and making wireless functionality comparable to that of wired networks. The significant increases in data transfer rates of the new standards, coupled with the availability of affordable wireless-enabled devices, contributed to the rapid adoption of wireless networks. The Federal Information Security Management Act (FISMA) requires each agency to develop, document, and implement an agencywide information security program to provide security for the data and information systems that support the agency's operations and assets. 
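The kind of network tally produced by a drive-by survey such as the Washington, D.C. test described above can be sketched in a few lines. The log format and field names below are assumptions for illustration, not the output of the scanner GAO used; the key step is deduplicating sightings by BSSID (the access point's hardware address), since many access points can advertise the same network name:

```python
from collections import Counter

# Hypothetical scanner log: one sighting per line, "bssid,ssid,channel,rssi_dbm".
# A scanner records the same access point many times, so we deduplicate on the
# BSSID rather than the human-readable SSID.
scan_log = """\
00:11:22:33:44:55,CoffeeShop,6,-61
00:11:22:33:44:55,CoffeeShop,6,-64
66:77:88:99:aa:bb,CoffeeShop,11,-72
de:ad:be:ef:00:01,OfficeNet,1,-55
"""

seen = {}  # bssid -> ssid
for line in scan_log.strip().splitlines():
    bssid, ssid, channel, rssi = line.split(",")
    seen[bssid] = ssid

unique_access_points = len(seen)      # distinct radios detected
networks = Counter(seen.values())     # SSIDs and how many APs advertise each

print(unique_access_points)           # 3
print(networks["CoffeeShop"])         # 2 access points share this SSID
```

Counting by BSSID rather than SSID matters because, as noted earlier, an SSID is just a broadcast name: two of the four sightings above are the same radio, and two different radios share one name.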
FISMA gives OMB many responsibilities for overseeing the agency information security policies, including developing and overseeing the implementation of policies and standards for information security; requiring agencies to identify and provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, modification, or destruction of federal information and information systems; and coordinating the development of standards and guidance. OMB annually reports to Congress on the progress of agencies' compliance with FISMA. Accordingly, agencies need to evaluate the risks and develop policies for emerging technologies such as wireless networks. The National Institute of Standards and Technology (NIST) develops standards that agencies are required to follow and guidelines recommending steps that agencies can take to protect their information and information systems. In November 2002, NIST released Wireless Network Security: 802.11, Bluetooth and Handheld Devices (Special Publication 800-48), which is intended to provide agencies with guidance for establishing secure wireless networks. The guidance recognizes that maintaining a secure wireless network is a continuous process requiring additional effort beyond that required to maintain other networks and systems. Accordingly, NIST has recommended that federal agencies perform risk assessments and develop security policies before purchasing wireless technologies and anticipate that their unique security requirements will determine which products should be considered for purchase; wait to deploy wireless networks for essential operations until after agencies have fully assessed the risks to their information and system operations and have determined that they can manage and mitigate those risks; and assess risks and test and evaluate security controls more frequently than they would on a wired network. 
Currently, NIST is in the process of developing a follow-up to this publication, which will reflect the recent updates to the 802.11 network standards. Wireless networks offer federal agencies two primary benefits: increased flexibility and easier installation. Because wireless networks rely on radio transmissions, federal employees can work in a variety of ways. For example, users can take laptops to meetings, create ad hoc networks, and collaboratively develop products or work on projects. In addition, if a federal agency has installed a wireless infrastructure, users with wireless- enabled devices can work throughout the agency’s facilities without having to be in a particular office. Finally, an agency employee traveling with a wireless-enabled device may be able to connect to an agency network via any one of the many public Internet access points or hotspots found in hotels or in commercial, retail, or transportation centers. This ability to connect to the agency’s systems via wireless networks can increase employee productivity. Ease of installation is commonly cited as a key attribute of wireless networks. Generally, deployments of wireless networks do not require the complicated undertakings that are associated with wired networks. For example, the ability to “connect” the network without having to add or pull wires through walls or ceilings or modify the physical network infrastructure can greatly expedite the installation process. As a result, a wireless network can offer a cost-effective alternative to a wired network. In addition to their increased ease of installation, wireless networks can be easily scaled from small peer-to-peer networks to very large enterprise networks that enable roaming over a broad area. For example, an agency can greatly expand the size of its wireless network and the number of users it can serve by increasing the number of access points. 
Wireless networks face all of the information security risks that are associated with conventional wired networks, such as worms and viruses, malicious attacks, and software vulnerabilities, but there are significant challenges that are unique to the wireless network environment. In implementing wireless networks, federal agencies face three overarching challenges to maintaining the confidentiality, integrity, and availability of their information: protecting against attacks that exploit wireless transmissions, establishing physical control of wireless-enabled devices, and preventing unauthorized wireless deployments. Protecting against wireless network security attacks is challenging because information is broadcast over radio waves and can be accessed more easily by attackers than can data in a conventional wired network. For example, wireless communications that are not appropriately secured are vulnerable to eavesdropping and other attacks. Poorly controlled wireless networks can allow sensitive data, passwords, and other information about an organization’s operations to be easily read by unauthorized users. In addition, wireless networks can experience attacks from unauthorized parties that attempt to modify information or transmissions. Table 1 provides examples of the different types of attacks that can threaten wireless networks and the information that they are transmitting. Physical control of wireless-enabled devices takes on new importance in maintaining information security. Areas of physical risk include the placement and configuration of wireless access points and control of the wireless-enabled device that connects to the agency’s network. For example, it can be difficult to control the distance of wireless network transmissions, because wireless access points can broadcast signals from 150 feet to as far as 1,500 feet, depending on how they are configured. As a result, wireless access points can and do broadcast signals outside building perimeters. 
Figure 4 illustrates how poorly positioned or improperly configured wireless access points may radiate signals beyond the physical boundaries of the agency’s facility or the range within which the agency desires to send its signal. Wireless signals broadcast from within an agency’s facility that extend through physical walls, windows, and beyond a building’s perimeter—commonly known as “signal leakage”—can increase an agency’s susceptibility to the various attacks described in table 1 above. In addition to the challenge of signal leakage, it can be difficult for wireless network administrators to track the physical location of wireless-enabled devices. For example, in conventional wired networks, users are required to physically plug in to the agency’s networks via cable. This allows administrators to determine where each device is connected. However, with a wireless network, pinpointing a wireless-enabled device’s location can be difficult because the device is mobile. As a result, it can be harder for information security officials to locate unauthorized devices and eliminate the risks they pose. Unauthorized wireless networks create two main challenges for agencies’ information security. The first challenge comes from legitimate agency organizations, employees, or contractors seeking to benefit from the flexibility of wireless networks. Because of the affordability and availability of wireless network equipment, well-meaning individuals might install unauthorized wireless-enabled devices or wireless access points into an agency’s traditional wired network environment without the approval of the agency’s chief information officer. As a result, agency information security officials might be unaware that wireless networks are being used and would therefore be unable to take the appropriate mitigating actions— such as protecting against potential wireless attacks or preventing signal leakage. 
The second challenge stems from the increasing availability and integration of wireless technology into products such as laptops. For example, agencies that are not seeking to install a wireless network may find that as they purchase new equipment they are buying wireless-enabled devices. In some instances, these devices are not available without wireless technology. As a result, an agency may inadvertently procure wireless network components that could pose risks to its enterprise. It is critical that agencies understand whether or not the equipment they are procuring is wireless-enabled and determine how they will mitigate the risks it can pose to their information and systems. Controls such as policies, practices, and tools can help to mitigate wireless network security challenges that federal agencies face. These controls include developing comprehensive policies that govern the implementation and use of wireless networks, defining configuration requirements that provide guidance on the deployment of available security tools, establishing comprehensive monitoring programs that help to ensure that wireless networks are operating securely, and training employees and contractors effectively in an agency’s wireless policies. Developing comprehensive information security policies that address the security of wireless networks can help agencies mitigate risks. FISMA recognizes that development of policies and procedures is essential to cost- effectively reducing the risks associated with information technology to an acceptable level. NIST specifies 13 elements that should be addressed in a policy for securing wireless networks. These elements can be broadly organized into the following three categories: (1) authorized use, (2) identification of requirements, and (3) security controls. By establishing policies that address the issues in table 2 above, agencies can create a framework for applying practices, tools, and training to help support wireless network security. 
Defining requirements for how specific wireless security tools or wireless-enabled devices should be used or configured can help to improve network security in accordance with agency policy. For example, configuration requirements can guide agency employees in identifying and setting up wireless security tools such as encryption, authentication, virtual private networks, and firewalls (see table 3). In addition to helping promote the effective and efficient use of security tools, establishing settings or configuration requirements for devices such as wireless access points can help agencies manage the risks of wireless networks. It is important to secure wireless access points to ensure that they are not tampered with or modified. Configuration requirements can guide the placement and signal strength of wireless access points to minimize signal leakage and exposure to attacks. Comprehensive wireless network monitoring programs are an important security measure for protecting wireless networks and their information. Comprehensive wireless monitoring programs usually focus on detecting signal leakage, determining compliance with configuration requirements, and identifying authorized and unauthorized wireless-enabled devices. Effective monitoring programs typically employ site surveys and wireless intrusion detection systems to accomplish these goals. Site surveys involve using wireless monitoring tools that identify wireless-enabled devices such as wireless access points, laptops, and personal digital assistants. Site surveys can include exterior scans of a building to detect signal leakage. Such scans can inform agency personnel about the strength of wireless signals and the effectiveness of wireless access point configuration. In addition, site surveys can assist agencies in detecting unauthorized wireless-enabled devices. 
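The rogue-device check at the core of a site survey can be thought of as comparing the access points a scanning tool observes against an authorized inventory. The sketch below illustrates that comparison only; the inventory, field names, and scan data are hypothetical assumptions, not drawn from this report or any agency's actual tooling.

```python
# Illustrative sketch of the rogue-access-point check performed during a
# site survey: compare access points observed by a scanning tool against
# the agency's authorized inventory. All identifiers and data are hypothetical.

AUTHORIZED_APS = {
    "00:1a:2b:3c:4d:5e",  # lobby access point (hypothetical)
    "00:1a:2b:3c:4d:5f",  # conference room access point (hypothetical)
}

def find_rogue_aps(detected_aps):
    """Return detected access points whose hardware (BSSID) address is
    not in the authorized inventory."""
    return [ap for ap in detected_aps if ap["bssid"] not in AUTHORIZED_APS]

# Example scan results, shaped as a monitoring tool might report them.
scan = [
    {"bssid": "00:1a:2b:3c:4d:5e", "ssid": "AGENCY-WLAN", "signal_dbm": -48},
    {"bssid": "aa:bb:cc:dd:ee:ff", "ssid": "linksys", "signal_dbm": -60},
]

rogues = find_rogue_aps(scan)  # flags the unrecognized "linksys" device
```

In practice the authorized inventory would be maintained by the agency's chief information officer's office, and exterior scans would additionally record signal strength to gauge leakage beyond the building perimeter.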
A wireless network intrusion detection system can be used to automatically detect inappropriate activity, ensure that configuration requirements are followed, and ensure that only authorized wireless- enabled devices are functioning. Such a detection system scans radio signals to obtain information on a wireless network, analyzes the information based on set security policy, and then responds to the analysis accordingly. An intrusion detection system for wireless networks includes positioning sensors, similar to access points, near authorized access points or in other areas that require monitoring. A wireless detection system can be combined with a system designed for wired networks to provide comprehensive network monitoring, but neither type alone provides adequate security for both wired and wireless networks. Training employees and contractors in an agency’s wireless policies is a fundamental part of ensuring that wireless networks are configured, operated, and used in a secure and appropriate manner. For security policies to be effective, those expected to comply with them must be aware of them. FISMA mandates that agencies provide security awareness training for their personnel, including contractors and other users of information systems that support the operations and assets of the agency. It is important to provide training on technology to ensure that users comply with current policies. NIST also strongly recommends specific training on wireless security and asserts that trained and aware users are the most important protection against wireless risks. 
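At its simplest, the scan-analyze-respond cycle described above reduces to checking each sensor observation against configured policy settings and flagging violations. The following is a minimal illustrative sketch; the policy thresholds and observation fields are assumptions, and a real wireless intrusion detection system is far more sophisticated.

```python
# Minimal sketch of the analyze step in a wireless intrusion detection
# system: each sensor observation is checked against security policy and
# violations are flagged. Policy values and field names are hypothetical.

POLICY = {
    "require_encryption": True,
    "max_signal_dbm": -40,  # signals stronger than this suggest leakage risk
}

def evaluate(observation, policy=POLICY):
    """Return a list of policy violations for one sensor observation."""
    violations = []
    if policy["require_encryption"] and not observation["encrypted"]:
        violations.append("unencrypted traffic")
    if observation["signal_dbm"] > policy["max_signal_dbm"]:
        violations.append("signal strength exceeds configured maximum")
    return violations

# One observation from a sensor positioned near an authorized access point.
alerts = evaluate({"bssid": "00:1a:2b:3c:4d:5e",
                   "encrypted": False,
                   "signal_dbm": -35})
```

The respond step would then act on the returned alerts, for example by notifying security officials or disassociating the offending device.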
Agencies often lack key controls for securing wireless networks, such as comprehensive policies that govern the implementation and use of wireless networks, configuration requirements that provide guidance on the settings and deployment of available security tools, comprehensive monitoring programs that help to ensure that wireless networks are operating securely, and training in an agency's wireless policies for both employees and contractors. If agencies do not establish effective controls for securing federal wireless networks, federal information and operations can be placed at risk. Many agencies have not developed policies addressing wireless networks, and those that have often omitted key elements. Nine of the 24 major agencies reported having no specific policies and procedures related to wireless networks. Thirteen agencies stated that they had established policies that authorize the operation and use of wireless networks. Twelve of these 13 agencies also reported that their policies extended to the use of wireless networks by contractors. Two federal agencies reported having policies that forbid the use of wireless networks or devices. Policies for many of the agencies did not address acceptable use of wireless networks. For example, 7 of the 13 agencies with policies had not established an acceptable use policy or provided specific guidance on the type of information agency personnel were allowed to transmit using wireless networks. NIST guidance recommends that acceptable use policies delineate the type of information that may be sent over wireless networks, in order to reduce the risk that sensitive information will be exposed. Without establishing acceptable use policies, agencies will not be able to determine whether wireless networks are being used appropriately. The lack of such a policy could result in unauthorized disclosure of agency information or could increase the agency's risk of a security breach. 
Thirteen of 24 agencies reported not having configuration requirements for wireless networks. Further, the configuration requirements submitted by the remaining 11 agencies were often incomplete, omitting key elements that NIST guidance identifies—such as the use of and settings for security tools, including encryption, authentication, virtual private networks, and firewalls; the placement and strength of wireless access points to minimize signal leakage; and the physical protection of wireless-enabled devices. Two of the 11 agencies with policies had established configuration requirements that addressed all of these elements. However, the configuration requirements of the other 9 agencies did not cover key areas of wireless security. For example, three agencies did not have policies explaining how to configure wireless access points and other wireless-enabled devices, and five agencies had not developed detailed guidance describing how to physically secure wireless-enabled devices. Most of the major agencies have not established comprehensive wireless network monitoring programs for detecting signal leakage or ensuring compliance with security policies. For example, 14 agencies, including 4 agencies that permit wireless networks, do not monitor for signal leakage. Additionally, 19 agencies report not monitoring the data flowing through their systems to ensure that users of wireless networks are complying with acceptable use policies. Further, 14 agencies have not established programs to monitor wireless networks to ensure compliance with configuration requirements. Fifteen agencies reported monitoring for the existence of unauthorized or "rogue" wireless networks. Of these 15 agencies, only 6 continuously monitored their facilities 24 hours a day. The remaining 9 agencies monitored only periodically, sometimes as rarely as twice a year. 
The lack of continuous monitoring, combined with the ease of setting up wireless networks, creates a situation in which wireless networks can be operating in agencies with neither authorization nor the required security configurations. Consequently, agencies may not be able to determine whether security policies are being implemented in an appropriate manner, whether employees are conforming to policy, and—more importantly—they may not have a full understanding of the existing risks to agency information and information systems. Even if an agency does not allow wireless networks, monitoring is one of the most effective ways to ensure compliance with agency policy. Eighteen of the 24 agencies have not established any training programs for their employees and contractors on wireless security or the policies surrounding wireless networks. FISMA requires that agencies provide information security awareness training to all personnel, including contractors. Awareness about wireless security challenges can assist employees in complying with policies and procedures to reduce agency information security risks. Without such training, employees and contractors may practice behaviors that threaten the safety of the agency's data. For example, employees may use wireless-enabled devices—configured to attach to wireless networks automatically—to access the agency's private wired network. An attacker might connect to such a device, accessing the agency's network under a legitimate user's authority. We tested the wireless network security at the headquarters of six federal agencies in Washington, D.C., and identified significant weaknesses related to signal leakage, configuration, and unauthorized devices. Signal leakage—We were able to detect signal leakage outside the headquarters buildings at all six agencies. In one case, we were able to detect an agency's network while we were testing at another agency several blocks away. 
By not managing signal leakage, agencies increase their susceptibility to attack. In addition, the confidentiality of agency data may be diminished because an unauthorized user could be eavesdropping or monitoring wireless traffic. Insecure configurations—We also found wireless-enabled devices operating with insecure configurations at all six agencies. For example, at one agency over 90 wireless laptops were attempting to associate with wireless networks while they were connected to the agency’s wired networks. This configuration could provide unauthorized access to an agency’s internal networks. In all six agencies we found wireless devices operating in ad hoc mode. In over half of these cases the ad hoc networks could be detected outside of the building and could have provided access to the agency’s networks. We found these situations at agencies without monitoring programs as well as at agencies with extensive monitoring programs. Unauthorized wireless-enabled devices—We detected unauthorized wireless-enabled devices at all six agencies. These devices included both unauthorized wireless access points and ad hoc wireless networks. None of the six agencies we tested maintained continuous wireless monitoring. Three had programs that would periodically test portions of their facilities; however, periodic monitoring was not sufficient to prevent unauthorized wireless activity. Signal leakage, insecure configuration, and unauthorized wireless devices pose serious risks to the confidentiality, integrity, and availability of the information of the six agencies we tested. Because attackers in a wireless environment can focus on an easily discernable location, such as a headquarters building, federal agencies need to be especially concerned about signal leakage, insecure configurations, and unauthorized devices. If wireless signals emanate from a building, they could make the agency a target of attack. 
Wireless networks can offer a wide range of benefits to federal agencies, including increased productivity, decreased costs, and additional flexibility for the federal workforce. However, wireless networks also present significant security challenges to agency management. The affordability of wireless technology, along with the increasing integration of wireless capabilities into equipment procured by the federal government, increases the importance of developing appropriate policies, procedures, and practices. Such actions could help ensure that wireless devices and networks do not place federal information and information systems at increased risk. Currently, the lack of key controls in federal agencies means that unauthorized or poorly configured wireless networks could be creating new vulnerabilities. In some instances, the lack of policies and procedures for assessing and protecting wireless networks is impeding agency efforts to effectively address wireless security. In other cases, agencies’ ineffective compliance monitoring hinders their ability to detect unauthorized wireless devices, ensure compliance with agency policies, and supervise behavior on wireless networks. Finally, the majority of agencies have not trained their employees and contractors in the challenges of wireless networking and in agency policies concerning this technology. Our testing at six major federal agencies found significant security weaknesses: signal leakage, insecure configurations of wireless equipment, and unauthorized devices. Wireless network security is a serious, pervasive, and crosscutting challenge to federal agencies, warranting increased attention from OMB. If these challenges are not addressed, federal agency information and operations will be at increased risk. 
Because of the governmentwide challenges of wireless network security, we recommend that the Director of OMB instruct the federal agencies to ensure that wireless network security is incorporated into their agencywide information security programs, in accordance with FISMA. In particular, agencywide security programs should include robust policies for authorizing the use of wireless networks, identifying requirements, and establishing security controls for wireless-enabled devices in accordance with NIST guidance; security configuration requirements for wireless devices that include available security tools, such as encryption, authentication, virtual private networks, and firewalls, placement and strength of wireless access points to minimize signal leakage, and physical protection of wireless-enabled devices; comprehensive monitoring programs, including the use of tools such as site surveys and intrusion detection systems, to ensure compliance with configuration requirements, ensure only authorized access and use of wireless networks, and identify unauthorized wireless-enabled devices and activities in the agency's facilities; and wireless security training for employees and contractors. In providing oral comments on a draft of this report, representatives of OMB's Office of Information and Regulatory Affairs and Office of General Counsel told us that they generally agreed with the contents of the report. OMB officials told us that NIST is developing updated wireless guidance for the federal agencies, which is scheduled to be issued for comment in August 2005. Further, OMB stressed that the agencies have the primary responsibility for complying with FISMA's information security management program requirements. OMB told us that as part of its annual review of agency information security programs, it would consider whether agencies' programs adequately addressed emerging technology issues such as wireless security before approving them. 
We are sending copies of this report to the Director of OMB and to interested congressional committees. We will provide copies to other interested parties upon request. The report will also be available on GAO's Web site at http://www.gao.gov. If you have any questions or wish to discuss this report, please contact either Gregory Wilshusen at (202) 512-6244 or Keith Rhodes at (202) 512-6412. We can also be reached at [email protected] or [email protected]. Key contributors to this report are listed in appendix II. The objectives of our review were to (1) describe the benefits and challenges associated with securing wireless networks, (2) identify the controls (policies, practices, and tools) available to assist federal agencies in securing wireless networks, (3) analyze the wireless policies and practices reported by each of the 24 agencies covered by the Chief Financial Officers (CFO) Act of 1990, and (4) test the security of wireless networks at the headquarters of six major federal agencies in Washington, D.C. For the first three objectives, the scope of our review included the 24 agencies under the CFO Act and focused on wireless networks conforming to the 802.11x standard. For the fourth objective, we tested the wireless network security at 6 major federal agencies. Our review did not evaluate the risks that remote wireless users, such as teleworkers, might pose to agency systems. To determine the benefits and challenges of using 802.11x wireless networks securely, we reviewed federal and private-sector technical documents, including National Institute of Standards and Technology (NIST) guidance and leading private-sector practices. Additionally, we documented the various benefits and challenges of wireless networks with representatives from private-sector wireless security providers, federal experts and agency officials, and financial institutions. 
To determine what controls were available to agencies for securing their 802.11x wireless networks, we reviewed federal and private-sector technical documents, including NIST guidance and leading private-sector practices. Additionally, we documented various controls for securing wireless networks—such as policies, practices, and tools—with representatives of private-sector wireless security providers, federal experts and agency officials, and financial institutions. To determine the wireless security practices and policies used at federal agencies, we conducted a survey of the 24 CFO Act agencies. We developed a series of questions that were incorporated into a Web-based survey instrument. We tested this instrument with one federal agency and internally at GAO through our Chief Information Officer's office. The survey included questions on the agencies' use of wireless networks and their policies and procedures for securing them. For each agency to be surveyed, we identified the office of the chief information officer, notified each office of our work, and distributed a link to each office via e-mail to allow them to access the Web-based survey. In addition, we discussed the purpose and content of the survey with agency officials when they requested it. All 24 agencies responded to our survey. We did not verify the accuracy of the agencies' responses; however, we reviewed supporting documentation that the agencies provided to validate their responses. We contacted agency officials when necessary for follow-up. Although this was not a sample survey and, therefore, there were no sampling errors, conducting any survey may introduce errors—commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. 
We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize these nonsampling errors. For example, a survey specialist designed the survey instrument in collaboration with GAO staff with subject-matter expertise. Then, as stated earlier, it was pretested to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. Because this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire. This eliminated the need to have the data keyed into a database, thus removing an additional potential source of error. To assess the state of wireless security at a selected group of federal agencies, we conducted onsite network surveys at 6 of the 24 CFO agencies. We selected 6 agencies in various stages of wireless implementation: 2 had established wireless networks, 1 had a pilot system, 2 did not have any authorized wireless networks, and 1 forbade the use of wireless. At each agency’s Washington, D.C., headquarters, we scanned for signal leakage and wireless activity, using wireless monitoring tools both inside and outside the agency’s facility. For security purposes, we do not identify the 6 agencies in the report. We performed our work in the Washington, D.C., metropolitan area from September 2004 to March 2005, in accordance with generally accepted government auditing standards. In addition to the person mentioned above, Mark Canter, Lon Chin, West Coile, Derrick Dicoi, Neil Doherty, Joanne Fiorino, Suzanne Lightman, Kush Malhotra, and Christopher Warweg made key contributions to this report.
The use of wireless networks is becoming increasingly popular. Wireless networks extend the range of traditional wired networks by using radio waves to transmit data to wireless-enabled devices such as laptops. They can offer federal agencies many potential benefits but they are difficult to secure. GAO was asked to study the security of wireless networks operating within federal facilities. This report (1) describes the benefits and challenges associated with securing wireless networks, (2) identifies the controls available to assist federal agencies in securing wireless networks, (3) analyzes the wireless security controls reported by each of the 24 agencies under the Chief Financial Officers (CFO) Act of 1990, and (4) assesses the security of wireless networks at the headquarters of six federal agencies in Washington, D.C. Wireless networks offer a wide range of benefits to federal agencies, including increased flexibility and ease of network installation. They also present significant security challenges, including protecting against attacks to wireless networks, establishing physical control over wireless-enabled devices, and preventing unauthorized deployments of wireless networks. To secure wireless devices and networks and protect federal information and information systems, it is crucial for agencies to implement controls--such as developing wireless security policies, configuring their security tools to meet policy requirements, monitoring their wireless networks, and training their staffs in wireless security. However, federal agencies have not fully implemented key controls such as policies, practices, and tools that would enable them to operate wireless networks securely. Further, our tests of the security of wireless networks at six federal agencies revealed unauthorized wireless activity and "signal leakage"--wireless signals broadcasting beyond the perimeter of the building and thereby increasing the networks' susceptibility to attack. 
Without implementing key controls, agencies cannot adequately secure federal wireless networks and, as a result, their information may be at increased risk of unauthorized disclosure, modification, or destruction.
The Post-9/11 GI Bill, which took effect on August 1, 2009, is now VA’s largest educational program. In fiscal year 2014, the Post-9/11 GI Bill program had 790,000 participants and made $10.8 billion in payments for tuition, fees, housing, and books. This program provides benefits generally to veterans who served on active duty for at least 90 days on or after September 11, 2001. Full benefits are available to those who served on active duty for 36 months, for which VA will pay the full in-state tuition and fees at any public school and up to an annual maximum amount at nonprofit and for-profit schools ($21,085 in academic year 2015-16). VA pays schools directly for tuition and fees and sends additional payments for housing and books directly to veterans who are eligible for these payments. Housing benefits are provided to veterans through a monthly housing allowance, paid at the beginning of each month based on the previous month’s enrollment, and the amount depends on the veteran’s rate of academic pursuit (e.g., full- or part-time) and the geographic location of the school they are attending. For veterans to start receiving Post-9/11 GI Bill benefits, school employees, known as school certifying officials, must certify to VA that they are enrolled in classes (see fig. 1). VA recommends that schools certify veteran enrollments prior to the start of the school term. Post-9/11 GI Bill overpayments occur when VA makes a payment in excess of what a veteran is entitled to. This can result from either erroneous payments or subsequent changes in the veteran’s enrollment status or tuition or fee amounts after benefits have already been paid. Enrollment changes cause overpayments because the Post-9/11 GI Bill program, by design, pays tuition to schools in advance based on individual veterans’ expected enrollment. These types of overpayments are not considered “improper” (i.e., made in error) because the payments were originally correct when issued. 
Once VA begins paying Post-9/11 GI Bill benefits, veterans and schools are responsible for notifying VA of any changes in the veteran’s enrollment (e.g., dropping a class or withdrawing from school) and schools are responsible for notifying VA of any subsequent adjustments in tuition or fees. VA processes these changes, which can increase or decrease the payment amounts veterans are eligible for. When enrollment changes decrease a veteran’s calculated benefit amount, an overpayment is created for any excess funds VA has already paid. For example, tuition and fee overpayments can be created for any tuition and fees paid for classes that a veteran did not complete. Changes in a veteran’s enrollment can also affect the amount of housing and book stipend payments that the veteran is eligible to receive. VA sends tuition and fee payments directly to schools on behalf of students, and VA often holds veterans liable for tuition and fee overpayments in accordance with its statutory authority. Veterans are generally responsible for repaying any overpayments resulting from enrollment changes during the school term (see table 1). Schools are only responsible for repaying these benefits to VA in certain circumstances, such as when the veteran completely withdraws from the school on or before the first day of the term. Veterans are solely responsible for repaying any overpayments of housing or book stipend benefits. VA does not hold veterans liable for overpayments that result from administrative errors or an error in judgment on the part of VA, unless the veteran should have known the overpayment was patently excessive (such as a duplicate payment or a payment amount that exceeds the amount displayed on an award letter), according to VA officials. VA determines the type and amount of any overpayments based on the enrollment and tuition information submitted by schools or after identifying and correcting any school reporting or internal processing errors. 
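As a rough illustration of how an enrollment change turns part of an advance tuition payment into a debt, consider a flat per-credit proration. The figures and the proration rule below are hypothetical simplifications for arithmetic only; VA's actual benefit calculations depend on service history, term dates, and school charges.

```python
# Illustrative arithmetic only: how a mid-term enrollment change can turn
# part of an advance tuition payment into an overpayment debt. The dollar
# figures and the flat per-credit proration are hypothetical assumptions,
# not VA's actual benefit rules.

def tuition_overpayment(paid, credits_certified, credits_completed):
    """Excess of the advance payment over the prorated entitlement."""
    per_credit = paid / credits_certified
    entitled = per_credit * credits_completed
    return max(paid - entitled, 0.0)

# VA paid for 12 credits in advance; the veteran later dropped a 3-credit
# class, so only 9 credits' worth of tuition was ultimately earned.
debt = tuition_overpayment(paid=2280.0, credits_certified=12, credits_completed=9)
```

Under these assumed numbers the dropped class leaves a $570 debt, on the order of the median repayment amount the report cites; a complete withdrawal would make the entire advance payment an overpayment.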
When an overpayment is caused by an enrollment change, VA establishes an overpayment debt for the veteran, the school, or both depending on the circumstances (see fig. 2). Overpayment debts are forwarded to VA's centralized collection office, the Debt Management Center, for collection. Once schools or veterans receive a collection letter from the Debt Management Center, they can arrange to repay the debt in full, set up a payment plan, dispute the existence or amount of the debt, or request a waiver of the debt due to financial hardship or special circumstances. If schools or veterans do not initiate one of these options to address the debt, VA begins pursuing more aggressive collection methods. These can include offsetting—that is, reducing or withholding—future Post-9/11 GI Bill payments or tax refunds and reporting debts to credit rating agencies, as illustrated in figure 3. Since veterans are liable for most overpayments, VA collects debts from hundreds of thousands of individual veterans. We have previously compared VA's processes to those of the Department of Education. The Department of Education uses a different structure than VA for processing and collecting federal student aid overpayments. In contrast to VA, Education generally works directly with schools to return overpayments of federal student aid funds, focusing its collection efforts on a few thousand schools rather than working directly with hundreds of thousands of students. The Department of Education payment systems also allow schools to reconcile any enrollment changes before the term by adjusting the school's aggregate receipt of federal student aid funds, rather than issuing individual debts and new payments to the school. In contrast, VA processes each tuition payment and collection separately. In reaction to our prior work comparing VA's processes with the Department of Education's, VA officials raised concerns about adopting practices similar to the Department of Education's. 
For example, according to VA officials, if VA started collecting all tuition overpayments from schools, schools would still be able to bill veterans for overpayment debts and potentially would not allow veterans to reenroll in classes until these debts were repaid. In addition, veterans would have to repay any overpayment debts to their school out-of-pocket, rather than through offsets to their Post-9/11 GI Bill housing payments. VA made $416 million in Post-9/11 GI Bill overpayments in fiscal year 2014, or 4 percent of the over $10 billion in benefits paid during that period. The bulk of these overpayments went to veterans rather than schools, as shown in figure 4. Overpayments increased by nearly 20 percent from fiscal year 2013 to 2014, the only 2 years for which data were available, while total program payments increased by just 6 percent. Most overpayments are caused by veteran enrollment changes, over which VA has limited control. In these cases, VA’s original payments were correct based on the veteran’s planned enrollment, but some or all of the original payment became an overpayment when the veteran subsequently dropped a class or withdrew from school. VA estimated that veteran enrollment changes caused 90 percent of high-dollar Post-9/11 GI Bill overpayments (see fig. 5). The remaining high-dollar overpayments were caused by school reporting errors (e.g., submitting incorrect enrollment or tuition information) and VA processing errors (e.g., duplicate payments or data entry errors). Approximately 1 out of 4 beneficiaries incurred an overpayment in fiscal year 2014—more than 225,000 veterans. The median amount that veterans had to repay was about $570, which could correspond to dropping a single class during the term and can be a sizable debt for a college student with limited income. Some veterans incurred much larger overpayments.
For example, over 7,000 veterans had overpayments of more than $5,000, which can occur when a veteran completely withdraws from school or receives several months of housing overpayments (see fig. 6). Almost 6,000 schools incurred overpayments in fiscal year 2014, often for multiple veterans, with a median total debt of $7,800 per school. School overpayments arise when veterans never attend classes or withdraw from school on or before the start date, although they can also be caused by VA and school errors in certain cases. About 5 percent of schools accounted for almost half of all school overpayments. VA had $262 million in outstanding Post-9/11 GI Bill overpayment debts as of November 2014, primarily owed by veterans. More than half of these uncollected debts are from overpayments that occurred in fiscal year 2014, some of which were still in the initial stages of collection and likely have been repaid since VA provided these data. VA has had more time to collect debts from prior years. However, $110 million in uncollected debts are older than 1 year, some of which date back to 2010 (see fig. 7). Veterans are responsible for the vast majority of uncollected overpayment debts since VA has less success collecting debts from veterans than from schools. Of the total amount outstanding, more than 90 percent was owed by veterans rather than by schools. Schools had already repaid almost all of the overpayment debts they incurred in fiscal years 2013 and 2014, while veterans had so far repaid 75 percent of overpayments from fiscal year 2013 and 46 percent from 2014. School overpayments are generally collected through direct payments (e.g., check, credit card, electronic funds transfer), while veteran overpayments are collected through several methods, most commonly by deducting the debt amounts from subsequent GI Bill or other VA payments.
VA also collected from veterans through direct payments, offsets of other federal payments such as federal tax refunds, and private debt collection agencies. VA does not monitor the full extent of Post-9/11 GI Bill overpayments and collections. For example, VA does not regularly track the number of overpayments or the amount of uncollected student debts. Although VA was able to provide this information in response to our data request, it is not something the agency actively monitors on a regular basis. VA has instead focused its current monitoring efforts on one subset of overpayments: those that are considered “improper.” As it is legally required to do, VA reports an estimated improper payment rate for different benefit programs. While these rates are a useful government-wide accountability measure, they do not capture the vast majority of Post-9/11 GI Bill overpayments, which occur due to subsequent enrollment changes. The approximately 90 percent of overpayments resulting from veteran enrollment changes are not categorized as improper because the payments are correct when issued and only become overpayments at a later date, after the veteran makes an enrollment change. As a result, VA’s estimated improper payment rate for the Post-9/11 GI Bill provides little insight into the over $400 million in overpayments the agency made in fiscal year 2014. As for collections, the only specific Post-9/11 GI Bill data that the Debt Management Center actively monitors are for school debts, which account for only about a third of program overpayments. VA officials do not regularly monitor collection rates for student overpayments, which represent the majority of uncollected debts. VA’s limited monitoring of overpayments and collections makes it difficult to effectively manage the Post-9/11 GI Bill program.
OMB Circular A-129 instructs all federal agencies to use comprehensive reports on the status of overpayments and other receivables to monitor collection effectiveness and enable data-driven decision making. VA’s limited monitoring efforts fall short of this standard. VA’s ability to monitor historical trends in overpayments and collections is in some ways constrained by the limitations of its data systems. Specifically, debts are removed from VA’s main payment and collection database and archived 2 years after repayment. While this limits the availability of summary information for analyzing historical trends and performance, VA could replicate the analysis we conducted for this study by periodically calculating key measures from the available data records. One area where VA has made some strides is in monitoring school debts on a monthly basis. However, VA has not prioritized actively monitoring veteran overpayments and collections for the Post-9/11 GI Bill program, although VA officials acknowledged that additional monitoring would help improve program management. By not actively monitoring available data on overpayments and collections, VA cannot assess its efforts to reduce overpayments or gauge the effectiveness of its collection efforts. This also limits VA’s ability to proactively manage the program to address overpayment or collection issues, for example, by identifying trends and targeting outreach to the small number of schools that account for the majority of overpayment dollars. Despite enrollment changes being responsible for most overpayments, VA provides limited guidance to veterans about the possible consequences of enrollment changes.
Staff from VA’s Debt Management Center and its Education Call Center, as well as staff who conduct school compliance surveys, said that many veterans who incur overpayments as a result of enrollment changes may not realize that they are doing so. VA informs veterans about their potential liability for overpayment debts resulting from enrollment changes in the letter it sends veterans when they become eligible for benefits. VA also posts responses to some overpayment-related questions on the Post-9/11 GI Bill website. However, VA does not explain—either in the benefits letter, on the Post-9/11 GI Bill website, or in other places veterans are likely to seek program information—how to avoid creating debts once enrolled in school. The letter veterans receive simply tells them that they are responsible for all debts resulting from reductions or terminations of their enrollment. It does not explain, for example, the difference between VA overpayments and school refund policies, or that failure to promptly notify VA of enrollment changes can increase the incidence and amount of housing overpayments. In addition, VA does not disclose its formula for calculating overpayments in any of the guidance it provides to veterans or schools, which makes it difficult for veterans—and the school certifying officials who advise them—to accurately estimate the potential tuition overpayments veterans might incur by dropping a class. In contrast, VA requires schools to make information about their refund policies for unused tuition and fee payments available to all veterans. Given these guidance gaps, officials at two of the schools we interviewed said that the majority of overpayment issues arise among new student veterans who are not aware of the consequences of enrollment changes until after they have already incurred their first overpayment debt. In other cases, veterans are confused about when overpayments are created.
For example, staff from VA’s Debt Management Center explained that some veterans incorrectly think they will not incur a debt if they drop a class before their school’s deadline to add or drop classes. Moreover, our review of complaints veterans have made to VA demonstrates some veterans’ confusion about how the Post-9/11 GI Bill works. For example, one veteran was shocked to learn he had incurred a debt of over $5,000 stemming from his enrollment changes. According to federal internal control standards, all agencies should ensure they are using adequate means of communicating with external stakeholders who may have a significant impact on the agency achieving its goals. If VA does not enhance its guidance to veterans about its overpayment policies, veterans may continue to incur debts that could be avoided. VA has an optional process for schools to report veteran enrollments that can help prevent overpayments due to enrollment changes, but only a small proportion of schools currently use it. Typically, a school certifying official sends a veteran’s enrollment information (e.g., tuition, fees, term dates) to VA soon after the veteran enrolls in an upcoming term. Once VA processes the claim and makes tuition and fee payments to the school, any subsequent enrollment changes can create overpayments. As an alternative, VA gives schools the option of using a two-stage process, called dual-certification. Schools can initially precertify a veteran’s enrollment for $0 in tuition and fees before the term begins, which allows VA to start paying housing benefits without delay. The school can then recertify the enrollment with the actual tuition and fee amount at a later date—e.g., after the add/drop period ends, when many enrollment changes have already occurred.
Since VA does not send tuition payments until the school certifies an actual tuition and fee amount, dual-certification can help prevent tuition overpayments that occur when a veteran drops a class at the beginning of the term. Despite the potential benefits, dual-certification has not been widely adopted, in part because VA’s guidance to schools does not explain the benefits of using this process. According to a 2013 survey by the National Association of College and University Business Officers, 30 percent of the 239 schools responding to the survey reported that they used dual-certification. Only one of the nine schools we interviewed uses this approach, and an official at this school said that switching to the dual-certification process reduced the school’s overpayments from $500,000 to $50,000 each semester. In contrast, six of the schools we interviewed send tuition and fee information to VA by the start of the term. An official at one of these schools said the school prefers not to use dual-certification because it requires submitting enrollment information to VA twice. A VA official further explained that many schools want to be paid up front, so they submit the course enrollment and expected tuition and fees before the term. Recognizing that dual-certification may not work well for some schools, given the additional work involved and the delay in receiving tuition payments, VA does not require this approach. Nevertheless, dual-certification can be a useful option for some schools, and VA’s guidance to schools does not explain how it can prevent overpayments. Internal control standards stipulate that adequate communication with external parties through guidance and other methods is essential for achieving agency goals.
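To make the mechanics concrete, the following is a hedged sketch of why dual-certification avoids tuition overpayments. The figures, function, and field names are illustrative assumptions, not VA's actual payment logic.

```python
def tuition_overpayment(certified_tuition: float, actual_tuition_after_drops: float) -> float:
    """An overpayment exists only for tuition VA has already paid in excess.

    Illustrative sketch: assumes VA pays exactly what the school certifies.
    """
    return max(certified_tuition - actual_tuition_after_drops, 0.0)

planned_tuition = 3000.0  # hypothetical tuition certified before the term
actual_tuition = 2400.0   # tuition owed after the veteran drops one class

# Standard process: the school certifies full tuition up front, VA pays it,
# and the later drop turns $600 of that payment into an overpayment debt.
standard_debt = tuition_overpayment(planned_tuition, actual_tuition)

# Dual-certification: the school precertifies $0 tuition (housing benefits
# still start), then recertifies the actual amount after the add/drop
# period, so VA never pays tuition it later has to collect back.
dual_debt = tuition_overpayment(0.0, actual_tuition)

assert standard_debt == 600.0 and dual_debt == 0.0
```

The trade-off described in the report is visible here: dual-certification eliminates the tuition debt but delays the school's tuition payment until recertification.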
By not providing guidance to schools about the benefits of using dual-certification, VA is missing an opportunity to reduce a potentially large number of overpayments as well as the burden placed on veterans and schools to repay those debts. Finally, unlike most other GI Bill programs, VA has not required veterans in traditional degree and certificate programs using the Post-9/11 GI Bill to regularly verify their enrollment throughout the school term, which exacerbates the incidence and amount of housing benefit overpayments due to delayed reporting of enrollment changes. When veterans reduce their enrollment, they continue to receive housing benefit overpayments each month until VA is notified of the change. Even though VA’s guidance instructs veterans to promptly notify their school certifying official and VA of any enrollment changes, they do not always do so. In one case file we reviewed, the veteran withdrew from school one day before the term began in mid-August, but the enrollment change was not reported to VA until late October. As a result, this veteran received an extra 2 months of housing benefits after withdrawing, which created an overpayment of over $3,000. In another case file we reviewed, a veteran’s enrollment changes were reported to VA 3 months after they happened; as a result, the veteran incurred a housing overpayment of $2,200. These housing overpayments would have been avoided if veterans using the Post-9/11 GI Bill were required to verify their enrollment monthly. VA officials said they would like to require veterans using the Post-9/11 GI Bill to verify their enrollment monthly but would need to develop a new verification process. One option VA has considered is developing a new online system, but VA officials said the agency has not yet developed the system due to budgetary constraints.
A VA contractor estimated in 2013 that it would cost approximately $10 million to implement an online verification system, although VA officials were unsure if this estimate is still accurate. Such an investment would provide substantial long-term savings for VA compared with the current system by reducing housing overpayments, and it would also help VA comply with federal requirements to establish practices that ensure funds are safeguarded against waste or loss. For example, VA made almost $111 million in housing overpayments in fiscal year 2013, almost $29 million of which was still uncollected as of November 2014. Although requiring veterans to verify their enrollment would not eliminate all of these overpayments, if the contractor’s prior cost estimate is still accurate, the new system could pay for itself in 1 year if it reduced uncollected housing overpayments by just one-third. The potential savings from an enrollment verification system would likely increase in future years as the size of the Post-9/11 GI Bill program continues to grow, creating a long-term benefit for VA and taxpayers. Federal internal control standards state that agencies need effective policies and procedures to help achieve results and ensure stewardship of government resources.

Schools cause overpayments when they make processing errors, such as reporting the wrong enrollment dates or billing VA for non-allowable fees. VA estimated that these errors account for 8 percent of high-dollar overpayment cases in fiscal years 2013 and 2014, while we estimated that they account for around $28 million of the $280 million in high-dollar overpayments VA made in fiscal year 2014. School errors also accounted for over half of the overpayment findings in VA’s initial analysis of compliance surveys conducted in fiscal years 2014 and 2015. Similarly, we identified a variety of school errors that resulted in overpayments among the 24 compliance surveys we reviewed.
For example, in one compliance survey, the school certifying official did not know how to correctly report the last day of the school term, resulting in overpayments of $7,000, $4,000, and $3,500 for three different students. School officials can also create overpayments when they are unfamiliar with the types of school fees VA allows under the Post-9/11 GI Bill program. For example, in one school compliance survey we reviewed, a school certifying official billed VA $200 in non-allowable book fees for multiple students. At another school, the school certifying official billed VA for a variety of non-allowable fees, all of which resulted in overpayments. VA compliance survey specialists used these reviews to educate school officials on VA payment policies and sometimes recommended that the school certifying officials obtain additional training. School officials without adequate training were commonly cited as a source of school errors in our interviews with staff from VA’s Regional Processing Office and Debt Management Center. Regional Processing Office staff described some common errors made by school certifying officials: some do not adjust the tuition or fee amount when reporting an enrollment change; some submit conflicting information in the comments field; and some report the wrong withdrawal date (e.g., using the date the form was submitted rather than the date the enrollment change occurred). VA Debt Management Center staff stated that school certifying officials often have trouble reporting enrollment changes correctly and could benefit from additional training on Post-9/11 GI Bill policies. For example, school certifying officials sometimes do not understand that veterans will incur an overpayment for dropping classes after the term has begun, and they are often confused by VA’s definition of school terms.
Compliance survey officials noted that there is a high level of turnover among school certifying officials and that staff who are new to the job face a steep learning curve. Additionally, a 2014 VA Inspector General report identified multiple school reporting errors and noted that if school certifying officials completed VA’s recommended training, it could improve the timeliness and accuracy of their submitted claims. VA offers a variety of training opportunities for school officials, but VA officials said the agency lacks the authority to require school certifying officials to complete any of its training on how to correctly process enrollment information, even though the Post-9/11 GI Bill program is complex for schools to administer and schools caused an estimated $28 million in high-dollar overpayments in fiscal year 2014. In addition to publishing a handbook for school certifying officials, VA offers a 40-hour, self-paced, online course that provides a comprehensive overview of schools’ responsibilities and GI Bill payment processes. Prior GAO and VA Inspector General reports have recommended that VA do more to encourage school certifying officials to take advantage of VA training opportunities. While VA has taken some steps to address these recommendations, and officials said they have conducted outreach through conferences and workshops, the number of school officials completing VA’s online training remains low. In 2014, only 29 percent of school certifying officials who accessed this training completed it (666 of 2,259); the same percentage has completed the training so far in 2015 (358 of 1,218 as of June 2015). According to VA officials, VA lacks direct legal authority to require school certifying officials to complete a minimum level of training on how to implement the program, although they indicated that they would like the ability to do so.
However, school officials have essential duties in processing Post-9/11 GI Bill payments and need to possess and maintain a level of competence to do their jobs, which includes receiving training; this would be consistent with federal internal control standards. In the absence of minimum training requirements for school officials, VA lacks reasonable assurance that staff with key responsibilities in the payment process know how to avoid creating overpayments.

VA causes overpayments when it issues a duplicate payment or makes data entry errors regarding tuition or fee amounts, training time, and other key enrollment data. VA estimated that these errors caused 2 percent of the high-dollar overpayment cases in fiscal years 2013 and 2014, and we estimated that they account for around $6 million of the $280 million in high-dollar overpayments VA made in fiscal year 2014. In one of the case files we reviewed, a veteran had to repay $7,200 in tuition debt because VA erroneously calculated a larger benefit than he was eligible for based on his years of service. In another case, VA sent a tuition payment to the wrong school, which the school then had to repay. In a veteran housing overpayment case we reviewed, VA accidentally processed a veteran’s enrollment certification twice, making it appear that he was enrolled in more classes than he actually was. As a result of this error, VA provided housing payments to the veteran, although he should have been ineligible because he was enrolled less than half-time. The veteran ended up with a $3,000 overpayment when the error was detected. VA has taken steps to address processing errors through technology improvements, quality assurance reviews, and training. For example, agency investments in technology improvements have allowed a proportion of claims to be processed automatically, reducing the possibility of human error, according to VA officials.
In fiscal year 2014, VA automatically processed 51 percent of Post-9/11 GI Bill claims. VA also monitors overall payment accuracy at the four regional offices. A quality assurance team reviews a sample of 25 Post-9/11 GI Bill payments from each regional office every quarter, and VA officials said any common errors identified during these reviews are addressed through additional training or guidance. In addition, officials from the regional office we visited said that each month they review five claims processed by each employee to monitor payment accuracy. VA also ensures that claims processors stay up-to-date on program policies by requiring 24 hours of refresher training each year.

VA relies on mailed letters to notify veterans and schools of overpayments, which can leave some veterans and schools unaware of their debts until VA begins collecting them. Once an overpayment debt is created, VA mails a sequence of letters to notify the responsible veteran or school of the debt. For veterans, these letters are generally sent to the addresses listed on their initial applications for Post-9/11 GI Bill benefits, according to VA officials. However, students are a highly transient population and Post-9/11 GI Bill benefits can be used over multiple years, so for many veterans the addresses VA relies on may no longer be accurate, according to VA and school officials. When veterans do not receive notification letters for their debts, they may not know they have a debt, causing them to miss key deadlines. Specifically, VA will not suspend collection actions if a veteran requests a waiver or disputes the amount of a debt more than 30 days after the initial notification letter. Then, if VA begins collecting the debt by offsetting other benefits, such as monthly housing payments, veterans may be unprepared and unable to cover their expenses, potentially creating financial obstacles to continuing their education, according to officials from veteran service organizations.
Although VA does not keep data on undeliverable mail, VA call center staff said they frequently receive calls from veterans who were unaware of their overpayment debts until their federal tax refunds were offset, leaving these veterans little chance to plan and budget for how to cover their living expenses in light of impending collections. If veterans do not receive a notification letter, the deadlines for disputing a debt or requesting a waiver will pass without their knowledge, leaving them with limited recourse to halt collection actions and potentially harming their credit rating. For schools, VA mails the letters to the school’s central address, leaving the school responsible for directing the letter to the appropriate administrator or office. However, some school administrators told us that these letters sometimes get lost in transit, putting schools at risk of having future federal grants offset for collection. These problems with mailed letters not only create complications for veterans and schools, but they also make it more difficult for VA to collect debts, since veterans and schools are less likely to repay their debts in a timely manner if they are unaware that the debts exist. VA is required to notify veterans and schools of any debts in writing, and according to federal internal control standards, all agencies should ensure they are using adequate means of communicating with external stakeholders. To communicate effectively with veterans and schools, VA may need to use other notification methods in addition to mailed letters. For example, administrators at six of the nine schools we interviewed expressed a preference for VA to use electronic rather than mailed correspondence. Students are also generally accustomed to electronic communication, as one school official noted, which could alleviate potential issues with out-of-date mailing addresses.
VA already has existing online portals that could be leveraged to inform veterans and schools about their overpayment debts. VA’s eBenefits portal, for example, already provides over 3 million veterans with access to personalized information about their VA benefits. VA officials said this system could be upgraded to provide veterans with online access to debt notification letters; however, the agency has not implemented this proposal due to other funding priorities. Officials at six of the nine schools we interviewed suggested email as another potential low-cost option; however, VA would need a mechanism to ensure it is using up-to-date email addresses. Because VA has not pursued alternative mechanisms to supplement mailed letters, significant numbers of veterans and schools may remain unaware of their overpayment debts and unprepared for the financial consequences, which could also complicate and prolong VA’s collection efforts.

VA also splits essential debt information across two separate letters, making it difficult for veterans and schools to reconcile and repay overpayment debts. VA is required to provide debtors with information on the amount of the debt, the reason for the debt, their right to dispute the debt or request a waiver, and how to repay it. However, VA conveys this information in two separate letters, neither of which contains all the information a veteran or school needs to understand the overpayment and how to repay it (see fig. 8). For example, VA’s regional offices mail veterans an initial notification letter that includes information on the amount and cause of their overpayment debt, but it is not until more than 30 days later that VA’s debt collection office mails the veteran a second letter with information on how to repay the debt, according to VA officials. Because each letter provides veterans with only half of the information they need, the letters can create confusion.
Specifically, VA debt collection staff said they frequently receive questions from veterans who are confused because information on the cause of overpayments and on the collection process is conveyed in separate letters. For example, officials from the debt collection call center said one of the most common questions they receive from veterans is why their debt was established, since this information is not included in the second letter veterans receive from the debt collection office. This delay in receiving all of the information associated with a debt could also slow the collection process, since veterans may be less likely to repay their debts until they both understand the cause of the overpayment and know how to repay it. VA similarly mails this information to schools in two separate letters, and school officials told us this creates some confusion because they must wait to repay any overpayment debts until they can match up both letters from VA, which can also delay collections. One administrator told us that it can be particularly difficult to keep track of the two separate letters at larger schools that enroll thousands of veterans; that administrator’s school had to hire two new staff members to handle the administrative burden of researching overpayment debts. VA officials said the initial notification letters are sent separately from the debt collection letters to promptly notify veterans about their debts and to allow veterans time to dispute them. Although this may justify sending two separate letters, it would still help avoid confusion for veterans and schools if the later debt collection letters included information on both the cause of the overpayment and repayment options. The collection letters mailed by VA’s debt collection office do not include details on the cause of debts because VA’s regional offices do not currently share this information when referring debts for collection.
Officials from VA’s debt collection office told us they would like to include some basic information on the cause of debts in their letters, such as the term dates associated with a veteran’s withdrawal. However, VA officials responsible for administering the Post-9/11 GI Bill said the cause of overpayments is already clearly communicated in the initial notification letters. Nevertheless, this should not preclude the regional processing offices from sharing basic information on the causes of overpayments so it can be included in subsequent letters along with information on how to repay debts. Given VA’s regulatory requirement to provide students and schools with information on both the cause of their debts and how to repay them, VA could improve its efforts to convey this information; doing so would also be consistent with federal internal control standards for information sharing. The current lack of information sharing between offices limits VA’s ability to communicate both the overpayment cause and repayment options in at least one of the letters it sends to veterans and schools. This process can lead to confusion and delay repayment by veterans and schools.

VA’s formula for prorating overpayments gives veterans credit for extra days of attendance after they drop a class, thereby reducing the amount subject to collection. When a Post-9/11 GI Bill beneficiary drops a class during the term, VA prorates the resulting overpayment as though the veteran attended class through the end of the month rather than using the actual date of the withdrawal. For example, if a student drops a class on September 1, VA would prorate the overpayment amount as though the student had been enrolled through September 30. This in effect credits students for up to 30 extra days of classes they did not attend, which can reduce the overpayment amount subject to collection by hundreds of dollars per veteran (see fig. 9).
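The effect of the month-end crediting described above can be sketched as follows. This is an illustrative model only: the straight-line daily proration, dollar figures, and term dates are assumptions, not VA's actual formula or data.

```python
from datetime import date
import calendar

def prorated_overpayment(tuition, term_start, term_end, drop_date, month_end_rule=True):
    """Prorate a tuition overpayment for a dropped class.

    With month_end_rule=True (the policy described above), the veteran is
    credited with attendance through the end of the month in which the class
    was dropped; with False, the actual drop date is used. The straight-line
    daily proration is an illustrative assumption.
    """
    if month_end_rule:
        # Credit attendance through the last day of the month of the drop
        last_day = calendar.monthrange(drop_date.year, drop_date.month)[1]
        credited_through = min(date(drop_date.year, drop_date.month, last_day), term_end)
    else:
        credited_through = drop_date
    term_days = (term_end - term_start).days
    unattended_days = max((term_end - credited_through).days, 0)
    return round(tuition * unattended_days / term_days, 2)

# A veteran drops a hypothetical $900 class on September 1 of a Sep 1 - Dec 15 term.
start, end, drop = date(2014, 9, 1), date(2014, 12, 15), date(2014, 9, 1)
month_end_debt = prorated_overpayment(900, start, end, drop, month_end_rule=True)
actual_date_debt = prorated_overpayment(900, start, end, drop, month_end_rule=False)

# Crediting attendance through September 30 shrinks the debt VA collects.
assert month_end_debt < actual_date_debt
```

Under these assumed figures, the month-end rule credits nearly a month of unattended classes, reducing the collectible debt by roughly a quarter of the class's tuition.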
VA officials said this policy was designed for monthly housing benefits, and then applied to tuition benefits that are paid separately under the Post-9/11 GI Bill. Although crediting a student for a full month of enrollment may be appropriate for benefits that are paid on a monthly basis, such as housing, it is less appropriate for benefits that are paid as up-front lump sums, such as tuition, particularly since the law stipulates that education benefits shall only be paid for the period of time during which the veteran is enrolled. Since VA’s overpayment calculation is crediting veterans for school days they did not attend, it is inappropriately increasing the cost of the program. In addition, VA’s formula for prorating overpayment amounts does not account for schools’ own internal refund policies, and can sometimes result in veterans receiving surplus funds that VA is not collecting. Some schools have fairly generous tuition refund policies when students drop a class or withdraw from school early in the term. In these cases, a school may send the veteran a tuition refund that is larger than the overpayment amount the veteran owes to VA, leaving the veteran with a potential financial gain. For example, one of the community colleges we examined provided a 100 percent tuition refund if a student withdrew within the first two and a half weeks of the term, so a veteran withdrawing from school would receive a full refund of $2,100 from the school. However, VA would create an overpayment debt of $1,750 for this veteran, since the veteran had attended two and a half weeks of the 15-week term before withdrawing. This would leave the veteran in this example with an extra $350 after repaying their overpayment debt. The excess tuition payments VA is not collecting are even larger at schools with higher tuition rates (see fig. 10). 
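The two calculations above can be sketched as follows. This is a simplified illustration using the figures from the text; the function names and the linear weekly proration are assumptions made for the sketch, not VA’s actual formula.

```python
import calendar
from datetime import date

def extra_days_credited(drop_date):
    # Under the end-of-month policy, a mid-month drop is treated as if
    # the student stayed enrolled through the last day of that month.
    days_in_month = calendar.monthrange(drop_date.year, drop_date.month)[1]
    return days_in_month - drop_date.day

def overpayment_by_actual_date(tuition, term_weeks, weeks_attended):
    # Assumed linear proration: the debt covers only the portion of
    # the term the veteran did not attend.
    return tuition * (term_weeks - weeks_attended) / term_weeks

# A class dropped on September 1 is credited through September 30.
print(extra_days_credited(date(2014, 9, 1)))  # 29

# Community college example from the text: $2,100 tuition, 15-week
# term, withdrawal after two and a half weeks, 100 percent refund.
tuition = 2100
debt = overpayment_by_actual_date(tuition, 15, 2.5)
surplus = tuition - debt  # full school refund minus the VA debt
print(debt, surplus)  # 1750.0 350.0
```

Under the actual-date proration the report recommends, the extra-days credit would drop to zero and the debt would track only the unattended portion of the term.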
Three of the nine schools included in our review had institutional refund policies that provided all students with a 100 percent refund for withdrawals within the first two or three weeks of the term. Officials at one of these schools estimated that the difference between their refund policy and VA’s overpayment calculations had resulted in over $136,000 in excess tuition payments for 53 veterans between 2009 and 2014, averaging over $2,500 per veteran. These officials said they had attempted to return these excess funds to VA, but VA would not accept them. Officials at another school said they attempt to return any excess funds to VA three times, but since VA usually will not accept these funds, the school eventually just gives them to the veteran. VA officials told us they cannot accept funds in excess of the overpayment debt that is billed to veterans and said that some of these situations might occur when schools do not correctly account for school refund policies when reporting enrollment and tuition changes. The Post-9/11 GI Bill’s authorizing statute specifies that benefits are only payable in an amount equal to the actual cost of tuition and fees charged by the institution, but in these cases VA’s tuition and fees payments exceed the amounts charged by the schools once the refund policies are accounted for—that is, VA is making payments for tuition amounts that were not charged by the school. As a result, VA is overpaying for tuition and these excess funds are being retained by schools or returned to veterans rather than collected by VA. The Post-9/11 GI Bill has provided valuable education benefits to millions of veterans, but the program’s structure of collecting debts directly from veterans creates certain risks. 
Although the program could have been designed differently from the beginning to avoid some of the subsequent problems with overpayments, at this point, a complete overhaul, such as collecting tuition and fee overpayments directly from schools, would require significant restructuring of VA’s payment operations as well as legislative changes. However, current problems with overpayments can still be addressed through process improvements and proactive management on the part of VA. For example, VA currently collects and monitors only limited data on Post-9/11 GI Bill overpayments and collections, overlooking the most common types of overpayments and collections. Overpayments increased to over $400 million in fiscal year 2014, affecting approximately one in four beneficiaries. Effective program management requires monitoring of key data elements. Unless VA expands its monitoring of overpayment debts and collections, it will not be able to ensure that it is taking appropriate steps to safeguard taxpayer funds. Overpayments are an inevitable byproduct of the Post-9/11 GI Bill since some veteran enrollment changes are to be expected. However, there are ways that VA can reduce the number and amount of future overpayments. For example, if VA provided more information to veterans about potential overpayment debts, veterans would better understand the financial consequences of dropping a class or withdrawing from school and could take steps to avoid some overpayments. In addition, VA provides schools with an optional process for certifying tuition and fees that could reduce the effect of enrollment changes, although it does not explain in its guidance to schools the potential advantages of this process. Moreover, since veterans using the Post-9/11 GI Bill are not required to regularly verify their enrollment, overpayments from enrollment changes are also magnified by any delays in reporting these changes to VA since veterans continue to receive monthly housing payments. 
These potentially avoidable overpayments will continue to occur unless VA proactively addresses issues associated with enrollment changes. Similarly, school reporting errors will continue to cause unnecessary overpayments if school certifying officials do not receive appropriate training to understand VA’s payment and reporting processes. Although VA offers online training that would address these issues, school officials are not required to take part in this minimum level of training, because VA officials believe the agency does not have the statutory authority to require them to do so. VA is responsible for recovering all debts in an efficient and effective manner, but these efforts are hampered by the processes VA uses to notify and collect debts from veterans and schools. Mailed letters alone are not an effective method of notifying students and schools, particularly since other electronic options are also available. This can leave veterans unaware of their debts and create financial hardships for them when VA offsets other income sources to collect these debts. In addition, the lack of a single source of information on both the cause of debts and repayment options creates unnecessary confusion for veterans that can lead to delays in repayment as well as an administrative burden for schools. Finally, VA needs to ensure that it is calculating overpayment debts in accordance with the law and recovering any excess payments. However, VA’s current method of calculating overpayments credits veterans for extra days of attendance and does not account for school refund policies, which unnecessarily increases the cost of the program. To address Post-9/11 GI Bill overpayments resulting from school errors, Congress should consider granting VA explicit authority to require a minimum level of training for appropriate school officials. 
To improve the administration of the Post-9/11 GI Bill, reduce the occurrence of overpayments, and increase debt collections, we recommend that the Secretary of Veterans Affairs take the following eight actions:

Improve program management by:
- Expanding monitoring of available information on overpayment debts and collections. This could include regularly tracking the number and amount of overpayments created and the effectiveness of collection efforts.

Address overpayments resulting from enrollment changes by:
- Providing guidance to educate student veterans about their benefits and the consequences of changing their enrollment.
- Providing guidance to schools about the benefits of using a dual certification process, in which schools wait to certify the actual tuition and fee amounts until after the school’s deadline for adding and dropping classes.
- Identifying and implementing a cost-effective way to allow Post-9/11 GI Bill beneficiaries to verify their enrollment status each month, and requiring monthly reporting.

Improve efforts to notify veterans and schools about overpayment debts by:
- Identifying and implementing other methods of notifying veterans and schools about debts to supplement the agency’s mailed notices (e.g., email, eBenefits).
- Including information on both the cause of the debt and how to repay it in debt letters.

Revise policies for calculating overpayments to increase collections by:
- Prorating tuition overpayments, when veterans reduce their enrollment during the term, based on the actual date of the enrollment change rather than paying additional benefits through the end of the month during which the reduction occurred.
- Ensuring VA recovers the full amount of tuition and fee payments if a school does not charge a veteran for any tuition or fees after the veteran drops a class or withdraws from school. 
For example, VA could adjust its overpayment calculation to account for these situations or provide schools with guidance on how to account for school refund policies when reporting enrollment and tuition changes. We provided a draft of this report to VA for review and comment and received a written response, which is reproduced in appendix II. VA agreed with each of our recommendations and identified steps it plans to take to implement them. To expand monitoring of overpayments and collections, VA plans to develop recurring reports to identify trends and areas for improvements. To address overpayments resulting from enrollment changes, VA plans to provide information on the consequences of enrollment changes in benefit letters and veteran guidance, expand outreach to schools about dual certification, and develop a system for verifying veterans’ monthly enrollment. To address collection issues, VA plans to pursue additional methods of notifying veterans about overpayment debts, include information on the cause of debts and how to repay them in notification letters to schools and veterans, and adjust its regulations and procedures for prorating overpayments and accounting for school refund policies. VA also provided technical comments that we incorporated, as appropriate. We also provided selected portions of the draft to the Department of Education for review; the department did not have any comments. We are sending copies of this report to the appropriate congressional committees; the Secretary of Veterans Affairs; the Secretary of the Department of Education; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III.

The objectives of this report were to examine: (1) what is known about the extent of Post-9/11 GI Bill overpayments and collections, and how effectively the Department of Veterans Affairs (VA) monitors them, (2) the effectiveness of VA’s efforts to address the causes of overpayments, and (3) how effectively VA’s policies and procedures support the collection of overpayments. We reviewed relevant federal laws, regulations, Office of Management and Budget circulars, and federal standards related to financial management and the Post-9/11 GI Bill. We also reviewed documents and guidance from VA. This included claims processing and debt collection manuals, and examples of correspondence with schools and beneficiaries. We also analyzed VA’s monitoring of overpayments and collections by reviewing its internal data tracking and relevant public reports, such as VA’s annual Performance and Accountability Report and Quarterly High-Dollar Overpayment Reports, and assessed these efforts against guidance for government receivables in OMB Circular A-129. We interviewed senior officials from VA about the process for establishing and collecting Post-9/11 GI Bill overpayments. We also visited one of VA’s four Regional Processing Offices in Muskogee, OK, for interviews with management and frontline staff. We selected this location because it is co-located with VA’s Education Call Center. During this site visit we met with claims processors, telephone representatives, and management to discuss the causes of overpayments and VA’s actions to address them. We also visited VA’s debt collection office, the Debt Management Center, which is located near St. Paul, MN. During this site visit we interviewed management and frontline staff about the various mechanisms and timelines they use to collect overpayment debts. 
We assessed VA’s efforts to address the causes of overpayments and collect overpayment debts against the key requirements in the Post-9/11 GI Bill’s statute and regulations, and government standards for internal controls. As a point of comparison, we reviewed selected documents and interviewed officials from the Department of Education about the collection policies and procedures for federal student aid programs. To examine the extent of overpayments and collections, we reviewed available Post-9/11 GI Bill financial data from VA. We primarily focused on overpayments that originated in fiscal years 2013 and 2014, since VA’s data systems only maintain overpayment records in an accessible format for 2 years once the debts have been repaid. We reviewed summary and record level data on the frequency, type, and amount of overpayments and collections for these two fiscal years. We also analyzed data on all outstanding debts dating back to the start of the Post-9/11 GI Bill in 2009. We assessed the reliability of these data by reviewing VA’s reporting systems and conducting electronic testing of the underlying data, and we determined that the data were sufficiently reliable for our reporting purposes. To identify the causes of overpayments, we reviewed VA’s Quarterly High-Dollar Overpayment Reports and supporting documentation for fiscal years 2013 and 2014 because they are VA’s only source of generalizable data on overpayment causes. For these reports, VA randomly selected a sample of 251 overpayments each quarter that were over $1,667 and reviewed each claims file to identify the cause of the overpayment. These samples were all drawn from overpayments for any VA education program and we estimated Post-9/11 GI Bill overpayments as a subgroup analysis, which accounted for 1,710 of the 2,008 overpayments reviewed in fiscal years 2013 and 2014. 
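The quarterly sampling design just described can be illustrated with a textbook stratified-proportion estimator and a 95 percent confidence interval. This is only a sketch: the per-quarter population and sample counts below are hypothetical, not GAO’s data, and GAO’s actual weighting may differ.

```python
import math

def stratified_proportion(strata):
    # strata: list of (N_h, n_h, x_h) tuples, where N_h is the stratum
    # (quarter) population, n_h the sample size, and x_h the number of
    # sampled overpayments with the attribute of interest.
    N = sum(N_h for N_h, _, _ in strata)
    # Population-weighted point estimate across strata.
    p_hat = sum((N_h / N) * (x_h / n_h) for N_h, n_h, x_h in strata)
    # Stratified variance with a finite-population correction.
    var = sum(
        (N_h / N) ** 2 * (1 - n_h / N_h) * (x_h / n_h) * (1 - x_h / n_h) / (n_h - 1)
        for N_h, n_h, x_h in strata
    )
    half_width = 1.96 * math.sqrt(var)  # 95 percent confidence level
    return p_hat, (p_hat - half_width, p_hat + half_width)

# Four hypothetical quarterly strata, each with a sample of 251 cases.
quarters = [(900, 251, 180), (950, 251, 190), (880, 251, 175), (950, 251, 185)]
estimate, (low, high) = stratified_proportion(quarters)
print(round(estimate, 3), round(low, 3), round(high, 3))
```

The sensitivity check GAO describes for the final quarter amounts to re-running this calculation with alternative assumed values of that quarter’s population size and comparing the resulting estimates.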
We analyzed these reports to calculate the types and frequency of issues that created overpayments and to report aggregate data on high-dollar overpayments. We used methods appropriate for a stratified random sample, using weights and stratification that reflect a quarterly sample design. We convey the sampling error in the form of confidence intervals at the 95 percent confidence level. All results from a statistical sample are subject to the sampling error that would result if a different randomly selected set of units from the same population had been selected. If the same sampling procedure were repeated many times, we would expect the 95 percent confidence intervals for an estimate to contain the true population value in about 95 out of 100 samples. For the last quarter of fiscal year 2014, the number of high-dollar overpayments (the population size) was not known. We assumed it was the same as that of the previous quarter, quarter 3, and confirmed that our estimates were not sensitive to this assumption. Specifically, we compared estimates for high-dollar amounts and overpayment causes under three assumed scenarios for the number of high-dollar overpayments for the last quarter of fiscal year 2014: (1) the same as the previous quarter; (2) the same as the smallest observed in fiscal years 2013 and 2014; and (3) the same as the largest observed in fiscal years 2013 and 2014. Results did not substantively differ. We assessed the reliability of the data in these quarterly reports through our case file reviews and by interviewing VA officials about the processes and systems they use to develop the reports, and we determined that the data were sufficiently reliable for our reporting purposes. We conducted an in-depth case file review of 20 overpayment cases to identify specific examples of how overpayments are created and calculated and to help assess the reliability of VA’s data systems for reporting on overpayments. 
We selected this nongeneralizable sample from the 251 cases sampled in VA’s high-dollar overpayment report for the first quarter of fiscal year 2014. From the 251 cases, we selected all four caused by VA error and randomly selected 6 of the 21 caused by school errors. We then randomly selected a sample of 10 of the 226 overpayments caused by student enrollment changes (see table 2). To review these cases, we analyzed claims processing documents, school reporting forms, and debt notification letters to determine the circumstances that created the overpayment. We also interviewed VA officials responsible for analyzing the cases for quarterly reports about the process and quality control measures. We also examined other VA data sources for information on overpayments. For example, we reviewed available summary data and a selection of 24 of VA’s school compliance surveys to identify examples of school reporting errors, and reviewed veteran complaints about overpayment issues submitted through VA’s GI Bill Feedback system. We interviewed administrators at nine institutions that enroll Post-9/11 GI Bill beneficiaries about their experiences with overpayments. We selected this nongeneralizable sample to include a mix of program lengths, sectors (public, private nonprofit, and private for-profit), schools with student veteran populations ranging from 14 to more than 14,000, and regions representing all four of VA’s Regional Processing Offices, which process Post-9/11 GI Bill claims (see table 3). We interviewed representatives from several veteran service organizations and higher education organizations to obtain their perspectives on the causes and effects of overpayments. 
These organizations included Student Veterans of America, Iraq and Afghanistan Veterans of America, the National Association of Veterans Program Administrators, Veterans of Foreign Wars, The American Legion, the American Council on Education, the National Association of College and University Business Officers, and the American Association of Collegiate Registrars and Admissions Officers. We also interviewed directors from three state approving agencies—state agencies that are responsible for reviewing and approving schools for participation in VA education programs—about the findings from compliance surveys they conduct to assess whether schools are adhering to applicable laws and regulations. We conducted this performance audit from April 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making key contributions to this report were Michelle St. Pierre, Assistant Director; William Colvin, Analyst-in-Charge; Jennifer Cook; and Karen L. Cassidy. In addition, key support was provided by Julia DiPonio, Kathy Leslie, Ying Long, Phillip McIntyre, Sheila McCoy, Mimi Nguyen, Ronni Schwartz, Barbara Steel-Lowney, Walter Vance, Sonya Vartivarian, Charlie Willson, and Craig Winslow.
|
VA provided $10.8 billion in Post-9/11 GI Bill education benefits to almost 800,000 veterans in fiscal year 2014. GAO was asked to review overpayments for the program, which can create financial hardships for veterans who are generally required to pay them back and which can result in a significant loss of taxpayer dollars if they are not collected. This report examines (1) the extent of overpayments, (2) how effectively VA has addressed their causes, and (3) the effectiveness of VA's collection efforts. GAO analyzed overpayment data for fiscal years 2013 and 2014, examined the causes from a generalizable sample of high-dollar overpayments (greater than $1,667), conducted a case file review of 20 overpayments (selected for a variety of causes), and reviewed VA's monitoring of overpayments. GAO also interviewed senior and frontline staff at two VA offices that process claims and collect debts, officials at nine schools (selected for variation in program length and their status as public, nonprofit, and for-profit), higher education associations, and veteran service organizations. The Department of Veterans Affairs (VA) identified $416 million in Post-9/11 GI Bill overpayments in fiscal year 2014, affecting approximately one in four veteran beneficiaries and about 6,000 schools. Overpayments most often occur when VA pays benefits based on a student's enrollment at the beginning of the school term and the student later drops one or more classes (or withdraws from school altogether). Students therefore receive benefits for classes they did not complete, and the “overpayment” must be paid back to VA. A small percentage of overpayments occurred because of school reporting or VA processing errors. 
GAO found that most overpayments were collected quickly, but as of November 2014 (when VA provided these data to GAO), VA was still collecting $152 million in overpayments from fiscal year 2014, and an additional $110 million from prior years, primarily owed by veterans, with the remainder owed by schools. Inadequate guidance, processes, and training have limited VA's efforts to reduce overpayments caused by enrollment changes and school errors.

Guidance for veterans. Many veterans may not realize they can incur overpayments as a result of enrollment changes because VA provides limited guidance to veterans on its policies. As a result, veterans may be unaware of the consequences of enrollment changes until after they have already incurred their first overpayment debt, according to school officials. Because VA is not effectively communicating its program policies to veterans, some veterans may be incurring debts that they could have otherwise avoided.

Enrollment verification process. While veterans using other VA education programs have to verify their enrollment each month, VA generally does not require those using the Post-9/11 GI Bill to do so. Because VA does not require veterans to verify their enrollment every month, significant time can lapse between when veterans drop courses and when the change is reported. As a result, veterans can incur thousands of dollars in overpayments, which also increases the program's costs associated with collecting these debts.

Training for school officials. Overpayments also occur when schools make errors, such as reporting enrollment information incorrectly, which VA officials said is sometimes attributable to a lack of training. For example, some school officials routinely made systematic errors reporting enrollment information, creating thousands of dollars in overpayments. 
Not all school officials attend the different training opportunities VA offers, and VA officials said the agency lacks the authority to require school officials to participate in any of them. VA officials said they would like school officials to take a minimum level of training, which could help reduce errors and related overpayments. The effectiveness of VA's collection efforts is hindered by its notification methods. VA relies solely on paper mail to notify schools and veterans of overpayments. VA generally sends veterans' notices to the addresses from veterans' initial benefit applications. However, these addresses can often be out of date, so some veterans do not receive the letters, leaving them unaware of their debts. This can cause veterans to unknowingly miss deadlines for disputing their debts and leave them unprepared to cover living expenses if VA begins withholding future benefit payments or offsetting tax refunds for collection. This can also lead to delays in the collection of overpayments from veterans. Congress should consider granting VA explicit authority to require training for school officials. In addition, GAO is making a number of recommendations to improve VA's guidance and processes, including providing program guidance to veterans, verifying veterans' monthly enrollment, and developing additional debt notification methods. VA agreed with GAO's recommendations to the agency and plans to address these issues.
|
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. Federal agencies have provided new and valuable information on their plans, goals, and strategies since they began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the President’s budget, provide a direct linkage between an agency’s longer-term goals and mission contained in its strategic plan and its day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a newer and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. Education’s mission is to ensure equal access to education and to promote educational excellence throughout the nation. 
This year’s interim performance report listed the following four department-wide strategic goals: (1) help all children reach challenging academic standards so that they are prepared for responsible citizenship, further learning, and productive employment; (2) build a solid foundation for learning for all children; (3) ensure access to postsecondary education and lifelong learning; and (4) make Education a high-performance organization by focusing on results, service quality, and customer satisfaction. For each goal, Education has established various objectives, and for each objective there are indicators with which to measure its progress. Additionally, in the interim fiscal year 2000 program performance report, Education presents program data for about 180 Education programs. This program report includes some information on planned fiscal year 2002 performance (e.g., program-level targeted goals) and a section explaining how Education plans to address some of the management challenges identified by GAO and others. Where applicable, in addition to the information in the interim performance report, we used indicator data from the interim program performance report to help assess Education’s progress in achieving the selected outcomes. Additionally, we used the management challenges section to help identify progress and planned activities in this area. This section discusses our analysis of Education’s performance in achieving the six selected key outcomes. We will not be able to discuss strategies that Education has in place to achieve these outcomes because Education has not yet provided this information. Education officials told us that these strategies will be included in the fiscal year 2002 department-wide performance plan scheduled to be issued by September 30, 2001. In discussing the extent to which the agency provided assurance that the performance information it is reporting is credible, we have drawn information from our prior work. 
Additionally, due to the nature of the performance information it was difficult to assess Education’s progress for some outcomes. To measure success in some areas, Education relied on long-term trend data that is collected only every 2, 3, 4, or 6 years. These gaps in data make a full analysis of Education’s progress difficult. In January of this year, we reported that Education needed to improve the quality and timeliness of the data on which its programs are evaluated. Without taking this step, Education will continue to be challenged in assessing its progress for the selected outcomes on an annual basis. The interim report showed that Education made little progress in achieving this outcome. Education has seven performance objectives and 35 indicators to measure progress toward achieving this outcome. Of the 35 indicators related to this outcome, fiscal year 2000 data were only available for nine indicators. Due to the lack of fiscal year 2000 data for this outcome, we limited our analysis to these nine indicators and found that Education made little progress toward this outcome. Specifically, Education reported that it had not met two goals related to challenging content and student performance standards—goals that are directly related to the achievement of the outcome. As a first step, before all students can reach challenging standards that prepare them for responsible citizenship, further learning, and productive employment, challenging content and student performance standards must be in place. One of the 35 performance indicators, “by the end of the 1997-98 school year, all states will have challenging content and student performance standards in place for two or more core subjects,” focuses on this prerequisite, which had not been achieved by the end of 2000. Twenty-seven states and Puerto Rico had demonstrated to Education that they had completed the development of both content and student performance standards. 
Education had approved the content standards development process for the District of Columbia, Puerto Rico, and all states except one. In its assessment of progress for this indicator, Education explains that rather than developing student performance standards as a template for assessments, which were not scheduled to be in place until the 2000-01 school year, many states are developing their assessment instruments first and then constructing performance standards on the basis of pilot tests of their assessments. In looking at the remaining seven indicators with data, Education reported that the goal was met in three instances and not met in the other four. All three of the indicators with goals met are linked to the outcome, but provide only limited information with which to judge progress. For all six indicators with unmet goals, there was minimal discussion of why the goals were not met. The interim report showed that little progress has been demonstrated for this outcome. Education established 18 indicators to address the goal of building a solid foundation for learning for all children. According to the interim performance report, data for fiscal year 2000 were only available for two indicators—for both of which the goals were met. According to Education officials, 2000 data will be provided in the future as data become available from the states. However, these two indicators alone, “number of tutors in the America Reads program” and “that more than 35 percent of Title I schools adopt a research-based way to improve curriculum,” do not provide sufficient information with which to gauge progress toward meeting the outcome. As with some of the other outcomes, data necessary to evaluate progress is not collected annually, making an annual assessment of progress difficult. According to the interim report, Education’s performance objective to have greater public school choice available to students and families has been at least partially met. 
The interim fiscal year 2000 performance report indicated that the interim target for one of the three indicators—that by 2002, there will be 3,000 charter schools in operation around the nation—was exceeded. The interim target for fiscal year 2000 relating to this indicator was 2,060 charter schools; 2,110 were actually in operation. However, the report notes that the majority of the charter schools are located in only seven states. According to the interim performance report, data for fiscal year 2000 are not yet available for one indicator—that by 2003, 25 percent of all public school students in grades K-12 will attend a school that they or their parents have chosen. For the third indicator—that by 2000, a minimum of 40 states will have charter school legislation—the goal was not met. From 1991, when Minnesota became the first state to enact charter school legislation, other states joined steadily until 1999, when the list totaled 38 and remained at 38 through 2000. There was no discussion of why the goal was not met, or why only 40 states are included in the goal. According to the interim report, there was limited progress in meeting this outcome. Education measures progress for this outcome by looking at the national trends in student drug and alcohol use, including in-school use, and national trends in student victimization and violent incidents in schools. Of the four indicators, Education expects progress in three based on national drug use and violent crime trends, and the goal was partially met for one—reducing the prevalence of past-month use of illicit drugs. Specifically, we have the following comments on these indicators: Of the four indicators, two are for measuring violent behavior and two are for drug and alcohol use. For both violent behavior indicators, data are not available for fiscal year 2000; however, Education has concluded that progress is likely.
In making this determination, Education is using national statistics demonstrating that there has been a decrease in the overall juvenile crime and violence rates since the mid-1990s. The data for one indicator—the level of disorder in schools—tracks only physical fights on school property; no reasons were given as to why other disciplinary problems were not tracked. However, our recent report on discipline showed that fistfights are the most prevalent form of serious misconduct and, therefore, probably the best proxy measure to use when only one behavior is being tracked. According to Education, data should be available within the next few years to measure actual progress for both indicators for this outcome. From the indicators on drug and alcohol use, it appears that progress is mixed. Education did not meet its goal for past-month alcohol and illicit drug use for 2000; however, Education reports that alcohol use levels have remained relatively steady for years and stated that illicit drug use may have leveled off in recent years, according to national trend data. For the second indicator—rates of in-school alcohol and drug use will begin to fall by 2001—Education reports that progress toward this goal is likely. It based its estimation of success on the fact that the goals for both alcohol and drug use for 1999 were exceeded (even beyond the 2000 goal level) and that overall alcohol and drug use rates have remained steady for years. However, as we noted in last year’s report, Education is using alcohol and marijuana use by 12th graders as a proxy for all alcohol and drug use, respectively. Education does not provide an explanation as to why only marijuana was used for this indicator. In response to our report last year, Education acknowledged that we were correct in our observation that the indicator is narrow in scope and stated that it intended to address this in its fiscal year 2002 plan. 
Education did not establish a fiscal year 2000 performance goal or objective to specifically address this outcome. In a section designed to address management challenges facing Education, the interim performance report does report progress towards achieving a related objective: management of department programs and services ensures financial integrity. This is presented as a department-wide objective and does not discuss progress or performance for any specific programs. The discussion of this objective lists several actions Education’s Office of the Chief Financial Officer is taking to accomplish it. These actions include implementing a new general ledger software system; enhancing internal controls, reconciliation, and reporting processes; and improving acquisition systems. According to Education officials, future performance plans may include goals and measures to specifically address this outcome. Additionally, Education has revised its strategic plan to include an objective of ensuring financial integrity within the department. Based on the two indicators identified for the financial integrity objective, progress in achieving this objective is mixed. The first indicator, that Education will receive an unqualified opinion on its fiscal year 2000 financial statement audit, was unmet. However, because the auditors identified fewer material weaknesses and reportable conditions related to Education’s internal control systems than they found in last year’s audit, Education states that it is making progress. For the second indicator, Education reported that it achieved its target for increasing the use of performance contracts. In the management challenges section of the interim report, Education established a target to remove the student financial assistance programs from GAO’s high-risk list. Education lists several actions, such as developing a corrective action plan, to address program weaknesses. 
There were, however, no specific goals or measures for this challenge. As we reported to you last year in our assessment of Education's fiscal year 2001 performance plan, we continue to believe the department should have a goal or objective to specifically address this outcome. The student aid programs remain on GAO's high-risk list, and we recently testified on serious internal control weaknesses we identified in a review of the department's payment practices. For example, we stated that Education had poor segregation of duties for making payments because some individuals at the department could control the entire payment process—leaving Education at risk for fraud. Also, we cited the need for Education to have better controls over its process for reviewing and approving purchases made with government purchase cards. In addition to establishing the target of getting off GAO's high-risk list, Education created a task force—or Management Improvement Team—to achieve this result. Among other things, the task force is charged with (1) obtaining a clean audit opinion, (2) removing the student financial aid programs from GAO's high-risk list, (3) putting in place an effective system of internal controls to protect against waste, fraud, and abuse, and (4) continuing to modernize student aid delivery and management. The interim report shows that Education's progress in meeting this goal was mixed. One of Education's four strategic goals is to ensure access to postsecondary education and lifelong learning. We examined the four objectives and 16 indicators for this goal to determine progress in meeting this outcome. Education used a combination of enrollment rates, amount of unmet financial need, customer satisfaction, and rates of employment to measure its progress for this goal. Of the 16 indicators, we found that Education had met or exceeded its targets for five.
For the remaining 11 indicators, however, there were no fiscal year 2000 data available to measure actual progress. We have the following observations on the indicators: Education had two objectives and nine indicators to measure progress in the areas of ensuring access to postsecondary education. Indicators include enrollment rates, rates at which parents and students request/receive information on admission standards and financial aid, and the amount of unmet financial need that exists for students. In general, the indicators present a mixed picture of Education's success in achieving this outcome. Of the nine indicators, the goal was met or exceeded for two; no data were available for the remaining seven indicators. Additionally, in looking specifically at the indicators, we found that for one indicator classified as goal met—participants receiving support services—the data were from 1997 and the report did not discuss any planned updates for more current data. For Education's objective of delivering student aid in an efficient, financially sound, and customer-responsive manner, there were no unmet indicators. For this objective's three indicators, Education reported that the Office of Student Financial Assistance (OSFA) either met or exceeded its target. For example, in measuring customer satisfaction with OSFA's products and services, Education found that not only was the indicator of improving OSFA's rating met, but also the office was only one percentage point away from meeting its multi-year target of a customer satisfaction rating comparable to the private financial services sector average. Education used four indicators to measure its progress toward meeting the objective that all educationally disadvantaged adults can strengthen their literacy skills and improve their earning power over their lifetime through lifelong learning. No fiscal year 2000 data were available for these indicators.
In addition to the lack of current data, the data for all four were limited because information is collected and reported by state and local service providers and, in some instances, there is no independent verification of the data. For the selected key outcomes, this section describes major improvements or remaining weaknesses in Education's interim fiscal year 2000 performance report in comparison with its fiscal year 1999 report. One prominent weakness in last year's performance report—a lack of data for the reporting period—continues in the interim fiscal year 2000 report. Specifically, Education did not have fiscal year 2000 performance data for over three-fourths of the goals associated with the outcomes we looked at. According to Education officials, 2000 data will be provided in the future as data become available. The biggest difference in the reports is the lack of a discussion of how Education plans to achieve its objectives and unmet goals. Additionally, only limited explanations were given as to why goals were unmet. Education officials told us that they did not want to pursue a planning effort—including activities related to how the department plans to achieve its objectives—until senior leadership has been appointed and the President's education proposal is passed into law. Instead, as stated earlier, Education wants to wait and incorporate any changes in departmental strategies in the final fiscal year 2002 performance plan scheduled to be issued by September 30, 2001. The biggest improvement to this year's report is the addition of a section dealing with some of the major management challenges. In this section, Education discussed over half of the 14 major management challenges facing the department as identified by GAO and Education's OIG. According to Education officials, the department decided not to report on those management challenges for which plans from the new administration might affect the strategy for addressing the challenge.
More specifically, Education addressed those challenges for which the course of action would be the same regardless of the department's leadership or the contents of new education legislation. For example, one management challenge that Education addressed was to ensure financial integrity; this needed to be addressed no matter who leads the agency or what is included in the President's education proposal. Conversely, Education wanted to wait to address the management challenge to promote coordination with other federal agencies and school districts to help build a solid foundation of learning for all children. According to a departmental official, Education plans to integrate actions needed to address this challenge with actions needed to address other proposed initiatives from the new administration. In general, the section was helpful in that it outlined the scope of the challenges, identified some performance indicators to be used to assess progress in meeting the challenges, and detailed some strategies to address the challenges. The eight challenges discussed in the section were addressed with varying degrees of thoroughness. For example, some challenges were mentioned briefly with a short discussion of the status of the challenge and no discussion of goals or measures; other challenges were discussed in depth with comprehensive discussions of strategies and detailed goals and measures set out. Education officials told us that they plan to address more of the challenges in the department-wide performance plan scheduled to be issued September 30, 2001. GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding strategic human capital management, we found that Education's interim performance report did have some limited data related to human capital, but that the interim report did not explain its progress in resolving human capital challenges.
For example, Education reported on the percentage of managers who believe their staff possess adequate skills for their jobs; however, there was no broader discussion of strategic human capital management such as leadership continuity and succession planning. There was no discussion of strategic human capital management in last year's performance report or plan—no specific goals, measures, or strategies. While noting that the department has addressed strategic human capital management issues to a limited extent in the agency's revised strategic plan, Education officials told us that more needs to be done by the department to address this serious issue. With respect to information security, we found that Education's performance report noted that the department had recently updated security plans and performed security reviews on almost all mission critical systems. Additionally, in the management challenges section of the interim program performance report, Education included management and performance goals for completing specific security activities. For example, the report states that 100 percent of the department's mission critical systems will have security plans and tested contingency backup plans; however, no dates were associated with these measures. In addition to these governmentwide challenges, GAO has identified four major management challenges facing Education that generally encompass some of the outcomes discussed in this report: improving financial management to help build a high-performing agency; ensuring access to postsecondary education while reducing the vulnerability of student aid to fraud, waste, error, and mismanagement; encouraging states to improve performance information; and promoting coordination with other federal agencies and school districts to help build a solid foundation of learning for all children. Education's performance report discussed the department's progress in meeting the first two of these challenges.
Additionally, Education officials told us that the Secretary established the Management Improvement Team to develop a plan to address Education's management challenges. Further, these officials said that the department has discussed some of these issues in its revised strategic plan. Education will continue to be challenged to improve its performance. In general, given the lack of performance data, explanations, and strategies to meet unmet goals in the future, it was difficult to assess progress. Also, we could not assess planned progress given the lack of a performance plan. Specifically, we found that it was difficult to assess Education's progress in achieving the six outcomes due to the lack of fiscal year 2000 data for many of its indicators. The non-annual reporting structure of many studies used for Education's goals makes the lack of fiscal year data a perennial problem in assessing Education's progress on an annual basis. Education will continue to have difficulty in fulfilling its task of annual reporting given the large gaps in reportable annual data. Consistent with our findings in reviewing Education's performance report from last year, we found that Education had no goals or measures associated with the outcome of preventing fraud, waste, mismanagement, and error in the student financial assistance programs. We put the student financial assistance programs on our high-risk list because they are vulnerable to fraud, waste, abuse, and mismanagement. While OSFA has established a target of being removed from GAO's high-risk list, there were no corresponding goals or measures in the department's interim report. However, Education has revised its strategic plan to incorporate an objective of ensuring financial integrity within the department. Education officials also told us that they may include in future performance plans specific goals and measures related to this outcome.
Finally, in last year’s assessment of Education’s performance plan and report, we noted that there was no discussion of how human capital would have supported achievement of the outcomes. We found that similarly for this year, there was no discussion in the interim report on strategic human capital management. To improve Education’s future performance reports and plans, we recommend that the Secretary of Education take the following actions: Initiate a dialogue with the appropriate congressional committees to discuss the lack of annual reporting data and what this means with respect to how management at the department is most appropriately assessed and how Education could be more responsive to Congress in fulfilling its annual GPRA reporting requirements. Develop performance goals and measures to address the outcome of less fraud, waste, mismanagement, and error in student financial assistance programs. Develop specific goals and measures to be included in future performance reports and plans to address the issue of strategic human capital management. As agreed, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from the Office of Management and Budget (OMB) for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of Education’s operations and programs, GAO’s identification of best practices concerning performance planning and reporting, and our observations on Education’s other GPRA-related efforts. We also discussed our review with agency officials in the department and with the department of Education’s OIG. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member, Senate Governmental Affairs Committee as important mission areas for the agency and generally reflect the outcomes for all of Education’s activities. 
The major management challenges confronting Education, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by GAO in our January 2001 performance and accountability series and high-risk update, and were identified by the Department of Education's OIG in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of Education's performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. On June 28, 2001, we obtained written comments on our draft report from the Deputy Secretary of Education. Education generally agreed with our conclusions and recommendations. The Deputy Secretary said that he and the Secretary share many of our concerns about the Department's strategic planning process and management challenges and that Education has taken steps to tackle these issues, including a top-to-bottom review of its strategic planning process and the formation of a team of senior staff to fix the department's management and fiscal accounting problems. In addition, the Deputy Secretary cited anticipated sweeping changes to America's schools and the Department resulting from the reauthorization of the Elementary and Secondary Education Act and other reauthorizations as the rationale for not including strategies for achieving its objectives in the interim fiscal year 2000 performance report. Education also provided oral technical comments on our draft report, which we incorporated when appropriate. Education's written comments are printed in appendix II. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time, we will send copies to appropriate congressional committees; the Secretary of Education; and the Director, Office of Management and Budget. Copies will be made available to others on request. If you or your staff have any questions, please call me at (202) 512-7215. Key contributors to this report were David Alston, Jeff Appel, Kelsey Bright, Cheryl Driscoll, Joy Gambino, Eleanor Johnson, Gilly Martin, Joel Marus, and Glenn Nichols. The following table identifies the major management challenges confronting the Department of Education, which include the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the challenges identified by our office and/or the Department of Education’s Office of Inspector General (OIG). The second column discusses what progress, as discussed in its fiscal year 2000 interim performance report, Education made in resolving its challenges. As mentioned in the body of this report, new to Education’s performance reporting this year was a section entitled: “Management Challenges: Successes and On-going Efforts.” We found this to be a helpful tool for tracking Education’s progress in addressing some of the management challenges. We found, either in the management challenges section or elsewhere in the report, that Education discussed the agency’s progress in resolving more than half of the identified challenges.
This report reviews the Department of Education's performance report for fiscal year 2000. Specifically, GAO examines Education's progress in achieving selected key outcomes that are important to its mission. Given the lack of performance data, explanations, and strategies to meet unmet goals in the future, it was difficult for GAO to assess progress. The lack of a performance plan also hindered GAO's efforts. Specifically, GAO found that it was difficult to assess Education's progress in achieving the six selected outcomes because of the lack of fiscal year 2000 data for many of its indicators. Consistent with its findings in reviewing Education's performance report from last year, GAO found that Education had no goals or measures for preventing fraud, waste, mismanagement, and error in the student financial assistance programs. Although the Office of Student Financial Assistance has established a target of being removed from GAO's high-risk list, there were no corresponding goals or measures in the department's interim report. However, Education has revised its strategic plan to incorporate an objective of ensuring financial integrity within the department. Like last year's report, GAO found that there was no discussion in the interim report on strategic human capital management.
The TAA for Workers program covers workers whose jobs have been threatened or lost due to changing trade patterns. While the specific services and benefits available through the program have changed over time, the primary forms of assistance that have been extended include income support and training. In order for workers to apply for TAA benefits, Labor must certify that their separation was trade-affected. This certification process begins when workers or their representatives file a petition with Labor on behalf of a group of laid-off workers. The agency then conducts fact-finding investigations to determine whether the workers' jobs were adversely affected by international trade. In nearly all investigations, Labor contacts company officials to gather information on the circumstances of the layoff. This information is the basis for many petition decisions. As needed, Labor may also gather information by surveying the company's customers or examining aggregate industry data. The TAA statute lays out certain basic requirements that all petitions must meet in order to be certified by Labor, including that a significant proportion of workers employed by a company be laid off or threatened with layoff. In addition, a petition must demonstrate that the layoff is related to international trade in one of several ways—for example, because the firm shifted production overseas or because increased imports competed with its products. By law, Labor is required to conclude its investigation and either certify or deny a petition within 40 days of receiving it. Once Labor reaches a decision on the investigation, it notifies the relevant state, which has responsibility for contacting the workers regarding Labor's decision. If the workers are certified, the state informs the workers of the benefits available to them, and when and where to apply for benefits. If a petition is denied, a worker may challenge the decision through an appeals process.
The 2009 legislation made substantial changes to the TAA program, including extending eligibility to workers in a greater variety of circumstances. For example, the law extended coverage to workers at firms that provide services—previously, eligibility was restricted to workers in firms producing goods. It also changed eligibility rules for other types of workers, such as those whose firms shifted production overseas, as shown in figure 1. To reflect this broadened eligibility, Labor more than doubled the number of categories by which it could certify a petition. The 2009 legislation also generally enhanced TAA benefit levels. The amount of funding available for training nationally more than doubled—from $220 million to $575 million for fiscal years 2009 and 2010. Further, the legislation increased either the amount or duration of many specific benefits and services, which are available to eligible workers covered by certified petitions filed between May 18, 2009, and February 14, 2011. Specifically, these enhanced benefits and services include:

Extended deadline for enrollment. The 2009 legislation extended the deadline by which workers must enroll in or receive a waiver from training to be eligible to receive income-based support to the later of 26 weeks from the date of TAA certification or the date of separation from employment. Previously, the deadline for enrolling in training was the later of 8 weeks after TAA certification or 16 weeks after separation from employment. The deadline was extended in part to give laid-off workers more time to search for a job before deciding to enroll in training.

Extended income support. Participants enrolled in full-time training who have exhausted their unemployment insurance may receive a continuation of income support equal to their final unemployment insurance benefit. The 2009 legislation provided that participants may receive up to 130 weeks of income support, up from 104 weeks under the prior law. For participants who require remedial or prerequisite courses, the maximum level of income support increased from 130 to 156 weeks. Income support was extended in part to enable workers to participate in longer training programs.

Training. Under the 2009 program, participants have additional training opportunities beyond those that were available under the 2002 program. The 2009 legislation authorized training for workers threatened with a layoff that has not yet occurred in addition to workers who have been laid off. The law also authorized participants to attend training part-time, but limited eligibility for income support to workers in full-time training.

Wage supplement. The 2009 legislation increased the income eligibility threshold and maximum wage supplement benefit for some older workers. TAA participants 50 years or older who secure a new, lower paying job than their previous trade-impacted job may be eligible to receive wage supplements. The 2009 legislation eliminated the requirement that such workers find employment within 26 weeks of being laid off. It also allowed older workers receiving the wage supplement to participate in full-time training if employed at least 20 hours per week. Workers employed on a full-time basis who were not enrolled in training maintained their eligibility for wage supplements.

Job search and relocation allowances. The 2009 legislation increased the amount of job search and relocation expenses for which state workforce agencies could reimburse eligible participants. Specifically, the 2009 legislation provided that the lump sum of job search and relocation expenses would cover 100 percent (up from 90 percent) of the costs, to a maximum of $1,500 (up from $1,250).

Health coverage benefit. The 2009 legislation increased the amount of the tax credit TAA participants could receive through the Health Coverage Tax Credit (HCTC) program from 65 percent to 80 percent of qualifying monthly health plan premiums. The Internal Revenue Service administers this program.

The 2009 legislation also affected Labor's operations by, for example, establishing a new Office of Trade Adjustment Assistance and requiring Labor to collect additional information on workers who receive TAA benefits and services, as well as data on service sector workers, including the service workers' state, industry, and reason for certification. Although the changes made by the 2009 legislation were set to expire on December 31, 2010, Congress extended them through February 12, 2011. At that time, the TAA program reverted to provisions as authorized by the prior law, the Trade Adjustment Assistance Reform Act of 2002. Eight months later, in October 2011, Congress passed the Trade Adjustment Assistance Extension Act of 2011, which reinstated many of the program provisions established by the 2009 legislation, including eligibility for service sector workers. However, this most recent legislation also reduced some of the other benefits and services to the levels set by the 2002 program, such as scaling back the maximum number of weeks of income support from 130 to 104 for participants enrolled in basic training and lowering allowances for job search and relocation from $1,500 to $1,250. See appendix II for a detailed comparison of the 2002, 2009, and 2011 program provisions. In addition to changes in participant benefits and services, the 2009 legislation added requirements regarding the allocation of TAA training funds to the states. It required Labor to make an initial distribution of no more than 65 percent of available funds, holding 35 percent in reserve for additional distributions throughout the year, but ensuring a distribution of at least 90 percent of funds no later than July 15 of the fiscal year.
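The distribution requirements just described reduce to simple percentage arithmetic. The following Python sketch illustrates how the cap and floor interact; the function names and structure are illustrative only and do not reflect Labor's actual allocation methodology.

```python
# Illustrative sketch of the 2009 training-fund distribution rules
# described above (names are assumptions, not Labor's methodology).

def initial_distribution_cap(available_funds):
    """Labor's initial distribution may not exceed 65 percent of
    available funds; the remaining 35 percent is held in reserve
    for additional distributions during the year."""
    return 0.65 * available_funds

def minimum_distributed_by_july_15(available_funds):
    """At least 90 percent of funds must be distributed no later
    than July 15 of the fiscal year."""
    return 0.90 * available_funds
```

For example, with the $575 million available for fiscal year 2009, the initial distribution could be at most about $373.75 million, with at least $517.5 million distributed by July 15.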
The law specified a number of factors for Labor to take into account in making distributions to the states, including factors Labor might consider appropriate, and specified that a state’s initial distribution had to be at least 25 percent of the distribution it received in the preceding fiscal year. The 2009 legislation required that, to cover states’ administrative costs and employment and case management services, Labor distribute to each state an additional amount equal to 15 percent of its annual training allocation. States were required to use at least one-third of those administrative funds for case management and employment services. The 2009 legislation also required that each state be provided an additional $350,000 for case management and employment services. States have 3 years to expend these federal funds. As such, fiscal year 2009 funds had to be used by the end of fiscal year 2011. State and local workforce agencies play key roles in the petition certification process and help workers take advantage of the services and benefits available through the TAA program. The agencies assist workers and employers in filing petitions and can also file petitions on behalf of workers. After a petition is certified, the agencies contact employers to obtain a list of workers affected by the layoff and send each worker a letter notifying him or her of potential eligibility. The agencies may also hold orientation sessions to provide workers with detailed information on the TAA program and other services and benefits available. In addition, case managers provide vocational assessments and counseling to help workers enroll in the program and decide which services or benefits are most appropriate. Local case managers also refer workers to other programs, such as the Adult and Dislocated Worker Programs under the Workforce Investment Act, for additional services. Labor is responsible for monitoring the performance of the TAA program. 
Its primary reporting system, the Trade Activity Participant Report, is intended to track information on TAA activity for individuals from the point of TAA eligibility determination through post-participation outcomes. Prior to 2010, the TAA information was reported only on those who had exited the program, as required by Labor. Each quarter, states are required to submit data on participants who received TAA program services. These data include participant demographics; information on services and benefits received, such as case management and reemployment services; income support; and participant outcomes such as employment status and earnings after program exit. States primarily track these outcomes using Unemployment Insurance wage records. Labor uses data submitted by states to report national outcomes on the TAA performance measures for each fiscal year. The 2009 legislation added a new requirement for states to report on all participants enrolled in the TAA program, not just those who had exited it, as Labor had previously required. As a result of this change, Labor revised its reporting system and required states to submit additional information to track individual benefits and services provided to participants under the new law. In addition, the 2009 legislation required states to report on program outcomes for a longer period after participants exit the program. Labor tracks three core measures of program performance: entered employment rate, average earnings, and employment retention rate. For fiscal year 2012, Labor's performance goals for the TAA program were 59 percent for entered employment, $13,248 for average earnings over a 6-month period, and 83.2 percent for employment retention. The TAA for Workers program is one of four trade adjustment assistance programs; the other three provide assistance to firms, farms, and communities.
The Department of Commerce administers a TAA program that provides funds for manufacturing and other types of firms to develop and implement a business recovery plan. The Department of Agriculture administers the TAA for Farmers program, which provides help to individual producers of raw agricultural commodities, such as farmers and fishermen, to become more competitive in producing their current commodity or to transition to a different commodity. Under a TAA program to assist trade-affected communities, Labor awards grants to institutions of higher education for expanding or improving education and career training programs for persons eligible for training under the TAA for Workers program, and the Department of Commerce provides technical assistance to trade-affected communities and awards and oversees strategic planning and implementation grants. In addition to mandating that GAO report on the TAA for Workers program, the 2009 Act required GAO to report on the other TAA programs as well. Our report on the Farmers program was issued in July 2012, and our reports related to the TAA programs that assist firms and communities were issued in September 2012. Labor took multiple steps to implement the 2009 legislation after it was enacted. For example, it set up the Office of Trade Adjustment Assistance established by the legislation, which took over administration of the TAA program from the Office of National Response. Also, as required by the legislation, Labor issued a regulation implementing the new requirements for the distribution of training funds to states.
Agency officials told us that they also drafted a regulation on investigation standards, as required, but did not publish the regulation because by the time it was ready for publication, the 2009 provisions were set to expire. Also, in accordance with the legislation, the agency updated its information technology system to collect data on service sector workers and implemented a new reporting system for states to collect data on participant activities and outcomes. Labor also took implementation steps beyond those specifically required by law, such as providing training and technical assistance to state workforce agencies and issuing revised guidance on program operations. According to the state officials we interviewed, this assistance was generally both helpful and timely. Labor's primary implementation challenge after the 2009 legislation was addressing a substantial increase in its workload to process petitions. As depicted in figure 2, the number of petitions the agency received in the third quarter of fiscal year 2009, when the law took effect in May 2009, was more than triple the number received the previous quarter. Multiple factors contributed to this increase. According to agency officials, the increase in petitions was caused by the 2009 legislation's expansion of eligibility to new categories of workers as well as the economic recession, which may have increased trade-related layoffs. Another cause for the spike in petitions is that in the months before the law took effect, Labor allowed petitioners to withdraw and then resubmit petitions after the 2009 legislation took effect, so they could take advantage of the new, enhanced benefit levels. As a result, an agency official estimated that roughly 500 petitions were withdrawn before May 18, 2009, and then resubmitted after the law took effect. According to Labor officials, the 2009 legislation generally made it more challenging to determine TAA eligibility.
As described earlier, the law expanded the number of categories for which petitions could be certified. Agency officials told us that this expansion complicated investigators' efforts because petitions needed to be evaluated against a greater number of eligibility criteria than before. Further, some of the new categories presented additional challenges. According to Labor officials, the firms identified in service-related petitions tended to be more dispersed geographically than manufacturing firms, making it more difficult to evaluate certain service-related petitions. For example, in cases where the work that was shifted abroad was performed by workers in multiple locations, it could be difficult to determine exactly which workers had been affected. In addition, some officials said that investigating petitions in which workers produce finished articles that contain foreign components, such as tubes used in televisions, proved challenging. Labor said these petitions often require contact with foreign firms, which can present communication challenges—for example, due to differences in currencies and time zones. Further, they noted the absence of any legal requirement for foreign companies to comply with Labor's data requests. In contrast to these challenges, Labor officials told us that the 2009 legislation made some investigations easier. Previously, TAA eligibility standards were different for nations that did and did not have a free trade agreement or preferential trade relationship with the United States. The 2009 legislation eliminated this difference, making it more straightforward to investigate shifts in production. Labor initially had insufficient capacity to handle its increased workload and thus lagged in processing petitions. As described previously, Labor is required to process a petition—that is, determine whether to certify or deny it—within 40 days.
The quarter after the 2009 legislation took effect, on average, Labor took 153 days to process a petition—nearly four times as long as the statutory limit (see fig. 3). Multiple factors contributed to the lag, including an increased volume of petitions, initial staff shortages and turnover, and the need for staff to become familiar with the new provisions of the 2009 legislation. An official noted that initially, hiring proved challenging because the 2009 legislation did not authorize funds for implementation. As a result, Labor paid for new hires through the agency’s general management funds. Most new staff members were hired in July 2009, approximately 2 months after the law took effect. During our review of TAA data and petition case files, we discovered that Labor mislabeled the basis for several certifications in its records, suggesting that data reported to Congress may contain inaccuracies. These errors were likely caused by the high volume of petitions that required processing, staff shortages and turnover, and gaps in internal controls. Moreover, as described earlier, the number of categories by which petitions could be certified more than doubled after the 2009 legislation. Labor told us that investigators’ unfamiliarity with these new categories may have also contributed to errors. In one instance, Labor certified a petition based on imports of goods, but the staff member who entered this information into the information technology system inaccurately recorded the eligibility category as imports of services. In another case, Labor officials acknowledged that a certification based on imports of goods was improperly documented as imports of services in the petition case file itself. Among other gaps in internal controls, we found that a single staff member was responsible for recording the reason for each certification in Labor’s information technology system. 
The errors we found do not necessarily indicate that petitions were wrongly determined, and we did not examine whether any individual determinations were correct. However, the errors indicate that some petitions were mislabeled after they were certified. Agency officials acknowledged that both types of errors occurred and told us that they were most likely to occur in petitions filed during the first year of the 2009 program, a period in which approximately 4,000 petitions were processed. Labor took steps to address its implementation challenges, including roughly doubling its staff. In the months after the 2009 legislation took effect in May 2009, Labor hired approximately 30 new staff, some on a permanent basis and others as temporary hires. Although most new staff members were hired in July 2009, agency officials estimate that it takes approximately 6 months to fully train a new investigator. As a result, officials said the Office of Trade Adjustment Assistance reached its peak operating capacity in January 2010, approximately 8 months after the 2009 legislation took effect. Labor’s efforts to increase staff were hampered by frequent employee turnover. According to Labor officials, many staff hired on a temporary basis left the agency when they found permanent positions elsewhere, diminishing Labor’s overall capacity to process petitions. In tandem with its efforts to increase capacity, Labor took steps to enhance its internal controls by adding quality controls to its petition investigation process. As shown in figure 4, Labor incorporated these controls over approximately 2 years. In December 2009, for example, Labor began requiring a senior investigator to review each petition case file before and after the determination was reached to ensure the file included appropriate documentation. Previously, petitions were subject to a single review by the certifying officer. 
Second, in May 2010, the agency created a checklist that specified standard operating procedures and accuracy checks for investigations. The final version of this checklist, established in the spring of 2011, has specific targets for data entry accuracy, timeliness of investigations, customer outreach, and more. Finally, in the fall of 2011, Labor began quarterly tests to gauge how often these targets were reached. In the first quarter that tests were conducted, Labor told us that investigators met the quality control targets 87 percent of the time on average, slightly below the agency’s internal goal of 90 percent. It took some time for the benefits of increased staffing and improved quality controls to take effect. By the end of fiscal year 2010, petition processing times had fallen substantially, although by this time the number of petitions Labor received had declined. Moreover, in June 2012, we reviewed Labor’s petition investigation process and found that it generally conformed to best practices for internal controls. Further, in September 2012, Labor conducted an internal audit to determine how often the basis for a certification was improperly recorded in either the petition case file or the agency’s information technology system. This review covered the period from May 18, 2009, until May 31, 2010, when Labor introduced additional quality control steps. Through an audit of 351 randomly selected petitions, Labor estimated the error rate to be 1.4 percent, with a margin of error of plus or minus 5 percent. According to Labor, this audit suggests that errors were more likely to be present in the information technology system than in the petition case file itself. Labor concluded that this low percentage of error had a minimal impact on the petition data reported in its 2010 annual report to Congress. 
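To illustrate how an estimate like Labor's is produced from a random sample of case files, the sketch below computes a sample error rate and a standard normal-approximation margin of error. The error count of 5, the 95 percent confidence level, and the formula are assumptions for this example; Labor's report does not describe its methodology, and a simple normal approximation yields a tighter margin than the plus-or-minus 5 percent Labor reported, suggesting Labor used a different or more conservative method.

```python
import math

def sample_error_rate(errors_found, sample_size):
    """Point estimate of the error rate from a simple random sample."""
    return errors_found / sample_size

def normal_margin_of_error(p_hat, sample_size, z=1.96):
    """Normal-approximation margin of error (z = 1.96 for ~95% confidence,
    an assumption here; Labor's actual method is not described)."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / sample_size)

# Labor audited 351 randomly selected petitions. A hypothetical count of
# 5 errors would give roughly the 1.4 percent rate Labor reported.
p_hat = sample_error_rate(5, 351)         # about 0.014
moe = normal_margin_of_error(p_hat, 351)  # about 0.012, i.e., ±1.2 points
```

The small absolute margin reflects the low estimated rate: with few errors in the sample, the estimate of p(1 - p)/n is small even for a moderate sample size.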
Labor said it has corrected all errors identified in its audit, and as part of new quality control procedures, has established a more frequent internal audit system that will identify and correct such errors throughout each quarterly reporting cycle. Participants benefited from nearly all of the 2009 legislative changes, some of which also helped administrators better serve the participants, according to the state officials we interviewed. For example, the expanded eligibility for workers, such as for those in the service sector, benefited participants by providing access to program benefits for trade-affected workers under a wider array of circumstances, such as call center employees whose jobs were moved overseas. Figures 5 and 6 summarize the views of officials in the six states we examined. Both participants and administrators benefited from a simplified and extended training enrollment deadline—which must be met to qualify for TAA-based income support—according to officials from all six states. Previously, eligible workers had to enroll in training within 8 weeks of their petition's certification or 16 weeks of their separation, whichever was later. The 2009 legislation extended the training enrollment deadline to 26 weeks after the later of certification or separation. An official from one state told us that the new extended deadline was easier for eligible workers to understand since the period of time within which individuals had to enroll in training was the same, regardless of whether that period began at the date of separation or certification. According to several officials we interviewed, the extended deadline allowed participants to more fully consider their employment and training options, and therefore facilitated better decision making. The longer enrollment period also positively affected administrators.
Some state officials noted that the extension provided case managers with more time to assess participants' skills and abilities and advise them on employment and training options. In addition to extending time frames for participants, the 2009 legislation provided dedicated funding to states for case management and employment services, which indirectly benefited participants, according to several state officials. Previously, states did not receive funds for case management and employment services, and so resources from other programs were often used to support TAA participants. Several state officials said that dedication of these funds allowed case managers to better serve participants. Generally, these funds were used to pay the salaries of TAA case managers. In some cases, this built capacity, such as when the funds were used to hire new TAA staff who provided these services. In other cases, the TAA funds replaced funding from other sources, for example, when services were provided through the Workforce Investment Act, according to several state officials. Officials from several states said that the dedication of these funds reduced the financial burden the TAA program had previously placed on other workforce programs. Under a rule Labor published on April 12, 2010, states were required, no later than February 12, 2011, to use state government employees covered by a merit system of personnel administration to perform TAA-funded functions undertaken to carry out TAA provisions. 75 Fed. Reg. 16,988 (April 12, 2010) (codified at 20 C.F.R. § 618.890). (The Omnibus Trade Act of 2010 extended the initial regulatory deadline of December 15, 2010, to February 12, 2011. Pub. L. No. 111-344, § 102, 124 Stat. 3614.) As a result, officials from one state told us that they calculated exactly how many TAA staff they could support with the TAA funding and then used funds from other sources to pay non-merit staff providing case management.
The 2009 legislation also provided authority to waive deadlines for TAA-based income support and enrollment in training. Similarly, it provided an exception to the training enrollment deadline in cases where an eligible worker missed the deadline because he or she was not given timely notification of the deadlines. Both participants and administrators benefited from these changes, according to the officials from five of the six states we interviewed. Officials from one of these states told us that the waivers reduced the administrative burden of processing appeals from eligible workers who missed the enrollment deadline. Further, several of the changes made by the 2009 legislation benefited participants who enrolled in training, according to most of the officials we interviewed, including the possibility of receiving income support for longer than previously available; the option to start training while threatened with job loss (prior to actually losing their jobs); the flexibility to attend training on a part-time basis; and an increase in the amount of training funds available. According to several officials, the additional 26 weeks of potential income support while in training allowed program participants to consider longer-term training options, such as health care, a high-demand profession. In addition, officials said that since participants often drop out of training after income support expires, this change bolstered training program completion. Some officials also said that in some cases, the flexibility to attend training part-time may have contributed to higher training completion rates. For example, some full-time training participants who gained employment before their training program ended opted to finish their training part-time. Officials said that without the part-time option, such participants would have likely dropped out of training altogether. Officials from five states said the shift allowing part-time training had a neutral effect on administration.
However, officials from one state attributed their state's relatively low part-time enrollment rates to the requirement that TAA-based income support is contingent upon full-time enrollment in training. Moreover, according to state officials, the increase in available training funds from $220 million to $575 million per fiscal year benefited participants. In one state, officials said that access to additional funds allowed the state to increase its statewide caps on training program costs, which helped them keep pace with higher education institutions' rising tuitions. Officials in another state said that receiving these additional funds allowed them to train all eligible participants rather than putting some on waiting lists for training. Further, a few state officials noted that the increased funds for training enabled them to serve an increased volume of participants. As shown in figure 7, five of the six selected states expended all of their fiscal year 2009 training funds—the only 3-year spending period that has expired. Thus far, these states have drawn down, on average, 76 percent of the training funds allocated to them for fiscal year 2010. TAA provides participants with a variety of benefits and services—some were used more than others. As of September 30, 2011, 107,896 participants received services under the 2009 TAA program. As shown in figure 9, the majority of these participants were male and most were white. Nearly half the participants were age 50 or older and nearly two-thirds had a high school education or less. All 107,896 participants who received services under the 2009 TAA program received case management and employment services and nearly half enrolled in training. Most of the participants who enrolled in training had only one training activity, but some enrolled in two or three training activities (see fig. 10).
Participants can receive different types of training, but occupational skills training—training in specific occupations typically provided in a classroom setting—was the most common type of training provided (see fig. 11). In addition to occupational training, participants received other types of training, such as remedial training, which includes adult basic education and English as a Second Language. These types of training were provided less frequently than occupational training. Participants in the 2009 TAA program received training in a variety of occupational fields, most commonly related to computers, health, and production occupations (see table 1). As of the end of fiscal year 2011, approximately half of the 2009 program participants who had enrolled in training were still in a training activity. For those 24,568 participants who completed or withdrew from training, the average amount of time spent in training was approximately 43 weeks. As shown in figure 12, nearly one-third of these participants spent between a half year and a full year in training. While approximately 50,000 participants enrolled in training under the 2009 program, fewer participants took advantage of several benefits and services that were added to, or expanded under, the 2009 program. For example, the 2009 legislation added part-time training and pre-layoff training for adversely affected incumbent workers. The legislation also increased the job search and relocation allowances and modified the program providing wage supplements for older workers. As shown in table 2, fewer than 8 percent of the participants who received benefits under the 2009 program used each of these benefits. The wage supplement for older workers and the job search and relocation allowances are benefits that have not been widely utilized in the past. For example, we previously reported that fewer than 3,500 workers had utilized this benefit each year between 2004 and 2006. 
Similarly, not many participants have typically received job search and relocation allowances. For example, the Congressional Research Service reported that fewer than 500 workers received job search allowances each year between fiscal years 2006 and 2008, while fewer than 800 received relocation allowances during those years. While not used extensively, about 13 percent of the 5,521 older workers who participated in the wage supplement program also enrolled in training—a benefit available to eligible older workers participating in the 2009 program. Under the 2002 program, workers who participated in the wage supplement program for older workers were not eligible to receive training. HCTC was another benefit that was enhanced under the 2009 legislation, which increased the tax credit covering monthly health insurance premiums from 65 percent to 80 percent. Based on our prior work on HCTC, we found that participation in the program initially increased after the 2009 legislation. Because HCTC is administered by the IRS, Labor does not collect information on how many TAA participants used this benefit. However, we reported in 2010 that during the 6 months after key changes in the 2009 legislation took effect, the average monthly participation rate for TAA individuals was about 10,000. This represented an increase in participation compared to the 6 months prior to the passage of the legislation. Under the 2009 TAA program, participants could receive up to 130 weeks of income support, plus an additional 26 weeks if they were also enrolled in remedial or prerequisite education. In total, the number of weeks for which participants could receive income support increased by 26 weeks. Because TAA income support begins only after participants exhaust their unemployment insurance benefits, which have been extended in many cases, the number of participants receiving TAA income support and the average duration of TAA income support may increase substantially as those extensions expire.
Little is yet known about the outcomes achieved by participants in the 2009 program largely because nearly two-thirds of the participants were still enrolled in the program as of September 30, 2011. States are not required to begin tracking employment outcomes for participants until they exit the program. Of the 107,896 participants enrolled in the 2009 program, approximately 66 percent had not exited the program as of September 30, 2011. The one-third of participants who exited the program spent an average of about 37 weeks in the program. Approximately 76 percent of those exiting were in the program for 1 year or less (see fig. 13). In addition, little is known about participants' outcomes because the information needed to assess these outcomes was not yet available. For the approximately 36,000 participants who had exited the program as of September 30, 2011, information to calculate entered employment rates, employment retention rates, and average earnings was not yet available in many cases. For example, the entered employment rate is based on the number of participants who were employed 6 months after exiting the program. Yet, as of September 30, 2011, states had reported the 6-month employment status on only about 60 percent of the participants who had exited the program. Similarly, states reported the earnings information needed to calculate the average earnings performance measure for only about a third of the approximately 13,000 participants who would have been included in the calculation. As a result, few of the participants in the 2009 program would have been included in calculating TAA performance outcomes through fiscal year 2011 (see fig. 14). Incomplete outcome data for TAA participants is a longstanding issue. The primary data source for outcome information is Unemployment Insurance wage records. As we have previously reported, these wage records provide a common yardstick for assessing performance across states but suffer from time delays.
We reported on these delays in 2006, noting that most of the outcome data reported in a given program year actually reflect participants who left the program up to 2 years earlier. Another factor contributing to the unavailability of outcome data for the 2009 program participants at the time we analyzed the data is that the 2009 legislation required states to report on job retention and earnings for a year after the participant exits the program—an additional 3 months beyond what states had previously reported. Labor officials stated that they were aware of the lack of outcome information being reported and are requiring states to submit updated outcome information by September 2012. Even when employment and earnings information becomes available, more information will be needed to assess the effectiveness of the changes made by the 2009 legislation. First, Labor uses information on employment rates and earnings to compare the TAA program to national program goals, but the information is reported on a fiscal year basis and combines data for participants under the 2002 and 2009 programs. Therefore, these reports will not provide a complete or separate picture of outcomes for 2009 program participants. However, Labor officials stated that their annual report for fiscal year 2012 would primarily consist of 2009 participants. Second, a program’s effectiveness cannot be determined solely by outcomes because they cannot show whether an outcome is a direct result of program participation or whether it is a result of other influences, such as the state of the local economy. Labor officials told us they have no plans to conduct an impact evaluation of the 2009 program since the program is no longer in effect. However, Labor is conducting a 5-year evaluation study of the 2002 TAA program, which is expected to be completed by November 2012. 
The study will address the operation and impacts of the program after the passage of the Trade Adjustment Assistance Reform Act of 2002 and will include an impact study on participants' employment-related outcomes, overall and for key worker subgroups, and a benefit-cost analysis. The 2009 TAA legislation made extensive changes to the TAA for Workers program, benefiting program participants—training funds were more than doubled, new benefits were added, eligibility was broadened, and existing benefits were enhanced. This contributed to a substantial increase in the number of petitions immediately following implementation of the changes in May 2009. Yet, when confronted with the initial surge in petition volume and faced with pressure to process these petitions quickly, Labor made some errors in recording the reasons why petitions were certified. Since that time, Labor has enhanced its quality controls for investigating petitions and determined that the data errors we found were not widespread. In addition, because most participants were still enrolled in the program at the time of our review, sufficient information was not available to determine whether the program changes contributed to better performance outcomes. However, even when outcome data become available, it will be very difficult to isolate the effect of the 2009 legislative changes because the results cannot differentiate the effects of program participation from those of other outside factors, including the overall state of the economy. While Labor plans to release the results of its 5-year evaluation study of the 2002 program later this year, it will not include a definitive determination of the effectiveness of the substantial changes made by the 2009 legislation. Further, the TAA program was modified again in October 2011, further complicating any future evaluation of the 2009 program. We provided officials from the Department of Labor a draft of this report for review and comment.
Labor provided written comments, which are reproduced in appendix IV, as well as technical comments, which we incorporated as appropriate. In its written comments, Labor generally agreed with our findings. Labor noted that the report validated its efforts to improve employment and retention outcomes for trade-affected workers, made possible by the expansion of benefits and services under the 2009 TAA program. We will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties and will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. A list of related GAO products is included at the end of this report. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix V. Our objectives were to determine: (1) what challenges Labor faced in implementing the 2009 legislation, (2) the effect selected state government officials say the 2009 legislative changes had on participants and on state and local administrators, and (3) the extent participants received TAA benefits and services as established by the 2009 legislation and what is known about employment outcomes. To address these objectives, we reviewed relevant federal legislation, regulations, and departmental guidance and procedures. We also interviewed Labor officials and state government officials in six states—Massachusetts, Michigan, North Carolina, Oregon, Pennsylvania, and Texas. We also interviewed selected local government officials in three of these states (Michigan, North Carolina, and Oregon). We obtained and reviewed Labor data on petitions, training fund expenditures, and participant activities. 
We conducted this performance audit from May 2011 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We selected these six states because they had a high fiscal year 2010 training fund allocation, a high volume of TAA certifications, and geographic diversity (see table 3). We also spoke with selected local officials in three states (see table 4). Through these interviews, we obtained state and local officials’ opinions on what effects key changes made by the 2009 legislation had on their administration of the program and on participants. We analyzed Labor’s data on petitions filed from fiscal years 2007 to 2011. We assessed the reliability of key data by interviewing Labor officials knowledgeable about the data, reviewing related documentation, manually and electronically testing the data, and assessing internal controls at Labor. During our manual testing of this petition data, we discovered that Labor had made errors in recording the reasons why several petitions were certified, although the results of our review are not generalizable. We brought this issue to the attention of Labor officials. Because we did not know the extent of these errors during the period of our review, we did not include information on certification categories in this report. In late September 2012, Labor provided us with the results of its internal audit, which indicated that these data were reliable. Moreover, we determined that information regarding the number of petitions filed, the dates petitions were received by Labor, and the dates Labor issued determinations were sufficiently reliable for the purposes of this report.
We also assessed what internal controls were present in Labor’s petition investigation process as of June 2012. We compared Labor’s written procedures with GAO-published standards for internal controls and conducted an onsite review of seven petitions to assess whether Labor followed its written procedures when conducting investigations. We selected petitions filed from May 2009 to February 2011. The results of this review are nongeneralizable and are used only for illustrative purposes. The selected petitions were diverse with respect to the month/year petitions were received; whether petitions were certified or denied; whether petitions represented manufacturing or service sector workers; and other factors, such as the reason for the layoff (i.e., a shift in production overseas versus an increase in imports). We analyzed Labor’s data on TAA training fund expenditures for fiscal years 2009 through 2011, with data current through the second quarter of fiscal year 2011 (March 31, 2011). These data included expenditures by state for training, administration (inclusive of employment/case management), job search and relocation, income support, and the wage supplement program for older workers. We assessed the reliability of these data by electronically testing for errors and by interviewing knowledgeable agency officials. Further, we compared these expenditure data with fund allocation data published in Labor’s annual reports to Congress. Overall, we found that the data were sufficiently reliable for the purposes of this report. We analyzed Labor’s participant data file containing data elements on characteristics, activities, and outcomes for TAA participants. We conducted our analyses on those participants who were covered by petitions filed between May 18, 2009 and February 14, 2011—the dates covered by the 2009 legislative changes.
We assessed the reliability of these data by interviewing Labor officials about the internal controls in place to assure the quality of data reported by states and reviewed the edit checks Labor established to identify inconsistencies and data errors. We also performed electronic testing of individual data elements to remove duplicate entries and ensure that the data being entered were consistent with instructions provided by Labor to the states. We determined that information related to participant characteristics and activities was sufficiently reliable to be used in the report. However, our testing of the outcome data surfaced issues with information being reported on employment status and earnings for participants who had exited the program. Specifically, we found that the employment status and earnings information for many participants who had exited the program was not identified. We believe that reporting outcomes would be misleading when two-thirds of the participants in the 2009 program were still enrolled as of September 30, 2011, and outcome information for many participants who had exited the program was not yet available. As a result, we did not include entered employment rates, employment retention rates, and average earnings in this report.
[Residue of appendix tables: a list of reasons training waivers may be issued (e.g., the worker cannot participate in training due to a health condition; the remaining reasons are truncated in the source) and a comparison of the wage supplement for older workers across the 2002, 2009, and 2011 provisions. Recoverable details: under the 2002 provisions, the supplement was available only to workers earning less than $50,000 per year in reemployment, offered a maximum benefit of $10,000 over a period of up to 2 years (104 weeks), and required full-time employment within 26 weeks of separation; the 2009 provisions raised the limits to $55,000 and $12,000 over the same period, let workers participate in TAA-approved training and receive employment and case management services, allowed part-time employment if enrolled in training, and eliminated the deadline for reemployment; the 2011 provisions returned the earnings and benefit limits to $50,000 and $10,000. The deadline to submit a report to the Senate Finance and House Ways and Means Committees was extended to February 15.]
The 2011 legislation required Labor, with regard to petitions filed between February 13, 2011, and October 21, 2011, to consider petitions and automatically reconsider denied petitions using the 2011 eligibility provisions. Although the Omnibus Trade Act of 2010 extended the effective date of the expiration of the 2009 amendments to February 12, 2011, Labor interpreted this to mean petitions filed on or before 11:59 PM EST on Monday, February 14, 2011, the next business day after February 12, which was a Saturday. Suppliers produce and supply component parts directly to other firms, which produced articles that were the basis for a TAA certification. Downstream producers perform additional, value-added production processes for firms producing articles that were the basis for a TAA certification. If a worker’s firm is a supplier, and component parts it supplies to the primary firm accounted for at least 20 percent of production or sales of the worker’s firm, then the loss of business from the primary firm by the worker’s firm is not required to have contributed importantly to the separation or threatened separation. See third statement in table note c. The training fund amount was $143,750,000 for October 1, 2010 to December 31, 2010.
The training fund amount will be $143,750,000 for October 1, 2013 to December 31, 2013. In addition to the contacts named above, Laura Heald, Assistant Director; Kathryn O’Dea, Ellen Ramachandran, and Wayne Sylvia made key contributions to this report. Also contributing to this report were James Bennett, Jessica Botsford, Susannah Compton, Daniel Concepcion, Kathy Leslie, Jean McSween, and Vanessa Taylor.
Trade Adjustment Assistance: States Have Fewer Training Funds Available than Labor Estimates When Both Expenditures and Obligations Are Considered. GAO-08-165. Washington, D.C.: November 2, 2007.
Trade Adjustment Assistance: Industry Certification Would Likely Make More Workers Eligible, but Design and Implementation Challenges Exist. GAO-07-919. Washington, D.C.: June 29, 2007.
Trade Adjustment Assistance: Changes Needed to Improve States’ Ability to Provide Benefits and Services to Trade-Affected Workers. GAO-07-995T. Washington, D.C.: June 14, 2007.
Trade Adjustment Assistance: Program Provides an Array of Benefits and Services to Trade-Affected Workers. GAO-07-994T. Washington, D.C.: June 14, 2007.
Trade Adjustment Assistance: Changes to Funding Allocation and Eligibility Requirements Could Enhance States’ Ability to Provide Benefits and Services. GAO-07-701, GAO-07-702. Washington, D.C.: May 31, 2007.
Trade Adjustment Assistance: Labor Should Take Action to Ensure Performance Data Are Complete, Accurate, and Accessible. GAO-06-496. Washington, D.C.: April 25, 2006.
Trade Adjustment Assistance: Most Workers in Five Layoffs Received Services, but Better Outreach Needed on New Benefits. GAO-06-43. Washington, D.C.: January 31, 2006.
Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004.
While international trade has benefited Americans in a number of ways, it has also contributed to layoffs in a range of industries. To assist trade-displaced workers, Labor administers the TAA for Workers program, which provides income support, job training, and other benefits. The Trade and Globalization Adjustment Assistance Act of 2009, enacted as part of the American Recovery and Reinvestment Act, made substantial changes to the TAA program, such as extending eligibility to workers in the service sector and increasing benefit levels. The Act also required GAO to report on the operation and effectiveness of those changes. Specifically, GAO examined (1) the challenges Labor faced in implementing the 2009 legislation, (2) selected state officials' assessment of the 2009 legislation's effect on participants and state and local administrators, and (3) the extent to which participants received program benefits and services established by the 2009 legislation and achieved employment outcomes. GAO interviewed officials at Labor and in six states, selected for having a high level of TAA activity and geographic diversity. GAO also reviewed Labor's internal controls for investigating petitions, which are filed on behalf of workers and are the starting point for determining their TAA eligibility. GAO analyzed participant data on specific benefits and services received and employment outcomes, as available. The Department of Labor (Labor) was challenged to process the substantial increase in petitions filed for the Trade Adjustment Assistance (TAA) for Workers program after related legislation was enacted in 2009. Labor initially had insufficient capacity to handle this increased workload, leading to processing delays and data recording errors. For example, in the quarter after the 2009 legislation took effect, Labor took an average of 153 days to process a petition—nearly four times the statutory limit.
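The "nearly four times the statutory limit" figure is consistent with a 40-day statutory deadline for petition determinations; the 40-day value below is an assumption inferred from the ratio in the text, not stated in it:

```python
# Check the "nearly four times the statutory limit" claim.
# The 40-day statutory deadline is an assumption inferred from
# the text's ratio, not stated in the text itself.
AVERAGE_PROCESSING_DAYS = 153
STATUTORY_LIMIT_DAYS = 40  # assumed

ratio = AVERAGE_PROCESSING_DAYS / STATUTORY_LIMIT_DAYS
print(f"{ratio:.2f}x the statutory limit")  # ~3.8x, i.e. nearly four times
```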
Labor responded with corrective action, including hiring new staff and adding additional quality control steps for processing petitions. Partly as a result of these efforts, processing times fell substantially. Moreover, GAO found that Labor's petition investigation process, as of June 2012, generally conformed to best practices for internal controls. According to selected state officials, virtually all of the 2009 changes benefited participants, and some also helped administrators serve participants. Officials in all six states GAO interviewed expressed the view that both participants and administrators benefited from the simplified and extended training enrollment deadline. Some officials said the new deadline was easier for eligible workers to understand and provided administrators with more time to advise participants on their training and employment options. Moreover, officials said participants who enrolled in training benefited from other program changes, including increased training funds, the option to attend training part-time, and a longer period for income support. Some state officials said that the additional weeks of income support allowed participants to consider longer-term training options, such as health care programs. Over 107,000 participants received benefits and services as established by the 2009 law, but little is yet known about their employment outcomes. Nationally, all the participants received case management and reemployment services and about half enrolled in training, most commonly occupational skills training. Less than 8 percent of participants used other benefits. Little is known about employment outcomes because nearly two-thirds of the participants were still enrolled as of September 30, 2011, and employment and earnings information was often not available for those who had exited the program. 
While this information will eventually be available, other factors, including the overall state of the economy, affect these outcomes so isolating the effects of the 2009 legislative changes would be difficult. GAO is not making recommendations in this report. Labor generally agreed with the report's findings.
We have previously reported about the challenges of protecting the U.S. civil aviation system from terrorists’ attacks, the potential extent of terrorists’ motivation and capabilities, and the attractiveness of aviation as a target for terrorists. Until the early 1990s, the threat of terrorism was considered far greater overseas than in the United States. However, the threat of international terrorism within the United States has increased. Events such as the World Trade Center bombing have revealed that the terrorists’ threat in the United States is more serious and extensive than previously believed. Terrorists’ activities are continually evolving and present unique challenges to FAA and law enforcement agencies. We reported in March 1996 that the bombing of Philippine Airlines flight 434 in December 1994 illustrated the potential extent of terrorists’ motivation and capabilities as well as the attractiveness of aviation as a target for terrorists. According to information that was accidentally uncovered in January 1995, this bombing was a rehearsal for multiple attacks on specific U.S. flights in Asia. Even though FAA has increased security procedures as the threat has increased, the domestic and international aviation system continues to have numerous vulnerabilities. According to information provided by the intelligence community, FAA makes judgments about the threat and decides which procedures would best address the threat. The airlines and airports are responsible for implementing the procedures and paying for them. For example, the airlines are responsible for screening passengers and property, and the airports are responsible for the security of the airport environment. FAA and the aviation community rely on a multifaceted approach that includes information from various intelligence and law enforcement agencies, contingency plans to meet a variety of threat levels, and the use of screening equipment, such as conventional X-ray devices and metal detectors. 
For flights within the United States, basic security measures include the use of walk-through metal detectors for passengers and X-ray screening of carry-on baggage—measures that were primarily designed to avert hijackings during the 1970s and 1980s, as opposed to the more current threat of attacks by terrorists that involve explosive devices. These measures are augmented by additional procedures that are based on an assessment of risk. Among these procedures are passenger profiling and passenger-bag matching. Because the threat of terrorism had previously been considered greater overseas, FAA mandated more stringent security measures for international flights. Currently, for all international flights, FAA requires U.S. carriers, at a minimum, to implement the International Civil Aviation Organization’s standards that include the inspection of carry-on bags and passenger-bag matching. FAA also requires additional, more stringent measures—including interviewing passengers that meet certain criteria, screening every checked bag, and screening carry-on baggage—at all airports in Europe and the Middle East and many airports elsewhere. In the aftermath of the 1988 bombing of Pan Am flight 103, a Presidential Commission on Aviation Security and Terrorism was established to examine the nation’s aviation security system. This commission reported that the system was seriously flawed and failed to provide the flying public with adequate protection. FAA’s security reviews, audits prepared by the Department of Transportation’s Office of the Inspector General, and work we have conducted show that the system continues to be flawed. Providing effective security is a complex problem because of the size of the U.S. aviation system, the differences among airlines and airports, and the unpredictable nature of terrorism. 
In our previous reports and testimonies on aviation security, we highlighted a number of vulnerabilities in the overall security system, such as checked and carry-on baggage, mail, and cargo. We also raised concerns about unauthorized individuals gaining access to critical parts of an airport and the potential use of sophisticated weapons, such as surface-to-air missiles, against commercial aircraft. According to FAA officials, more recent concerns include smuggling bombs aboard aircraft in carry-on bags and on passengers themselves. Specific information on the vulnerabilities of the nation’s aviation security system is classified and cannot be detailed here, but we can provide you with unclassified information. Nearly every major aspect of the system—ranging from the screening of passengers, checked and carry-on baggage, mail, and cargo to access to secured areas within airports and aircraft—has weaknesses that terrorists could exploit. FAA believes that the greatest threat to aviation is explosives placed in checked baggage. For those bags that are screened, we reported in March 1996 that conventional X-ray screening systems (comprising the machine and operator who interprets the image on the X-ray screen) have performance limitations and offer little protection against a moderately sophisticated explosive device. In our August 1996 classified report, we provided details on the detection rates of current systems as measured by numerous FAA tests that have been conducted over the last several years. In 1993, the Department of Transportation’s Office of the Inspector General also reported weaknesses in security measures dealing with (1) access to restricted airport areas by unauthorized persons and (2) carry-on baggage. A follow-on review in 1996 indicated that these weaknesses persist and have not significantly improved. New explosives detection technology will play an important part in improving security, but it is not a panacea.
In response to the Aviation Security Improvement Act of 1990, FAA accelerated its efforts to develop explosives detection technology. A number of devices are now commercially available to address some vulnerabilities. Since fiscal year 1991, FAA has invested over $150 million in developing technologies specifically designed to detect concealed explosives. (See table 1.) FAA relies primarily on contracts and grants with private companies and research institutions to develop these technologies and engages in some limited in-house research. The act specifically directed FAA to develop and deploy explosives detection systems by November 1993. However, this goal has not been met. Since fiscal year 1991, these expenditures have funded approximately 85 projects for developing new explosives detection technology. Currently, FAA has 40 active development projects. Of these, 19 projects are developing explosives detection prototype systems. The remaining 21 projects are conducting basic research or developing components for use in explosives detection systems. In September 1993, FAA published a certification standard that explosives detection systems for checked bags must meet before they are deployed. The standard is classified and sets certain minimum performance criteria. To minimize human error, the standard also requires that the devices automatically sound an alarm when explosives are suspected; this feature is in contrast to currently used conventional X-ray devices, whereby the operator has to look at the X-ray screen for each bag to determine whether it contains a threat. In 1994, we reported that FAA had made little progress in meeting the law’s requirement for deploying explosives detection systems because of technical problems, such as slow baggage processing. As of today, one system has passed FAA’s certification standard and is being operationally tested by U.S. airlines at two U.S. airports and one foreign location.
Explosives detection devices can substantially improve the airlines’ ability to detect concealed explosives before they are brought aboard aircraft. While most of these technologies are still in development, a number of devices are now commercially available. However, none of the commercially available devices are without limitations. On the basis of our analysis, we have three overall observations on detection technologies. First, these devices vary in their ability to detect the types, quantities, and shapes of explosives. Second, explosives detection devices typically produce a number of false alarms that must be resolved either by human intervention or technical means. These false alarms occur because the devices use various technologies to identify characteristics, such as shapes, densities, and other properties, to indicate a potential explosive. Given the huge volume of passengers, bags, and cargo processed by the average major U.S. airport, even relatively modest false alarm rates could cause several hundred, or even thousands, of items per day to need additional scrutiny. Third, and most important, these devices ultimately depend upon human beings to resolve alarms. This activity can range from closer inspection of a computer image and a judgment call to a hand search of the item in question. The ultimate detection of explosives depends on extra steps being taken by security personnel—or their arriving at the correct judgment—to determine whether an explosive is present. Because many of the devices’ alarms signify only the potential for explosives being present, the true detection of explosives requires human intervention. The higher the false alarm rate, the greater the system’s need to rely on human judgment. As we noted in our previous reports, this reliance could be a weak link in the explosives detection process. In addition, relying on human judgments has implications for the selection and training of operators for new equipment.
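The false-alarm arithmetic above can be made concrete with a small sketch; the daily bag volume and the alarm rates below are illustrative assumptions, not FAA figures:

```python
# Illustrative estimate of the daily alarm-resolution workload created
# by false alarms. Bag volume and rates are assumptions, not FAA data.

def items_needing_scrutiny(bags_per_day: int, false_alarm_rate: float) -> int:
    """Expected number of flagged items screeners must resolve per day."""
    return round(bags_per_day * false_alarm_rate)

# Assume a major airport screens 50,000 checked bags per day.
for rate in (0.01, 0.05, 0.10):
    flagged = items_needing_scrutiny(50_000, rate)
    print(f"false-alarm rate {rate:.0%}: {flagged:,} items/day")
```

Even at a 1 percent rate, hundreds of items per day would require human or technical resolution, which is why the report treats alarm resolution as the weak link.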
Despite the limitations of the currently available technology, some countries have already deployed some explosives detection equipment because of differences in their perception of the threat and their approaches to counter the threat. The Gore Commission recommends that $161 million in federal funds be used to deploy some of these devices. It has also recommended that decisions about deploying equipment be based on vulnerability assessments of the nation’s 450 largest airports. A number of explosives detection devices are currently available or under development to determine whether explosives are present in checked and carry-on baggage or on passengers, but they are costly. FAA is still developing systems to screen cargo and mail at airports. Four explosives detection devices with automatic alarms are commercially available for checked bags, but only one has met FAA’s certification standard—the CTX-5000. FAA’s preliminary estimates are that the one-time acquisition and installation costs of the certified system for the 75 busiest airports in the United States could range from $400 million to $2.2 billion, depending on the number of machines installed. These estimates do not include operating costs. The four devices rely on three different technologies. The CTX-5000 is a computerized tomography device, which is based on advances made in the medical field. It has the best overall detection ability but is relatively slow in processing bags and has the highest price. To meet FAA’s standard for processing bags, two devices are required, which would cost approximately $2 million for a screening station. This system was certified by FAA in December 1994. Two other advanced X-ray devices have lower detection capability but are faster at processing baggage and cheaper—costing approximately $350,000 to $400,000 each. The last device uses electromagnetic radiation. 
It offers chemical-specific detection capabilities but only for some of the explosives specified in FAA’s standard. The current price is about $340,000 each. FAA is funding the development of next-generation devices based on computerized tomography, which is currently used in the CTX-5000. These devices are being designed to meet FAA’s standard for detecting explosives at faster processing speeds; the target price is about $500,000 each, and they could be available by early 1998. Advanced X-ray devices with improved capabilities are also being developed. Explosives detection devices are commercially available for screening carry-on bags, electronics, and other items but not yet for screening bottles or containers that could hold liquid explosives. Devices for liquids, however, may be commercially available within a few years. Carry-on bags and electronics. At least five manufacturers sell devices that can detect the residue or vapor from explosives on the exterior of carry-on bags and on electronic items, such as computers or radios. These devices, also known as “sniffers,” are commonly referred to as “trace” detectors and range in price from about $30,000 to $170,000 each. They have very specific detection capabilities as well as low false alarm rates. One drawback to trace devices, among others, is nuisance alarms. The alarms on these devices could be activated by persons who have legitimate reasons for handling explosive substances, such as military personnel. An electromagnetic device is also available that offers a high probability of chemical-specific detection but only for some explosives. The price is about $65,000. Detecting liquid explosives. FAA is developing two different electromagnetic systems for screening bottles and other containers. A development issue is processing speed. These devices may be available within 2 years. The cost is projected to be between $25,000 and $125,000 each. 
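As a rough sanity check on the checked-baggage cost range quoted earlier ($400 million to $2.2 billion for the 75 busiest airports, at roughly $2 million per two-device CTX-5000 screening station), the range implies on the order of a few hundred to about a thousand stations. Dividing total cost by per-station cost is our simplification; it ignores site-specific installation and operating costs:

```python
# Infer the number of two-device screening stations implied by the
# quoted one-time cost range. Total-cost / per-station-cost is a
# simplification that ignores installation and operating costs.
STATION_COST = 2_000_000                # ~$2 million per two-CTX-5000 station
LOW, HIGH = 400_000_000, 2_200_000_000  # quoted cost range for 75 airports

print(LOW // STATION_COST, "to", HIGH // STATION_COST, "stations")  # 200 to 1100
```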
Although a number of commercially available trace devices could be used on passengers if deemed necessary, passengers might find their physical intrusiveness unacceptable. In June 1996, the National Research Council, for example, reported that there may be a number of health, legal, operational, privacy, and convenience concerns about passenger-screening devices. FAA and other federal agencies are developing devices that passengers may find more acceptable. FAA estimates that the cost to provide about 3,000 of these devices to screen passengers would be about $1.9 billion. A number of trace devices in development will detect residue or vapor from explosives on passengers’ hands. Two devices screen either documents or tokens that have been handled by passengers. These devices should be available in 1997 or 1998 and sell for approximately $65,000 to $85,000 each. Another five devices under development use walk-through screening portals similar to current metal detectors. Three will use trace technology to detect particles and vapor from explosives on passengers’ clothing or in the air surrounding their bodies. Projected selling prices range from approximately $170,000 to $300,000. One of these devices will be tested at an airport in the latter part of 1996, and another device may undergo airport testing next year. Two other walk-through portals based on electromagnetic technology are in development. Rather than detecting particles or vapor, these devices will show images of items concealed under passengers’ clothing. Prices are projected to be approximately $100,000 to $200,000. Screening cargo and mail at airports is difficult because individual packages or pieces of mail are usually batched into larger shipments that are more difficult to screen. If cargo and mail shipments were broken down into smaller packages, some available technologies could be used. 
For example, the electromagnetic device available for checked baggage will be tested for screening cargo and mail at a U.S. airport. Although not yet commercially available, two different systems for detecting explosives in large containers are being developed by FAA and other federal agencies. Each system draws vapor and particle samples and uses trace technology to analyze them. One system is scheduled for testing in 1997. In addition, FAA is considering, for further development, three nuclear-based technologies originally planned for checked-bag screening for use on cargo and mail. These technologies use large, heavy apparatuses to generate gamma rays or neutrons to penetrate larger items. However, they require shielding for safety reasons. These technologies are not as far along and are still in the laboratory development stage rather than the prototype development stage. If fully developed, these devices could cost as much as $2 million to $5 million each. To reduce the effects of an in-flight explosion, FAA is conducting research on blast-resistant containers, which might reduce the number of expensive explosives detection systems needed. FAA’s tests have demonstrated that it is feasible to contain the effects—blast and fragments—of an internal explosion. However, because of their size, blast-resistant containers can be used only on wide-body aircraft that typically fly international routes. FAA is working with a joint industry-government consortium to address concerns about the cost, weight, and durability of the new containers and is planning to blast test several prototype containers later this year. Also this year, FAA will place about 20 of these containers into airline operations to assess, among other things, their durability and effect on airline operations. 
In addition to technology-based security, FAA has other methods that it uses, and can expand upon, to augment domestic aviation security or use in combination with technology to reduce the workload required by detection devices. The Gore Commission has recommended expanded use of bomb-sniffing dogs, profiling passengers to identify those needing additional attention, and matching passengers with their bags. Dogs are considered a unique type of trace detector because they can be trained to respond in specific ways to the smell of explosives. Dogs are currently being used at a number of U.S. airports. The Gore Commission has recommended that 114 additional teams of dogs and their handlers be deployed at a cost of about $9 million. On July 25, 1996, the President announced additional measures for international and domestic flights that include, among other things, stricter controls over checked baggage and cargo as well as additional inspections of aircraft. Two procedures that are routinely used on many international flights are passenger profiling and passenger-bag matching. FAA officials have said that profiling can reduce the number of passengers and bags that require additional security measures by as much as 80 percent. The Gore Commission has recommended several initiatives to promote an automated profiling system. In addition, to determine the best way to implement systemwide matching of passengers with their bags, the Gore Commission has recommended testing techniques at selected airports. Profiling and bag matching are unable to address certain types of threats. However, in the absence of sufficient or effective technology, these procedures are a valuable part of the overall security system. FAA has estimated that incorporating bag matching in everyday security measures could cost up to $2 billion in start-up costs and lost revenue. The direct costs to airlines include, among other things, equipment, staffing, and training. 
The airlines’ revenues and operations could be affected differently because the airlines currently have different capabilities to implement bag matching, different route structures, and different periods of time allotted between connecting flights. Addressing the vulnerabilities in the nation’s aviation security system is an urgent national issue. Although the Gore Commission made recommendations on September 9, no agreement currently exists among all the key players, namely, the Congress, the administration—specifically FAA and the intelligence community, among others—and the aviation industry, on the steps necessary to improve security in the short and long term to meet the threat. In addition, who will be responsible in the long term for paying for new security initiatives has not been addressed. While FAA has increased security at domestic airports on a temporary basis, FAA and Department of Transportation officials believe that more permanent changes are needed. Furthermore, the cost of these changes will be significant, may require changes in how airlines and airports operate, and will likely have an impact on the flying public. To achieve these permanent changes, three initiatives that are under way may assist in developing a consensus among all interested parties on the appropriate direction and response to meet the ever-increasing threat. On July 17, 1996, FAA established a joint government-industry working group under its Aviation Security Advisory Committee. The committee, composed of representatives from FAA, the National Security Council, the Central Intelligence Agency, the Federal Bureau of Investigation, the Departments of Defense and State, the Office of Management and Budget, and the aviation community, will (1) review the threat to aviation, (2) examine vulnerabilities, (3) develop options for improving security, (4) identify and analyze funding options, and (5) identify the legislative, executive, and regulatory actions needed. 
The goal is to provide the FAA Administrator with a final report by October 16, 1996. Any national policy issues would then be referred to the President by the FAA Administrator through the Secretary of Transportation. In recognition of the increased threat of terrorism in general, the President established a Commission on Critical Infrastructure Protection on July 15, 1996. Moreover, with respect to the specific threat against civil aviation, in the aftermath of the TWA flight 800 crash, the President established a commission headed by the Vice President on July 25, 1996, to review aviation safety, security, and the pace of modernization of the air traffic control system. The Gore Commission is working with the National Transportation Safety Board, the Departments of Transportation and Justice, aviation industry advisory groups, and concerned nongovernmental organizations. In our August 1, 1996, testimony before the Senate Committee on Commerce, Science, and Transportation, we emphasized the importance of informing the American public of and involving them in this effort. Furthermore, we recommended that the following steps be taken immediately: Conduct a comprehensive review of the safety and security of all major domestic and international airports and airlines to identify the strengths and weaknesses of their procedures to protect the traveling public. Identify vulnerabilities in the system. Establish priorities to address the system’s identified vulnerabilities. Develop a short-term approach with immediate actions to correct significant security weaknesses. Develop a long-term and comprehensive national strategy that combines new technology, procedures, and better training for security personnel. The Gore Commission was charged with reporting its initial findings on aviation security in 45 days, including plans (1) to deploy new technology to detect the most sophisticated explosives and (2) to pay for that technology. 
We are pleased that the Gore Commission’s September 9, 1996, report contains many recommendations similar to those we made. The commission recommended a budget amendment for fiscal year 1997 of about $430 million to implement some of the 20 recommendations made in the report. However, the commission stated that it did not settle the issue of how security costs will be financed in the long run. The commission will continue to review aviation safety, security, and air traffic control modernization over the next several months and is scheduled to issue its final report by February 1, 1997. Given the urgent need to improve aviation security and FAA’s less-than-effective history of addressing long-standing safety and security concerns, it will be important for the Congress to oversee the implementation of new security measures once they are agreed upon. Therefore, we believe that Congress should establish goals and performance measures and require periodic reports from FAA and other responsible federal agencies on the progress and effectiveness of efforts to improve aviation security. To sustain the Gore Commission’s momentum and its development of long-term actions to improve aviation security, we concluded in our classified August 1996 report that the Commission should be supported by staff composed of the best available government and industry experts on terrorism and civil aviation security. We recommended the following: In view of the short time frame for the Gore Commission and the expertise within FAA’s Aviation Security Advisory Committee’s working group, both groups should be melded together. The Vice President should make the necessary arrangements for detailing the working group to the Gore Commission. 
The Vice President should not restrict the purview of the working group’s proposed solutions to those within FAA’s jurisdiction but should also include proposed solutions within the intelligence and law enforcement communities’ purview that relate to aviation security. Because aviation security is an urgent issue, the President should report to the Congress, during the current congressional session, recommendations on statutory changes that may be required, including who should pay for additional security measures, whether aviation security should be considered a national security issue, and whether changes are needed in the requirement for FAA certification of explosives detection technology before mandating its deployment. We also stated that the Congress may wish to enact legislation during the current legislative session addressing such matters as who should pay for additional security, whether aviation security is a national security issue, and whether the requirement for FAA certification for explosive detection devices should be changed. In summary, Mr. Chairman, the threat of terrorism has been an international issue for some time and continues to be, as illustrated by events such as the bombing in Saudi Arabia of U.S. barracks. But other incidents, such as the bombings of the World Trade Center in New York and the federal building in Oklahoma City, have made terrorism a domestic as well as an international issue. Public concern about aviation safety, in particular, has already been heightened as a result of the ValuJet crash, and the recent TWA flight 800 crash, regardless of the cause, has increased that concern. If further incidents occur, public fear and anxiety will escalate, and the economic well-being of the aviation industry will suffer because of reductions in travel and the shipment of goods. 
Given the persistence of long-standing vulnerabilities and the increased threat to civil aviation, we believe corrective actions need to be undertaken immediately. These actions need a unified effort from the highest levels of the government to address this national issue. With three separate initiatives under way, the Vice President could be the focal point to build a consensus on the actions that need to be taken to address a number of these long-standing vulnerabilities. The Gore Commission’s September 9, 1996, report to the President provides opportunities for agreement on steps to improve security that could be taken in the short term.
Pursuant to a congressional request, GAO discussed federal efforts to protect civil aviation from terrorist acts. GAO noted that: (1) the Federal Aviation Administration (FAA) has increased aviation security procedures, but domestic and international aviation remain seriously vulnerable because nearly every major aspect of the aviation security system has weaknesses that terrorists could exploit; (2) since fiscal year 1991, FAA has invested over $153 million to develop explosives detection devices and a number of these devices are commercially available for checked and carry-on baggage, but all of these devices have some limitations; (3) there are also passenger-screening devices, but health, legal, operational, privacy and convenience concerns have been raised about these devices; (4) FAA is conducting research on blast-resistant cargo containers that could reduce the need for explosives detection devices; (5) the Presidential Commission on Aviation Security and Terrorism has recommended government purchase of some detectors for airport use, using bomb-sniffing dogs, matching passengers with their baggage, and profiling passengers; (6) Congress, the Administration, and the aviation industry need to agree and take action on the steps needed to counter terrorist threats and who will be responsible for funding new security initiatives; and (7) the government has three initiatives underway to address aviation security improvements.
Within INS, the Border Patrol is the agency responsible for securing the border between the ports of entry. The Border Patrol’s mission is to maintain control of the international boundaries between the ports of entry by detecting and preventing smuggling and illegal entry of aliens into the United States. In addition, in 1991, ONDCP designated the Border Patrol the primary agency for narcotics interdiction between the ports of entry. To accomplish its mission, the Border Patrol (1) patrols the international boundaries and (2) inspects passengers and vehicles at checkpoints located along highways leading from border areas, at bus and rail stations, and at air terminals. The Border Patrol uses vehicles and aircraft to patrol areas between the ports of entry and electronic equipment, such as sensors and low-light-level televisions, to detect illegal entry into the country. The Border Patrol carries out its mission in 21 sectors. Nine of these sectors are located along the southwest border with Mexico. As of September 30, 1994, about 3,747 agents were assigned to the 9 sectors, representing 88 percent of Border Patrol agents nationwide. The following other federal entities support land border control efforts between the ports of entry along the southwest border. El Paso Intelligence Center (EPIC), the nation’s principal tactical drug intelligence facility, prepares assessments on the threat of drug smuggling. Operation Alliance prepares border control strategies and coordinates drug enforcement activities of 17 federal and numerous state and local law enforcement agencies combating drug smuggling. Joint Task Force Six (JTF-6), located in El Paso, coordinates military support for drug enforcement efforts. 
In September 1991, ONDCP tasked Sandia National Laboratories, through INS, to do a “systematic analysis of the security along the United States/Mexico Border between the ports of entry and to recommend measures by which control of the border could be improved.” ONDCP chose Sandia because of its expertise in designing physical security systems. In January 1993, Sandia issued its report entitled Systematic Analysis of the Southwest Border. We refer to this as the Sandia study throughout our report. According to the study, to conduct its analysis, Sandia personnel visited all nine Border Patrol southwest border sectors, toured various Border Patrol facilities, and interviewed both chief patrol agents and Border Patrol agents. They viewed much of the southwest border from either the ground or the air and reviewed a number of previous studies related to border control. In addressing our objectives to (1) determine the extent of the threat from drug smuggling and illegal immigration and (2) identify ways to enhance security between the ports of entry, we interviewed intelligence officials responsible for determining the threat from drug smuggling and illegal immigration and reviewed related documentation; reviewed the Sandia study and discussed the study’s findings with its authors and various INS officials responsible for border control; reviewed EPIC, Department of State, and Operation Alliance reports to determine the threat from drug smuggling; visited the San Diego and El Paso Border Patrol sectors and discussed with sector officials their recent border control initiatives; analyzed INS data from its management information systems related to apprehensions and narcotics seizures to obtain additional information on the threat from drug smuggling and illegal immigration along the southwest border; and interviewed INS headquarters officials to determine plans for improving border security. 
As agreed with the Subcommittee, our focus was control of the land border between the ports of entry. We did not evaluate border control activities at the ports of entry or efforts related to smuggling by air and sea. We did not verify the accuracy and completeness of the data we obtained from INS’ management information systems. We did our work between October 1993 and September 1994 in accordance with generally accepted government auditing standards. We discussed the results of our work with the Acting Chief of the Border Patrol and other INS officials. Their comments are presented on page 27. Although the full extent is unknown, drug smuggling is a serious threat along the southwest border. The Department of State’s 1993 International Narcotics Control Strategy Report indicated that Mexico is a transit country for South American cocaine destined for the United States and a major country of origin for heroin and marijuana. According to the report, between 50 and 70 percent of the cocaine smuggled into the United States transited Mexico, entering primarily by land across the southwest border. In addition, about 23 percent of the heroin smuggled into the United States originated in Mexico. INS data showed that Border Patrol narcotics seizures along the southwest border have risen over the last few years. Between fiscal years 1990 and 1993, the number of Border Patrol narcotics seizures rose from around 4,200 to around 6,400, an increase of about 50 percent. The amount of cocaine seized nearly doubled from about 14,000 pounds in 1990 to about 27,000 pounds in 1993. According to a June 1992 Operation Alliance report, the primary smuggling route across the southwest border was by land. The report pointed out that although cocaine was the primary drug threat, followed by marijuana, the heroin threat was growing. 
The report stated that in spite of law enforcement agencies’ efforts to counter drug smuggling, the flow of drugs between the ports of entry along the southwest border continued due to vast open areas and a relatively low law enforcement presence. The report concluded that “our successes are insignificant when compared to the threat. Our collective efforts are currently only a minor irritant to the smugglers.” The Sandia study deemed drug smuggling a serious threat all along the southwest border. For example, the study deemed drug smuggling a serious threat in south Texas and the southern Arizona border area, which is dubbed “Cocaine Alley.” Figure 1 shows the seizure of over 1,000 pounds of cocaine by Border Patrol agents in San Diego. Figure 2 shows a panel truck stopped by El Paso Border Patrol agents (see fig. 2A), with narcotics hidden in its interior panels (see fig. 2B). Agents seized nearly 250 pounds of marijuana (see fig. 2C). Illegal immigration is also a serious threat to the United States. In 1993, we estimated that the total inflow of illegal aliens into this country in 1988 ranged from 1.3 million to 3.9 million. The major component of the inflow, 1.2 million to 3.2 million, was Mexicans crossing the southwest border, with most entering between the ports of entry. Much of the inflow represented short-term visits to the United States. In June 1994, INS estimated there were about 3.8 million undocumented migrants residing in the United States. About half of the unlawful residents entered unlawfully across the borders, while the other half entered as visitors but did not leave. The estimates were based on an analysis of INS and Bureau of the Census data and, according to INS, experts have embraced these estimates as the best available. 
The 1993 Sandia study characterized the southwest border as “being overrun.” For example, in the San Diego sector, the study noted that as many as 6,000 aliens attempted to enter the United States illegally every night along the first 7-1/2 miles of border beginning at the Pacific Ocean. One of the reasons given in the study for this situation was that most of the border fencing in the San Diego sector and other urban areas was “poorly maintained” and “totally ineffective” (see fig. 3). However, as discussed on page 15, INS recently completed a new fence in the San Diego sector and plans additional fencing in other sectors. Border Patrol apprehensions along the southwest border declined between 1986 and 1989 but, although still below the 1986 level, apprehensions have gradually risen since then (see fig. 4). Figure 5 illustrates the prominence of the San Diego and El Paso sectors as border-crossing locations. In fiscal year 1993, these two sectors accounted for two-thirds of the 1.2 million southwest border apprehensions. Although the southwest border is approximately 1,600 miles long, much of it is difficult to cross by foot or vehicle due to rugged terrain, desert, or natural barriers such as some portions of the Rio Grande River. Our analysis of INS data showed that in fiscal year 1992 over half of all southwest border apprehensions occurred along only 18 of the 1,600 border miles—13 miles along the border between San Diego and Tijuana, Mexico, and 5 miles along the border between El Paso and Ciudad Juarez, Mexico. However, as we discussed on pages 23 to 25, recent border control initiatives in San Diego and El Paso appear to have rerouted some illegal immigrants to other southwest border areas. Unless border control efforts become more effective, illegal immigration is expected to increase. 
In September 1993, we reported that the flow of illegal aliens across the southwest border is expected to increase during the next decade because Mexico’s economy is unlikely to absorb all of the new job seekers that are expected to enter the labor force. The Border Patrol’s traditional tactic of discouraging illegal entry has been to apprehend aliens once they have entered the United States. According to the Sandia study’s authors, this tactic was inefficient and diminished the Border Patrol’s ability to control the border. In addition, the authors said the only good border control strategy is one that prevents people from crossing the border. The study concluded that the way to prevent illegal entry is to impose “effective barriers on the free flow of traffic.” The study noted that where it is not possible or practical to keep drugs and illegal aliens from entering the United States, they should be stopped at the earliest opportunity. In addition, the Sandia study concluded that “control of the illegal alien and drug traffic can be gained” and recommended that the Border Patrol change its tactics from apprehending illegal aliens after they have entered the United States to preventing illegal entry into the United States. A goal of a “prevention” strategy would be to significantly increase the difficulty of crossing the border illegally. The Sandia study concluded that single barriers, which had been used thus far, had not proven effective in preventing either drugs or illegal aliens from entering the country. Consequently, the study recommended (1) multiple lighted barriers in urban border areas to prevent the entry of large volumes of drugs and illegal aliens, with patrol roads between the barriers and (2) enhanced checkpoint operations to prevent those drugs and illegal aliens that succeeded in crossing the border from leaving the border area. (See fig. 6 for an artist’s illustration of the Sandia study’s proposed three-fence barrier system.) 
According to the Sandia study, multiple barriers in urban areas would provide the Border Patrol a greater ability to (1) discourage a significant number of illegal border crossers, (2) detect intruders early and delay them as long as possible, and (3) channel a significantly reduced level of traffic to places where border patrol agents can adequately deal with it. The Sandia study recommended multiple barriers along approximately 90 miles, or less than 6 percent of the southwest border. Because of rugged terrain, segments of the southwest border cannot be controlled at the immediate border. The alternative the Sandia study recommended for these areas is to use highway checkpoints to contain those aliens who cross the border illegally. The study recommended more checkpoints be established and that all operate full time. The Border Patrol’s use of part-time checkpoints allows violators to cross unobserved after the checkpoint is closed. Except for the proposed multiple-fence system, many of the Sandia study’s recommendations were not new and, according to Border Patrol officials, had been made previously by their own personnel. For example, a January 1989 study recommended many of the same measures such as barriers, checkpoints, and enhanced electronic surveillance equipment. The study was conducted by a retired head of the Border Patrol for the Federation for American Immigration Reform. The Sandia study estimated it would initially cost an additional $260 million to implement its recommendations with annual recurring costs of about $69 million. Most of the initial costs are associated with physical barriers and checkpoints. Ultimately, implementing the Sandia study’s recommendations may require only a slightly larger Border Patrol force. According to the study, as physical barriers and checkpoints were completed, the number of Border Patrol agents required would increase. 
However, the study noted that as control was gained at the border, the number of agents could be allowed to decrease to a number not significantly larger than the 3,640 agents that were deployed along the southwest border when the study began in December 1991. The Border Patrol officials we spoke with (including the acting chief, acting deputy chief, San Diego and El Paso chief patrol agents, and a regional Border Patrol official) all agreed with the Sandia study’s conclusion that the Border Patrol should focus on preventing illegal entry rather than on apprehending illegal aliens. In addition, officials of EPIC, Operation Alliance, JTF-6, and the mayor and police officials of El Paso support the concept of trying to prevent entry rather than apprehending aliens. This strategy is also in line with our past positions on controlling illegal immigration. In June 1993, we testified before the House Subcommittee on International Law, Immigration and Refugees, Committee on the Judiciary, that “the key to controlling the illegal entry of aliens is to prevent their initial arrival.” Major Border Patrol initiatives in the San Diego and El Paso sectors are consistent with the Sandia study’s findings. Both sectors have begun initiatives that focus on preventing illegal entry rather than on apprehending aliens. In 1990, the San Diego sector’s chief patrol agent began an initiative to erect physical barriers, primarily to deter drug smuggling. With the assistance of JTF-6, the San Diego sector installed 10-foot welded steel fencing along approximately 14 miles of border where sector officials believed the majority of drugs and illegal aliens crossed within the sector. The new fence, completed in late 1993, is substantially stronger than previous chain link fencing. JTF-6 is also installing high-intensity lights and a second and third fence at strategic locations along the same 14 miles. As of February 1994, JTF-6 had installed lights along about 4-1/2 of the 13 miles. 
The Sandia study recommended similar measures. For example, the study recommended that the sector erect multiple lighted physical barriers along the same stretch of border where the sector erected its new fence. Before September 1993, like San Diego, the El Paso sector’s strategy emphasized apprehending aliens rather than preventing illegal entry. However, as apprehensions increased so did the opportunities for confrontation between illegal aliens and El Paso Border Patrol agents. These increased opportunities for confrontation led to allegations of abuse against agents. Under the sector’s apprehension strategy, El Paso’s chief patrol agent told us that the border area was in “complete chaos.” The chief estimated there were up to 8,000 to 10,000 illegal border crossings daily, and only 1 out of 8 aliens was apprehended. The apprehension strategy also created several problems in the community. El Paso citizens and others complained about this approach in meetings with the sector’s chief patrol agent. They believed that the Border Patrol did not try to prevent entry but, in fact, used the increased numbers of apprehensions as a primary factor in justifying its budget. Some local residents felt their civil rights were being violated by the Border Patrol. For example, students and teachers at a local high school filed a federal lawsuit to stop harassment after El Paso sector agents confronted a coach they believed was an alien smuggler. Illegal aliens also had a significant impact on the city’s crime rates. El Paso police officials estimated that undocumented aliens committed 75 to 80 percent of all auto thefts, as well as many burglaries. The Mayor of El Paso told us that illegal immigration costs the city about $30 to $50 million per year. In light of these problems, El Paso’s chief patrol agent began an initiative in September 1993 to change the sector’s border control strategy to one of preventing illegal entry. 
The sector stationed all available agents immediately at a 20-mile stretch of the border in highly visible Border Patrol vehicles. The primary goal of the new strategy—Operation Hold-the-Line—was preventing significant numbers of aliens from entering the El Paso metropolitan area. Those who still tried to cross the border illegally were routed to less populated areas where they could be more easily apprehended. The El Paso sector’s goal of preventing illegal entry is similar to the one recommended by the Sandia study, although the tactics are different. Sandia recommended multiple physical barriers to prevent entry; the sector employs agents as a human barrier. However, the sector eventually plans to construct additional lighted fencing, which is generally consistent with the Sandia study recommendations. Preliminary results in San Diego and El Paso suggest that the prevention strategy has reduced illegal entry in these sectors. Other benefits include less border crime, less confrontation between Border Patrol agents and illegal aliens, and strong public support. Although the San Diego sector’s border control initiative has not been fully implemented, indications are that the new tactics are reducing the number of aliens crossing the border illegally in the San Diego area. As shown in figure 7, sector apprehensions were down 20 percent in fiscal year 1994 compared to 1992 and dropped below 1990 levels, the year the sector began implementing its new border control tactics. Apprehensions decreased even though the sector increased the amount of time spent on border enforcement nearly 41 percent between 1990 and 1994. Also, apprehensions at highway checkpoints away from the border declined 24 percent between fiscal years 1990 and 1993 even though the amount of time spent performing traffic checks increased 22 percent. 
During our review, we toured the most heavily trafficked portion of the San Diego sector border and found visible evidence of the new tactics’ effect on illegal border crossing. As figure 8A shows, before the new border control tactics, hundreds of aliens would line up along the U.S. side of the border during daylight hours, waiting for an opportunity to go northward. However, as illustrated in figure 8B, after the new border patrol tactics were initiated, large groups of aliens no longer waited to cross during the day, which according to a Border Patrol official is typical. Also, as shown in figure 8C, formerly there were large gaps in border fencing allowing aliens to easily cross the border. However, figure 8D shows that these gaps in the fencing have now been closed. In addition, according to San Diego sector officials, violent crime and confrontations between Border Patrol agents and illegal aliens have been reduced because the fencing has prevented large groups of aliens from gathering. For example, murders in the border areas adjacent to the fencing dropped from nine in 1990 to none between 1991 and June 1994. According to the sector’s chief patrol agent, as of February 1994, there had not been any incidents during the last 2 years where San Diego Border Patrol agents had used deadly force against illegal aliens. Also, reported incidents of assaults, rapes, and robberies in this area have declined. El Paso sector officials cited several indications that the sector’s new prevention strategy is working. For example, according to the Border Patrol, the number of aliens attempting to illegally cross the border through the El Paso sector has decreased significantly. According to the chief patrol agent, before Operation Hold-the-Line, there were up to 10,000 illegal border crossings daily. In February 1994, the sector estimated that only about 500 people a day were illegally crossing the border. 
A March 1994 sector intelligence report indicated the new strategy had deterred many aliens in Mexico’s interior from coming to the El Paso border area. There has been a sharp drop in El Paso sector apprehensions since implementation of its new strategy. As figure 9 shows, the El Paso sector’s illegal alien apprehensions in fiscal year 1994 were down 72 percent compared to fiscal year 1993. Two factors influencing this decrease are the deterrent effect of the new border control strategy and, as discussed on pages 23 to 25, the rerouting of some illegal aliens to other southwest border areas. According to sector officials, many illegal border crossers try to leave El Paso via the airport. With the implementation of the prevention strategy in the El Paso sector, the number of apprehensions made at El Paso’s International Airport was significantly reduced, indicating that fewer aliens are crossing the border illegally in El Paso. According to INS data, in fiscal year 1993, the sector averaged about 3,700 apprehensions a month at the airport. As of June 1994, the sector was averaging about 700 apprehensions a month, an 81-percent decrease. The El Paso public strongly supports the sector’s new strategy. A poll taken in February 1994 showed 84 percent in favor of the sector’s strategy. Complaints against the Border Patrol from both local residents and illegal aliens have decreased since the start of Operation Hold-the-Line. According to sector officials, only one allegation of abuse was made in the first 5 months of the operation. Although they did not have any specific data, local police officials said complaints to the police department of harassment by Border Patrol officers are “way down.” Police officials also attribute a drop in certain crimes to Operation Hold-the-Line. For example, there were nearly one-third fewer burglaries and one-fourth fewer motor vehicle thefts in the 3 months after the operation began in September 1993 than in the same 3 months in 1992. 
Two studies also concluded that Operation Hold-the-Line has been successful in deterring illegal immigration in El Paso. A December 1993 study of Operation Hold-the-Line by the Center for Immigration Studies concluded that the operation “has proven to be successful” and the new preventative deployment was “both more humane and more effective.” According to this study, the operation represented a viable long-term approach to more successful border control. A July 1994 study requested by the U.S. Commission on Immigration Reform found that the operation significantly reduced illegal crossings and had resulted in less crime and fewer allegations against Border Patrol agents in El Paso. In addition, the study found that the strategy has broad public support. However, the study also found that the redeployment of agents and longer work shifts have eroded morale among agents, and the strategy is labor-intensive. Any expansion without additional agents would stretch present resources. Although successful in significantly reducing illegal entry into El Paso, the new strategy, according to sector officials, weakened some sector operations. For example, the El Paso sector took important resources from checkpoint operations, resulting in some checkpoints being closed over 50 percent of the time. The Sandia study, however, recommended that El Paso increase the number of checkpoints and operate all checkpoints 24 hours a day. The San Diego and El Paso sectors’ initiatives appear to have rerouted drugs and illegal aliens to other parts of the southwest border. For example, the July 1994 study of Operation Hold-the-Line found that the operation had less of an effect on those illegal aliens headed for the interior of the United States. These aliens apparently adapted to the prevention strategy by finding new routes into the United States. 
In addition, interviews with apprehended illegal aliens have revealed that smugglers are now telling those traveling from the interior of Mexico that it is easier to cross into Nogales, AZ, rather than into San Diego or El Paso, according to Tucson’s Deputy Chief Patrol Agent. The deputy also reported that some smugglers are moving their operations from San Diego to Nogales. A comparison of Tucson and El Paso sector apprehensions appears to support the premise that the recent San Diego and El Paso initiatives have increased illegal entry through other southwest border sectors. As figure 10 shows, since the start of the initiative in the El Paso sector, Tucson sector apprehensions have increased about 50 percent (about 93,000 in fiscal year 1993 compared to 139,000 in fiscal year 1994). El Paso apprehensions, on the other hand, dropped 72 percent (about 286,000 to about 80,000 over the same period). Another indication that illegal alien entry may be moving to other sectors is that while the San Diego sector’s fiscal year 1993 apprehensions were 6 percent lower than fiscal year 1992, apprehensions in the remaining southwest border sectors increased about 17 percent (see fig. 11). Drug trafficking has also apparently been affected. According to EPIC’s December 1993 Monthly Threat Brief, El Paso’s Operation Hold-the-Line has led to changes in smuggling methods. Instead of fording the Rio Grande River, some smugglers have attempted to move drugs through ports of entry and to areas east and west of El Paso, around the sector’s 20-mile line of agents. According to a San Diego sector official, the new fence has virtually eliminated the number of drug and alien smugglers driving across the border in the San Diego area. However, the sector has noticed an increase in drug smuggling in the mountainous areas east of San Diego. 
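The apprehension shifts described above can be checked with simple percent-change arithmetic. A minimal sketch using the totals cited in this report (the helper function is our own, not part of any INS system):

```python
# Sketch: reproducing the percent changes cited for sector apprehensions.
# The figures come from the report; the function is illustrative only.

def percent_change(before, after):
    """Return the percent change from `before` to `after` (negative = decrease)."""
    return (after - before) / before * 100

# Tucson sector: about 93,000 (FY 1993) to 139,000 (FY 1994)
tucson = percent_change(93_000, 139_000)   # roughly +49 percent ("about 50 percent")

# El Paso sector: about 286,000 to about 80,000 over the same period
el_paso = percent_change(286_000, 80_000)  # roughly -72 percent

# El Paso airport apprehensions: about 3,700 to about 700 per month
airport = percent_change(3_700, 700)       # roughly -81 percent

print(round(tucson), round(el_paso), round(airport))  # 49 -72 -81
```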
In addition, the amount of cocaine seized in the El Centro sector, the sector adjacent to San Diego, increased dramatically from 698 pounds in fiscal year 1991 to nearly 18,000 pounds in fiscal year 1993. In August 1994, the INS Commissioner approved a national Border Patrol strategic plan for gaining control of the nation’s borders. The strategy focuses on preventing illegal entry and builds on the success INS has reportedly had in San Diego and El Paso. INS plans to put more agents along the border and use more lighting, fencing, and other barriers. On the basis of the national border control strategy, each southwest border sector developed its own strategy identifying specific actions that need to be taken. INS plans to use a phased approach to implementing its border control strategy. In its first phase, INS plans to focus its resources in the two sectors where most illegal immigration has traditionally occurred—San Diego and El Paso. As border control is improved in San Diego and El Paso, INS anticipates that other areas will experience an increase in illegal entry. Therefore, the second phase targets the Tucson sector and the south Texas area. The third phase targets the rest of the southwest border, and the fourth phase targets the rest of the U.S. border. INS has identified certain indicators that it plans to use in each of these phases to determine whether its efforts are successful. The proposed indicators include (1) an eventual reduction in apprehensions and recidivism, (2) an increase in attempted fraudulent admissions at ports of entry, (3) a shift in the flow to other sectors, and (4) fewer illegal immigrants in the interior of the United States. To achieve border control, the strategy recognizes the need to coordinate with other INS programs; with other federal agencies, such as the Department of Defense, the Customs Service, and the Drug Enforcement Administration; and with state and local law enforcement agencies. 
INS officials told us that it would take several years to implement the strategy and that INS did not have specific time frames or cost figures for these improvements. INS officials believe that technology improvements, such as improved fencing and surveillance cameras, would make border control strategies more effective. According to the Acting Chief of the Border Patrol, these improvements would reduce the need for significant numbers of additional agents. INS plans to closely monitor the strategy’s progress to determine the appropriate mix of personnel and other types of resources needed to gain control of the U.S. border. We believe the new national border control strategy shows promise for reducing illegal entry since the strategy (1) builds on the reported success the San Diego and El Paso sectors have had in reducing illegal immigration, (2) is consistent with recommendations made in previous comprehensive studies conducted by border control and physical security experts, and (3) has widespread public and government support. However, since it will take several years to implement the strategy, it is too early to tell what impact it will eventually have on drug smuggling and illegal immigration along the southwest border. On October 25, 1994, we met with the Acting Chief of the Border Patrol and other INS officials to discuss the results of our work. These officials generally agreed with the information and conclusions presented in this report. They emphasized the importance of sustained financial support to fully implement the national border control strategy. We plan no further distribution of this report until 30 days from its issue date, unless you publicly release its contents earlier. After 30 days, we will send copies of this report to the Attorney General, the Commissioner of the Immigration and Naturalization Service, the Director of the Office of National Drug Control Policy, and other interested parties. 
We will also make copies available to others upon request. Appendix I lists the major contributors to this report. If you need additional information on the contents of this report, please contact me on (202) 512-8757.

Michael P. Dino, Evaluator-in-Charge
James R. Russell, Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Orders may also be placed in person at:
Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed U.S. efforts to secure the southwest border, focusing on: (1) the extent to which border security is threatened by drug smuggling and illegal immigration; and (2) ways the United States can enhance security between ports of entry. GAO found that: (1) although the full extent of drug smuggling and illegal immigration is unknown, both pose serious security threats along the U.S. southwest border; (2) despite U.S. law enforcement efforts, the flow of cocaine and illegal immigrants continues and is expected to increase; (3) a 1993 study on ways to enhance security along the southwest border between ports of entry recommended that the Border Patrol emphasize entry prevention instead of apprehension, construct physical barriers, and set up additional highway checkpoints to prevent entry; (4) although there is increased interest in a national entry prevention strategy, many officials believe that drug smuggling and illegal immigration activities have merely been rerouted to other southwest border areas where enforcement is less effective; (5) the Immigration and Naturalization Service (INS) plans to implement a national strategy that focuses on preventing illegal entry; and (6) it is too early to assess what impact the new INS strategy will have on drug smuggling and illegal immigration along the southwest border.
Many factors affect why some students graduate from college and our review would not be complete without first considering the extent to which students with different characteristics advance to higher levels of education. Many students will complete their education without ever having enrolled in college. Figure 1 shows some of the differences in educational participation and attainment for a group of students who were followed over a 12-year period starting in the eighth grade. We reported in February 2002 that low-income, black, and Hispanic students complete high school at lower rates than other students. Students from these groups who graduate from high school also enroll in college at lower rates than their peers, even though the overall rate at which students enter college directly from high school has been increasing. According to research, factors such as family income and parents’ educational attainment influence students’ expectations about college. Low-income students and students from families in which neither parent has earned a bachelor’s degree were less likely to expect to finish college and ultimately enrolled at lower rates than other students. Academic preparation was also cited as a factor affecting postsecondary enrollment. Low-income, black, and Hispanic high school graduates were less likely to be well prepared academically to attend a 4-year college. Even among those who were qualified for college, however, low-income and Hispanic students were less likely to take college entrance examinations and apply for admission, two necessary steps for enrolling in a 4-year institution. There are a variety of postsecondary options for students after high school. Over 15 million students were enrolled in some type of higher education in the fall of 2000. Most students were enrolled in degree-granting 2-year or 4-year institutions. 
After considering their academic qualifications, students can choose to apply to institutions with varying levels of selectivity. Community colleges, for example, provide postsecondary opportunities for students who might not have the qualifications to start at most 4-year institutions. Additionally, students may wish to choose an institution based on its mission. For example, Minority Serving Institutions are recognized by statute, in part, for their mission to educate minority students. The institutions students attend have differing graduation rates. Institutional graduation rates may vary based upon such factors as the mission, selectivity, and type of institution. For example, institutions that focus on providing postsecondary opportunities to disadvantaged students—addressing Education’s goal of increasing participation in higher education—may have lower graduation rates than institutions that do not serve many disadvantaged students. To ensure that students and their parents have some information about how colleges are performing with respect to graduating their students, Congress passed the Campus Security and Student Right-to-Know Act. This act, as amended, requires that institutions participating in any student financial assistance program under Title IV of the Higher Education Act of 1965 disclose to current and prospective students information about the graduation rates of first-time, full-time undergraduate students. The law requires that institutions report the percentage of students who graduate or complete within 150 percent of the normal program completion time. This would mean that 4-year institutions would track groups of entering students over a 6-year period, and 2-year institutions would track groups of entering students over a 3-year period. 
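The "150 percent of normal program completion time" rule described above reduces to a simple calculation. A minimal sketch of that reporting window (the function name is our own, not drawn from any Education data system):

```python
# Sketch of the Student Right-to-Know reporting rule: institutions report
# the share of first-time, full-time students who graduate or complete
# within 150 percent of normal program length. Illustrative only.

def tracking_window_years(normal_program_years):
    """Years a cohort is tracked: 150 percent of normal completion time."""
    return normal_program_years * 1.5

print(tracking_window_years(4))  # 4-year institutions track cohorts for 6.0 years
print(tracking_window_years(2))  # 2-year institutions track cohorts for 3.0 years
```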
While information collected as part of this act is the principal federal measure available to hold institutions accountable for their performance in graduating their students, there are currently no federal sanctions or incentives associated with college graduation rates. As part of discussions with the higher education community, Education has held panel discussions with student-aid experts, state officials, and business leaders, among other participants, about improving accountability. Four-year institutions calculate their graduation rate by determining the proportion of first-time, full-time students who enroll in a given year and have graduated from the same institution within a 6-year period. Students who have not graduated from the institution where they first enrolled by the end of the 6-year period are classified as not having finished a degree, even if they transferred and completed a degree at another institution. Data from Education’s 1995-96 Beginning Postsecondary Students (BPS) study—a longitudinal study that followed the retention and degree completion of students from the time they enrolled in any postsecondary institution over a 6-year period—illustrate how graduation rates are understated due to this treatment of transfer students. Figure 2 shows the completion status of the nearly 1.4 million students who started their postsecondary education at a 4-year institution in 1995-96 (no transfers into 4-year institutions from 2-year institutions or certificate programs were included). Over one-quarter of the students who started at a 4-year institution transferred from their first institution to another institution. If only those who completed a bachelor’s degree at the first institution of attendance are considered, the graduation rate is 51 percent. However, an additional 8 percent transferred to another institution and completed a bachelor’s degree within the 6-year period. 
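The understatement caused by the treatment of transfer students can be seen by comparing the two rates just cited. A minimal sketch using the BPS figures above (variable names are our own; the counts are approximate):

```python
# Sketch: official-style graduation rate vs. a transfer-adjusted rate,
# using the 1995-96 BPS figures cited in the report. Illustrative only.

starters = 1_400_000             # students starting at a 4-year institution, 1995-96

completed_same_school = 0.51     # graduated within 6 years at the first institution
completed_after_transfer = 0.08  # transferred and still finished within 6 years

# An official-style rate counts only completions at the first institution
official_rate = completed_same_school

# A transfer-adjusted rate would credit completions anywhere
adjusted_rate = completed_same_school + completed_after_transfer

print(f"{official_rate:.0%} vs {adjusted_rate:.0%}")  # 51% vs 59%
```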
Over half of students who enrolled in a 4-year college or university completed a bachelor’s degree within 6 years of beginning postsecondary education, according to our analysis of BPS data. However, background characteristics such as being black or a first-generation college student were associated with lower rates of completion. In contrast, students were more likely to complete a bachelor’s degree within 6 years if, among other things, they had a more rigorous curriculum in high school, attended college full-time, were continuously enrolled, worked less than 20 hours per week, or did not transfer. After controlling for other factors, we found that disadvantaged students were no less likely to complete a bachelor’s degree than other students. Notwithstanding this fact, as we have noted, students from disadvantaged backgrounds are less likely to attend college in the first place. For various reasons, not all students who enroll in college will ultimately attain a degree. Based on Education’s 1995-96 BPS study, 52 percent of the estimated 1.8 million students who enrolled in a 4-year institution at some point during the subsequent 6-year period (including approximately 450,000 students who transferred from a less than 4-year institution) completed their bachelor’s degree. Of the 48 percent of students who had not attained a bachelor’s degree, nearly 14 percent were still enrolled in a 4-year institution at the end of the 6-year period, as shown in figure 3. See appendix II for completion rates by characteristics and appendix III for descriptions of the variables used in our analysis and a discussion of their levels of significance. Of the background characteristics we analyzed, being black or a first-generation college student was associated with lower completion rates. Students with either of these characteristics were about a third less likely to complete college than students without these characteristics. 
The completion rate for black students was 38 percent compared with 55 percent for both white and Asian students. As for students who had at least one parent with a bachelor’s degree, their rate of completion was 59 percent compared with 43 percent for students who were first-generation college students. Being a first-generation student affected completion regardless of race. For example, first-generation white students were no more likely to complete college than first-generation black students. Students who had a more rigorous high school curriculum and achieved better grades in high school and during the first year of college were more likely to complete college. About 80 percent of students who had the most rigorous high school curriculum completed college compared with 47 percent who had the least rigorous curriculum. Additionally, the higher the grades a student earned both in high school and in the first year of college, the higher the likelihood of completion. Regarding first-year college grade point average, about 71 percent of students who earned higher than a 3.0 had completed college compared with 51 percent who earned between a 2.0 and 3.0. Students were more than twice as likely to complete college for every one-point increase in first-year college grade point average. Decisions students make regarding attendance, participation in collegiate clubs, and work had varying effects on completion. Students who were continuously enrolled during their studies were more than 6 times as likely to graduate as students who experienced one or more breaks from enrollment. Additionally, students who attended college full-time were more than twice as likely to graduate as students who attended part-time or some combination of part-time and full-time, all other factors equal. Students who reported participating in collegiate clubs were one and one-half times as likely to graduate as students who did not participate. 
Less than half of students reported such participation. Students who worked 20 or more hours per week were less likely to complete a bachelor’s degree than students who did not work. However, working less than 20 hours per week was not associated with lower completion rates. Figure 4 illustrates bachelor’s degree completion rates by the number of hours worked per week. Transferring between institutions was also associated with a lower likelihood of completion in that students who transferred were a little less than half as likely to complete as students who did not. About 69 percent of students who started at a 4-year institution and did not transfer attained a bachelor’s degree compared with 47 percent of students who started at a 4-year institution and transferred to another 4-year institution. The rate of completion for students who started at a 2-year institution and transferred to a 4-year institution was roughly half of those who started at a 4-year institution and did not transfer. Figure 5 illustrates the bachelor’s degree completion rate after 6 years according to type of institution first attended and transfer status. After controlling for other factors, we found that disadvantaged students were no less likely to complete a bachelor’s degree than other students. However, as we have noted, students from disadvantaged backgrounds are less likely to attend college in the first place. While states and 4-year colleges and universities are employing various methods to foster bachelor’s degree completion, information on the effectiveness of these efforts is limited. Over two-thirds of the states responding to our survey reported having at least one effort in place to foster bachelor’s degree completion. Half the states indicated additional actions they would like to take to foster bachelor’s degree completion, but cited state budget constraints as a factor preventing them from moving forward. 
As a way to foster bachelor’s degree completion, 4-year colleges and universities we visited were engaged in activities designed to improve the learning experience for students and strengthen support of students. In some cases, officials attributed increases in retention to their efforts to foster completion. Thirty-four of the 48 states responding to our survey, including the 5 states we visited—Florida, Maryland, Oregon, Texas, and Virginia—reported having at least one effort in place to foster bachelor’s degree completion. Most of these states reported efforts that fell into three broad categories: (1) efforts to increase the overall number of college graduates by increasing the number of students entering postsecondary education; (2) efforts to help colleges improve their performance in retaining and graduating students; and (3) efforts to help individual students remain in college and to encourage timely completion for these students. While states reported that almost half of their approaches have been evaluated, the instances where states provided specific evaluation results were limited. Half of the states indicated that there were additional actions they would like to take to foster bachelor’s degree completion, but cited state budget constraints as a factor preventing them from moving forward. Nineteen states have efforts to increase the number of bachelor’s degrees awarded by increasing the number of students enrolling in postsecondary education. This approach includes efforts such as increasing the number of students ready for college, educating students and parents about college requirements and costs, and providing financial assistance to help cover college costs. Increasing student readiness for college. Some states have efforts to improve the academic readiness of students so that more students have the opportunity to attend college. 
Kentucky has a P-16 partnership that focuses on aligning standards between high school and college to ensure students are academically prepared for college. Kentucky reported in our survey that the state had aligned high school graduation standards with college admissions standards by creating a single high school curriculum for all students. The state has adopted an online diagnostic test designed for sophomores and juniors to test their readiness for college mathematics in time to improve these skills and avoid remedial placement in college. Oregon has implemented proficiency-based admissions standards that specify certain knowledge and skills students should demonstrate for admission to its public universities. The standards are intended to provide more accurate information about student readiness for college and encourage students to choose challenging coursework that will prepare them for college. Oklahoma uses assessments in the eighth and tenth grades to provide students feedback on their progress in preparing for college. In addition to student feedback, colleges use assessment results to improve curricula and instruction. The state reported that since this effort began 10 years ago there have been increases in the number of high school students taking college preparatory courses, particularly among black students. Educating students and parents about college. To increase the numbers of students enrolling in postsecondary education and ultimately completing a bachelor’s degree, some states are focusing on raising awareness among students and parents about the benefits and costs of postsecondary education. Texas, for example, has a plan that centers on counseling students and their parents about what is necessary to enroll in postsecondary education. 
The state provides information on the benefits of postsecondary education, the academic preparation necessary for enrolling, and the costs of attending, including information about available financial aid and how to qualify. These efforts are designed to support its goal of increasing its enrollment from just under 1 million students in 2000 by adding 500,000 new college students by 2015. Providing financial aid for college. Financial assistance is another way states seek to increase the number of students enrolling in college. Several states have programs that provide monetary assistance to academically qualified students based on academic merit, financial need, or some combination of the two. For example, Oklahoma provides free tuition at public institutions for students whose families have incomes below $50,000 and meet other requirements, including completing a prescribed high school course of study with at least a 2.5 grade point average. Oklahoma reported that the performance of students in this program has exceeded that of the general student population. Another example is the West Virginia Higher Education Grant Program, which provides assistance to academically qualified, but needy students who attend college in West Virginia or Pennsylvania. West Virginia’s evaluation of the program revealed that grant recipients had higher graduation rates than students receiving other types of financial aid and students who received no financial aid. Many states reported efforts to improve the performance of colleges in the areas of retaining and graduating their students. Such efforts include promoting accountability for colleges by collecting and, in some instances, publishing retention and graduation rates. States also promote accountability by tying funding—mainly for public colleges—to performance. States are also sharing information with colleges about retention strategies to foster increased rates of bachelor’s degree completion. 
Promoting accountability for colleges. In order to hold colleges and universities accountable for their performance in the areas of student retention and graduation, states must first collect consistent information from these institutions. Three-fourths of the states that responded to our survey reported that they collect data that allow them to calculate and track retention and graduation rates for individual institutions and across the state. Specifically, 24 of these states reported that they collect enrollment and graduation data on individual students from public institutions only, and 9 states reported collecting these data from both public and private institutions in their states. Having these data allows the state to calculate retention and graduation rates for each institution and the system as a whole. Additionally, because the institutions provide the state with individual student records, the state can track the educational progress of a student who attends more than one institution. This enables the states to include transfer students in their graduation rate. The data are limited to student transfers within the state. Eighteen states reported that they promote accountability by publishing the performance of their colleges and universities on measures, including retention and graduation rates because some officials believe that this motivates colleges to improve their performance in those areas. In Virginia, a state that uses multiple accountability measures, officials told us that institutions are not compared with other institutions in the state with respect to the various performance measures. Rather, each institution works with the state to identify a national peer group of institutions with similar characteristics with which to be compared. In this way, institutions can see whether their performance is on par with institutions that have similar missions and serve similar types of students. 
In addition to measuring retention and graduation rates, Virginia requires its public institutions to measure and report on certain student learning outcomes to demonstrate the value of each institution to its students. Nine states reported accountability efforts that have financial implications for colleges and universities to encourage them to graduate their students in a timely manner. These efforts include linking a portion of state funding to an institution’s performance on multiple measures or making incentive payments to institutions based on their performance in the areas of retention and completion. Tennessee has a performance-funding program in which institutions earn about 5 percent of their state funding for performance on multiple indicators, such as retention and graduation. In another variation, Pennsylvania provides a financial bonus to any 4-year institution in the state, whether public or private, that graduates more than 40 percent of in-state students within 4 years. Sharing retention strategies. Five states reported efforts to improve institutional performance by sharing information among state and college officials about strategies to help students remain in college. For example, the Oregon University System formed a retention work group to provide a forum for developing and sharing campus initiatives to enhance retention. The group has used annual systemwide and institutional data on retention and graduation to identify areas that need to be addressed to increase retention. The group looks at retention efforts that seem to be working on specific campuses and shares information with other campuses. As a result of its work with tribal governments to increase retention of Native American students, the system developed a Native American resource guide that includes information about topics such as outreach and retention efforts of colleges, financial assistance, childcare programs, and community college transfer procedures. 
Officials in Oregon attribute the increases in graduation rates at most campuses in the system to the work of this group. Twenty-two states reported efforts directly aimed at helping students remain in college and encouraging timely completion for these students. Many such state-level programs provided funding to support efforts carried out by individual colleges, such as programs that provide academic and social support directly to students. Other efforts seek to ease student transfers among colleges, utilize technology to help students complete their degree, or include financial incentives to encourage students to complete their bachelor’s degrees in a timely manner. Funding college programs that provide support services for students. Several states provide funding for college-run programs designed to support students in need of assistance. For example, through its Access and Success program, the Maryland Higher Education Commission provides funds to colleges and universities for the operation of programs to increase retention and graduation rates of their undergraduates. The colleges have used these funds to, among other things, operate summer bridge programs that acclimate students to college the summer before they enroll and provide advising, tutoring, and counseling services to students who are already enrolled. New York’s Collegiate Science and Technology Entry Program, aimed at increasing the number of low-income students who pursue careers in math, science, technology, or health-related fields, provides funding for services such as enriched science and math instruction, graduate school test preparation, and career awareness. Facilitating transfer among institutions. Seven states reported efforts to facilitate transfer from one college to another as an approach to foster bachelor’s degree completion. 
Officials in Florida told us that establishing policies that help students transfer from community colleges to 4-year institutions was important because the community college system is considered the point of entry for most college students in the state. Florida has common course numbering for all public institutions in the state and requires public institutions to accept transfer credits for any course they offer that a student completes at another institution. Officials told us this policy prevents students from needlessly duplicating coursework, saving both the state and students money and reducing the time it takes to complete a degree. Florida also has a statewide policy that guarantees admission to the state university system as a junior for any student who completes an Associate of Arts degree. Officials in Florida told us that without these policies it would be difficult for community college students or other transfer students to complete their degrees. They acknowledged, however, that these policies could be at odds with encouraging timely degree completion because they make it easier for students to exit and reenter postsecondary education. Using distance learning. A few states reported using technology to enhance access and make it easier for students to complete a degree. Kentucky, for instance, has a virtual university and library that offers credit courses and academic advising for those who work or have family situations that may not allow them to come to campus. This also aids on-campus students who need greater course availability. Enrollment in these electronic offerings grew from fewer than 300 students in 1999 to nearly 10,000 in 2002. Using financial incentives to encourage students' timely completion. Some states have financial aid programs to encourage timely degree completion. These programs may have time limits and/or may require students to earn a minimum number of credits each year for participation. 
For example, the University of Alaska Scholars Program, targeted at the top 10 percent of high school graduates, offers financial aid for eight semesters provided that the scholar remains in good standing. Other states have programs that impose financial penalties if students repeat coursework or take too long to graduate. Florida’s in-state students must pay the full tuition rate—without state subsidies—for any courses they repeat more than once. Utah requires that students who enroll for credits in excess of 135 percent of what is usually needed for a degree pay higher tuition for the excess credits. Texas passed a law designed to encourage students to minimize the number of courses they take to complete their degree. State residents who complete their coursework and degrees in the state with no more than three attempted hours in excess of the minimum required for graduation are eligible to apply for a $1,000 tuition rebate from their institution. Officials told us that about 1,500 students received tuition rebates in the 2001-2002 academic year. Twenty-four states listed at least one area in which they would like to do more to increase bachelor’s degree completion rates. Many of these desired actions dealt with increasing financial aid for students and increased financial support to colleges to help their students succeed. Some wanted to offer special funding for colleges that perform well in certain areas related to retention and college completion. Others wanted to improve preparation of high school graduates for college or improve transitions from one level of education to another. Almost without exception, the states cited state budget constraints as a significant factor preventing them from moving forward with these actions. 
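The excess-credit rules described above reduce to simple arithmetic on attempted hours. The sketch below is illustrative only: the function names and the 120-hour degree minimum are assumptions, and the actual programs carry many additional eligibility conditions; only the 3-hour and 135 percent thresholds come from the text.

```python
# Hypothetical sketch of the excess-credit incentive rules described above.
# The 120-hour degree minimum and all names are illustrative assumptions;
# real program rules include many more conditions (residency, repeats, etc.).

def texas_rebate_eligible(attempted_hours: int, required_hours: int) -> bool:
    """Texas: no more than 3 attempted hours beyond the degree minimum."""
    return attempted_hours <= required_hours + 3

def utah_surcharge_hours(attempted_hours: int, required_hours: int) -> int:
    """Utah: hours enrolled beyond 135% of the degree minimum pay higher tuition."""
    threshold = int(required_hours * 1.35)
    return max(0, attempted_hours - threshold)

# Example: a 120-hour degree program.
print(texas_rebate_eligible(122, 120))  # True  -> may apply for the $1,000 rebate
print(texas_rebate_eligible(126, 120))  # False
print(utah_surcharge_hours(170, 120))   # 8 hours billed at the higher rate
```

The design choice common to both rules is that the penalty or reward is tied to attempted rather than completed hours, which is what discourages repeated or extraneous coursework.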
Our visits to 11 colleges and universities in five states showed that initiatives in these institutions cluster around two main approaches to foster bachelor's degree completion: (1) enhancing the learning experience by creating smaller learning communities that foster greater connections to the institution and (2) strengthening support of students to promote academic success. In some cases, officials attributed increases in retention rates or higher retention rates for certain groups of students to these approaches. Nearly all of the colleges and universities we visited were engaged in efforts designed to enhance the learning experience for students, primarily by creating smaller communities that foster greater connections to the institution. These approaches aim to increase students' engagement in academics and provide them with a network of faculty and other students who can support them academically and socially. These approaches are employed both in and out of the classroom, and most focus on easing the transition from high school to college for first-year college students. Linking courses. Several of the colleges we visited are trying to enhance the learning environment by giving students a small classroom experience that will provide them greater opportunities to connect with faculty and their peers, not unlike the experience they would have had in high school. For example, Texas A & M University at Corpus Christi, a Hispanic Serving Institution, requires all full-time, first-year students to enroll in learning communities—clusters of three or four classes in which the course content is linked. Students are typically enrolled in a large lecture course with 150 or more students and two other courses with 25 or fewer students from the lecture course. In addition to covering course content, instructors help students learn how to succeed in their first year of college, addressing topics such as study skills on an as-needed basis. 
Portland State University provides its students smaller learning communities in the freshman and sophomore years through its University Studies program. According to officials there, the university developed the program in 1994 to address disappointing retention rates from the freshman to sophomore year. Officials told us that, because few students live on campus, the university has to create opportunities for students to connect to the campus via the classroom. The required freshman and sophomore courses consist of 35-40 students who meet as a whole with faculty and in smaller mentor sessions, led by upper-level or graduate students. Officials told us they think the upper-level students who serve as peer mentors for the freshman classes are particularly helpful for many first-generation college students who attend the university and may find college more difficult to navigate. Officials at both universities reported positive outcomes for these learning programs. Specifically, at Texas A & M students withdrew from the large lecture courses at lower rates and had higher grades in these courses when taken as part of the learning community. Officials also credited the learning communities with retention rates for first-year minority students that are on par with those of other first-year students. At Portland State, officials attributed increases in retention from the freshman to sophomore year, as well as from the sophomore to junior year, to its University Studies program. Using service learning. Connecting classroom learning to the community is another approach colleges are taking to enhance the learning experience and create a sense of belonging. The Regional Ecosystem Applied Learning Corps was established in 1997 through a partnership between Southern Oregon University in Ashland, Oregon, and community and government organizations. This AmeriCorps program engages students in the classroom and through community-based projects dealing with land management issues. 
One student, who went to college directly from high school but left after 2 years, told us that the Regional Ecosystem Applied Learning Corps played a large part in his decision to finish his bachelor's degree because it allowed him to connect his studies to the community while working. He noted that it was difficult to return after a 4-year break because college life felt unfamiliar to him. Providing residential learning opportunities. For those students who live on campus, some colleges are aiming to improve the learning experience by enhancing educational opportunities available to students in the residence halls. Florida State University in Tallahassee, Florida, instituted its first "living-learning community" in a residence hall in the fall of 1997 as a way to provide freshmen with a smaller community that would facilitate connections with faculty and students. An official at the institution told us that the size of the institution is an obstacle in retaining students because it is easy for students at a large research university with over 36,000 students to feel lost. Students live in a residence hall together and have to take at least one class in the building. Required weekly meetings help students navigate services available to them on the campus. Florida State reported that 5 years after the freshman class of 1997 entered the institution, 77 percent of students who participated in the first living-learning community had graduated, while the graduation rates of other on-campus students and those living off campus were around 60 percent. Promoting scholarship. The University of Maryland-Baltimore County established the Meyerhoff Scholars Program to increase the numbers of minorities pursuing doctoral study in math, science, engineering, and computer science. In addition to the academic requirements, the scholars participate in activities designed to expose them to scientific careers, such as field trips and research experiences. 
University officials credit the program with much of the success the university has had with minority students—the 6-year graduation rate is higher for black students than for white students. Officials attribute part of this success to the role Meyerhoff scholars play in motivating other minority students at the institution. All of the colleges and universities we visited were engaged in efforts to strengthen support of their students to ensure their academic success and retention. Colleges support their students by providing services such as academic advising, financial aid counseling, and academic support services such as tutoring. Colleges also provide supports designed to ease the transition from high school or community college to a 4-year institution. In some cases, colleges are changing how they deliver support services to ensure the needs of students are met. For example, colleges may colocate many of their support services to make it easy for students to access them. Colocating support services. During our site visits, we found that several of the institutions we visited are colocating support services to make it easier for students to access those services. In 2000, Prairie View A & M University, a historically black institution in Prairie View, Texas, implemented a comprehensive support system for freshmen. Freshmen are assigned, in groups of 100-125 students, to 1 of 12 academic teams. These teams consist of a professional adviser, residence hall staff, and a faculty fellow. The groups generally live together in residence halls close to all the services they might need, such as advising, academic support services such as tutoring, and financial aid counseling. Advisers work closely with the learning community manager and two community assistants, professional staff who reside in each hall. 
Officials think having advisers and residence hall staff working together provides many opportunities to intervene with students in time to get them connected with the services they need. Consolidating offices. Some of the institutions have also made organizational changes to ensure that most of the offices providing support to students are working together. The University of Central Florida, for example, merged the student affairs office with the enrollment management office and, according to officials, having this one office responsible for recruitment and retention ensures that a wide range of efforts can be coordinated across the cycle of student life. Improving academic advising. Most of the colleges we visited had made changes to improve academic advising services provided to students with the idea that students need consistent and accurate advisement to stay on the path to graduation. To respond to student complaints that advisers in their majors did not know enough about general graduation requirements, Florida State University centrally hired a total of 40 full-time advisers to work in the individual departments. According to one official, when individual departments hired advisers, the amount of time spent advising students declined over time as other responsibilities were assigned to those advisers. Retaining central control of the advisers ensures that advising is consistently available to students and that students receive advisement on both departmental and nondepartmental issues. Portland State University developed a system that allows students to track their progress toward graduation. Advisers can use the system to help students develop a course plan and identify any remaining coursework they need for graduation. Using proactive intervention strategies. Many of the institutions we visited have approaches designed to proactively intervene with students in an effort to retain them to graduation. 
Several of the institutions reported that they have a warning system in place to identify students whose midterm grades or cumulative grade point averages drop below a certain level. These students are contacted, encouraged to meet with an adviser, and made aware of the different services available to help them. Contacting students by telephone is an approach some of the smaller institutions we visited employ to intervene with students. For example, Southern Oregon University, in Ashland, Oregon, proactively calls students who, based on faculty reports, are not attending classes. To improve its 6-year graduation rate, Coppin State College, a historically black institution in Baltimore, Maryland, has been contacting those students who have not pre-registered for the fall semester but are within reach of graduating within 6 years of when they started. Officials believe calling students lets them know that someone at the college is interested in them as an individual and reinforces their commitment to return. Providing academic support services. Most institutions cited academic support services as an approach to retaining students. Examples of these services include tutoring, walk-in centers that provide assistance with areas like writing and math, and programs that support special populations such as low-income and first-generation college students. Over half of the institutions we visited provide these types of services to students before they have enrolled in college to ease the transition from high school to college. In these summer bridge programs, students typically take a couple of courses, along with seminars that cover topics designed to help them succeed in college, such as time management and study skills. Generally, fewer than 100 students participate in these programs, which allows the institution to provide more intensive and personalized services. 
Institutions generally reported that the retention rate from the freshman to sophomore year for these students is comparable to or higher than that of the general population. A couple of institutions reported higher graduation rates for these students, but some officials noted that their 6-year graduation rates may lag because some of these students take longer to graduate. Easing the transition for transfer students. Some institutions are engaged in efforts to encourage and ease the transition of students from a 2-year institution to a 4-year institution. For example, the University of Central Florida has forged relationships with area community colleges and has established satellite campuses at community colleges in Orlando and the surrounding area. The university's satellite campuses are designed for those students for whom transferring to a 4-year college may be difficult because of work and family commitments. The university has dedicated faculty and staff at these satellite campuses to ensure students receive the same education and services they would at the main campus. Advisers who travel among the satellite campuses ensure that students can obtain academic advising without traveling to the main campus. Education fosters bachelor's degree completion through programs that provide financial and academic support to students, but little is known about the effects of these programs on college completion. Education has also established goals for increasing college completion and strengthening the accountability of colleges. While Education has some dissemination efforts—mainly through its academic support programs and through its Fund for the Improvement of Postsecondary Education program—it does not have systematic efforts in place to identify and share promising practices in the areas of retention and graduation with states and colleges that are looking for strategies to help them better retain their students. 
The federal student aid programs provide billions of dollars to help students finance college, with the objective that students will complete their programs. The Federal Family Education Loan Program and the William D. Ford Federal Direct Loan Program, two major federal student loan programs authorized in Title IV of the Higher Education Act, together provided student borrowers with about 9 million new loans totaling $35 billion in fiscal year 2001. The Pell Grant Program, designed to help the neediest undergraduate students, expended $8 billion to provide grants to nearly 4 million students in 2000-2001. To be eligible for these programs, students must be enrolled in a degree- or certificate-granting program. While Education has made these funds available, we reported in September 2002 that little information is available on the relative effectiveness of Title IV grants and loans in promoting postsecondary attendance, choice, and completion, or their impact on college costs. Among other things, we noted that data and methodological challenges make it difficult to isolate the impact of grants and loans. Education administers three academic support programs, aimed at students who are low-income, first-generation, or disabled, that have college completion as a primary goal. Student Support Services provides academic support to students at the college level, while the Upward Bound program and Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) serve students before they enter college. GEAR UP differs from Student Support Services and Upward Bound, which identify and invite individual students to participate. GEAR UP serves an entire grade of students at participating schools beginning no later than the seventh grade and follows them through high school. 
According to program officials, the program begins no later than the seventh grade because high school is too late to begin working with students on the preparation that leads to college. Table 1 provides an overview of the three programs. In 2001, Student Support Services added a financial assistance component as a tool to increase retention and graduation of student participants. Specifically, Student Support Services permits the use of grant aid for current Student Support Services participants who are already receiving federal Pell Grants. These funds are intended to increase retention and graduation by reducing the amount of financial need or money eligible participants have to borrow in their first 2 years of study. Student Support Services is the only program for which information on the effectiveness of the program on college completion is available. Specifically, a preliminary evaluation of the program found that participants had higher bachelor’s degree completion rates as compared to a control group of similar students not receiving those services. However, it is too early to determine the impact of the grant aid component of the program, given that it was first implemented in the 2001-2002 academic year. According to Education officials, it is also too early to determine the impact of Upward Bound and GEAR UP on college completion because students are not expected to have completed college yet. In its 2002-2007 strategic plan, Education has established goals of reducing the gaps in college participation and completion among certain student populations and increasing completion rates overall. Education has identified some strategies for meeting these goals, such as focusing on improving the K-12 system, improving the readiness of low-income and minority students for college, and improving the effectiveness of support services for low-income and minority students. 
The performance measure Education uses to assess its progress toward the goal of increasing completion rates, institutional graduation rates, understates the percentage of students who actually complete bachelor's degrees because the measure does not account for students who transfer and complete their degrees at institutions different from where they started. However, this is the only information available on an annual basis. Other longitudinal studies, such as BPS, provide more information but are costly to administer. Education has not established other performance measures for assessing progress toward its college completion goal. Education has also established a goal for strengthening accountability of postsecondary institutions in its strategic plan. Specifically, Education is looking to ensure that colleges are graduating their students in a timely manner. Education thinks making information on student achievement and attainment available to the public is one way to hold institutions accountable for their performance because prospective students can use this information to make informed choices about where to attend college. Education has begun to discuss this issue with the higher education community and asked the community for ideas on how to strengthen accountability of postsecondary institutions. As part of its efforts, Education has held panel discussions with student financial aid experts, state officials, and business leaders, among other participants, about improving accountability. Additionally, Education is considering "performance-based grants" to provide incentives to colleges for timely graduation. In one state where this was tried, however, there were concerns that the grant created perverse incentives to increase graduation rates, such as reducing the number of credits required for graduation. 
Education has some efforts to disseminate information on retention and completion; however, it does not have a systematic effort in place to identify and disseminate promising practices in these areas. Education has commissioned studies on the factors that affect college completion, and it has some evaluations on student retention—for example, one study dealing with retention strategies for students with disabilities and one on Hispanic students. It has not, however, systematically conducted research to determine what strategies have been effective in helping colleges and universities retain their students. Additionally, Education has some retention and completion dissemination efforts in place. For example, GEAR UP and TRIO grantees have the opportunity to share information with each other at annual conferences organized by private groups. Education facilitates information sharing through the TRIO Dissemination Partnership Program, which provides funding for TRIO grantees with promising practices to work with other institutions and community-based organizations that serve low-income and first-generation college students but do not have TRIO grants. The program is intended to increase the impact of TRIO programs by reaching more low-income, first-generation college students. Only a small number of grantees are disseminating information through this program—in fiscal year 2002, Education provided $3.4 million to 17 grantees. In these instances, only institutions and organizations that formally partner with grantees are likely to have the opportunity to learn about promising practices. Furthermore, promising practices that are employed by institutions outside these programs are not captured. According to agency officials, another effort in which dissemination occurs is within the Fund for the Improvement of Postsecondary Education’s Comprehensive Program. 
This 30-year-old program seeks to help improve access to, and the quality of, postsecondary institutions by funding small promising-practices grants. According to an official of the Comprehensive Program, the grants are for a 3-year period, with an average annual award amount of between $50,000 and $200,000. Last year, the program awarded $31 million for grant activities—including new awards of about $10 million. The grants cover all aspects of postsecondary improvement, and within the areas of retention and completion there are grants for, among other things, creating learning communities, reviewing remedial and introductory courses to find more effective approaches, and developing innovative methods of delivering support services. Dissemination efforts include a searchable project database on its Web site; four published volumes of promising practices (the most recent publication was in 2000); specific dissemination grants expressly aimed at replicating particularly promising practices for retention and completion; dissemination plans built into the actual grants; and annual meetings where project information is shared. Each grant has an evaluation component, and the Comprehensive Program is currently being reviewed for, among other things, the efficacy of these evaluation efforts. As policymakers and others consider what is necessary to ensure accountability in higher education, the issue of how to measure performance becomes more important. While some states have used graduation rates to promote accountability, such measures may not fully reflect an institution's performance. Graduation rates do not capture differences in mission, selectivity, programmatic offerings, or student learning outcomes. Nor do they account for another goal of higher education, increasing participation. In other words, a college or university could have a low rate of completion, but still be providing access. 
As policymakers consider ways to hold colleges and universities accountable for their performance, it may be possible to use multiple measures that capture an institution’s performance in regard to how well its students are educated through the use of student learning outcomes, in addition to its performance in graduating them. States, institutions of higher education, and Education are engaged in a variety of efforts to retain and graduate students. Education does have some efforts to evaluate and disseminate information related to retention and completion; however, it does not systematically identify and disseminate information on those practices that hold promise for increasing retention and graduation rates across all sectors of higher education. Such information could benefit colleges and universities that are looking for new approaches to better serve their students and seek to avoid duplicating unsuccessful efforts. As policymakers consider new ways to hold postsecondary institutions accountable for retaining and graduating their students, it becomes more important to widely disseminate promising practices in these areas. Having Education identify and disseminate promising practices in the areas of retention and graduation would help ensure that all colleges and universities have access to the same level of information and can readily draw on those practices they think might help them better serve their students. As Education moves forward with its plan to hold colleges and universities accountable for their performance in graduating their students, we recommend that the Secretary of Education consider multiple measures that would help account for other goals of higher education, such as increasing participation, as well as differences in mission, selectivity, and programmatic offerings of postsecondary institutions. 
Education should work with states and colleges to determine what would be most helpful for strengthening the accountability of institutions and ensuring positive outcomes for students. We also recommend that the Secretary of Education take steps to identify and disseminate information about promising practices in the areas of retention and graduation across all sectors of postsecondary education. In written comments on a draft of this report, the Department of Education agreed with our recommendations but had some concerns about certain aspects of the draft report. Education commented that we could have included trend data on, for example, whether retention and completion are increasing or decreasing. While such information might have been interesting to include, we were specifically focusing on the current status of college completion. Education suggested in its letter that we could have used its two BPS studies for such an analysis. It would not be appropriate to use these two studies for identifying trends because they tracked students for different lengths of time. For example, using the first BPS study—which tracked students for 5 years—Education reported that 53 percent of students who began at a 4-year institution in 1989-90 earned a bachelor's degree. Using the second BPS study—which tracked students for 6 years—we reported that 59 percent of students who began at a 4-year institution in 1995-96 earned a bachelor's degree. While the increase in graduation rates might have resulted from any number of factors, the most likely reason is that an additional year was included in the calculation. The Department correctly noted that we did not address student financial aid in our analysis. We have addressed this issue in our discussion of the report's objectives, scope, and methodology section (see app. I). 
With respect to Education’s comment about how the effects of being disadvantaged are accounted for in our analysis, we agree that performing a more sophisticated analysis to account for the indirect effects of being disadvantaged on completion may have yielded a more complete picture of college completion. However, our analysis was designed to provide overall descriptive information on completion rates while taking into account certain differences among students. Education had concerns that our report did not sufficiently recognize the role of its Graduation Rate Survey (GRS). While we did not directly discuss GRS, we did explain the legislative requirements regarding institutional reporting of graduation rates. Education developed GRS to help institutions comply with this requirement. Additionally, with respect to GRS, we sought clarification of Education’s statement that GRS is the basis for state efforts to track graduation rates; however, officials did not provide us with information that would support this statement. In looking at this issue, it is clear that the type of data states collect is different from the GRS data. Specifically, GRS collects only summary data from institutions on graduation rates, whereas by using data on individual students, the states we highlighted have the ability to not only calculate graduation rates but to track student transfers across the state. Furthermore, officials in two states we visited told us that they have had the ability to track individual students for over 10 years, long before information from the GRS would have been available—making it impossible for GRS to be the basis of these systems as Education suggested. We also believe that Education’s statement that we do not acknowledge the limitations of the state systems with respect to tracking student transfers is inaccurate. Our draft clearly stated that tracking is limited to student transfers within the state. 
Finally, with regard to Education’s concern that our report does not recognize its efforts to identify and disseminate information on retention and completion, we believe Education may have misunderstood our discussion of its efforts. We clearly highlight Education’s efforts to identify and disseminate information through studies on the factors that affect retention and completion. However, we conclude that Education does not systematically identify and disseminate information on those practices that hold promise for increasing retention and graduation rates across all sectors of higher education. Education also provided technical comments, which we incorporated where appropriate. Education’s comments appear in appendix IV. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. Copies will also be made available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8403. Other contacts and acknowledgments are listed in appendix V. You asked us to determine (1) the extent to which students—including those from lower socioeconomic backgrounds—who enroll in a 4-year college or university complete a bachelor’s degree and the factors that affect bachelor’s degree completion; (2) what states and 4-year colleges and universities are doing to foster bachelor’s degree completion and what is known about the effectiveness of these efforts; and (3) what the U.S. Department of Education is doing to foster bachelor’s degree completion. 
To determine the extent to which students—including those from lower socioeconomic backgrounds—who enroll in a 4-year college or university complete a bachelor’s degree and to identify the factors that affect bachelor’s degree completion, we analyzed Education’s 1995-96 Beginning Postsecondary Students (BPS) study. BPS is a longitudinal study that followed the retention and degree completion of students from the time they enrolled in any postsecondary institution over a 6-year period. It is based on a sample of students who were enrolled in postsecondary education for the first time in 1995-96 and participated in Education’s 1995-96 National Postsecondary Student Aid Study (NPSAS:96). NPSAS:96 consisted of a nationally representative sample of all students enrolled in postsecondary education during the 1995-96 academic year. Information for NPSAS:96 was obtained from more than 830 postsecondary institutions for approximately 44,500 undergraduate and 11,200 graduate and first-professional students. The sample of undergraduates represented about 16.7 million students, including about 3 million first-time beginning students, who were enrolled at some time between July 1, 1995 and June 30, 1996. This BPS study began with a sample of approximately 12,000 students who were identified in NPSAS:96 as having entered postsecondary education for the first time in 1995-96. Education followed up with these students via computer-assisted telephone interviews in both 1998 and 2001. In addition to obtaining data from students through these interviews, data were obtained from other sources, including institutions and the Educational Testing Service, which administers standardized tests, such as the SAT I and Advanced Placement tests. Education has published reports that provide information about student enrollment and the rates of persistence, transfer, and degree attainment for students. For our purposes, we analyzed a subset of these data. 
We included only students who in 1995-96 were enrolled in a 4-year institution or were enrolled at another type of institution but transferred to a 4-year institution at some point during the 6-year period. Our analysis excluded other types of students, such as community college students who did not transfer to a 4-year institution, because the focus of our study was on bachelor’s degree completion. We first grouped students into three categories: those who, after 6 years (1) had completed a bachelor’s degree; (2) had not completed a bachelor’s degree, but were still enrolled in a 4-year institution; and (3) had not completed a bachelor’s degree and were no longer enrolled in a 4-year institution. We then calculated the percentage of our population in each group overall and by various characteristics relating to personal background, academic preparation and performance, college attendance and work patterns, and social integration as shown in appendix II. We focused on the effect of the various characteristics mentioned above on whether students completed a bachelor’s degree by the end of the 6-year period. We did not include student aid variables in our analysis. Resource constraints and the timing of the release of the BPS data made it difficult to examine the effect of student aid variables given their complexity and year-to-year variation. We first examined the independent effect of each characteristic on completion without controlling for differences among individuals. Each of these independent effects, with the exception of delaying entry into college, was statistically significant. However, because of the strong relationships among these characteristics, it is more accurate to explain the variance in completion rates using multivariate analysis, which tests the effect of each characteristic on completion while controlling for the effects of all the other characteristics. 
Logistic regression is a standard procedure used to estimate the effect of a characteristic on a particular outcome. The model uses odds ratios to estimate the relative likelihood of completing a bachelor’s degree within 6 years of beginning postsecondary education. The odds ratios for various characteristics are shown in appendix III. For a particular characteristic, if there were no difference between students who completed within 6 years and those who did not, the odds would be equal, and the ratio of their odds would be 1.00. The more the odds ratio differs from 1.00 in either direction, the larger the effect on completion. For example, an odds ratio below 1.00 indicates a lower likelihood of completion for a student with that particular characteristic, all else being equal. The odds ratios were generally computed in relation to a reference group; for example, if the odds ratio refers to being a dependent student, then the reference group would be independent students. Some characteristics, such as grade point average and age, are continuous in nature. In these cases, the odds ratio can be interpreted as representing the increase in the likelihood of completing college given a 1-unit increase in the continuous variable. An odds ratio that is statistically significant is denoted with the superscript a. The characteristics we used in our model explain 38 percent of the variance in bachelor’s degree completion. Because the estimates we use in this report are based on survey data, there is some sampling error associated with them. This occurs because observations are made on a sample of students rather than the entire student population. All percentage estimates we present from the BPS data have sampling errors of ±3 percentage points or less, unless otherwise noted. Furthermore, tests of statistical significance were performed using software to take into account the complex survey design and sampling errors. 
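To make the odds-ratio logic above concrete, the following sketch computes an odds ratio from a single hypothetical 2x2 table of completion outcomes. The counts and the dependency-status grouping are invented for illustration; they are not BPS figures, and the actual analysis estimated odds ratios from a full logistic regression model rather than one table at a time.

```python
# Minimal sketch of an odds ratio for a binary characteristic,
# using made-up counts (not the actual BPS data).

def odds(completed, not_completed):
    """Odds of completing = completers divided by non-completers."""
    return completed / not_completed

# Hypothetical 2x2 table: dependency status vs. 6-year completion.
dependent = {"completed": 600, "not_completed": 400}
independent = {"completed": 300, "not_completed": 450}  # reference group

odds_dependent = odds(dependent["completed"], dependent["not_completed"])        # 1.5
odds_independent = odds(independent["completed"], independent["not_completed"])  # ~0.67

# Odds ratio relative to the reference group: a value above 1.00 means
# the characteristic is associated with a higher likelihood of
# completion; exactly 1.00 would mean no difference between groups.
odds_ratio = odds_dependent / odds_independent
print(round(odds_ratio, 2))  # 2.25
```

In the full multivariate model, each such ratio is estimated while holding the other characteristics constant, which is why it can differ from the ratio computed from a raw two-way table like this one.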
In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the reliability of data self-reported by students, or in the types of students who do not respond can introduce unwanted variability into the survey results. To identify what states and 4-year colleges and universities are doing to foster bachelor’s degree completion, we conducted a survey of the state higher education executive officer agencies in all 50 states, the District of Columbia, and Puerto Rico and visited 5 states and 11 public colleges and universities within those states. We received completed questionnaires from 48 of the 52 states and territories we surveyed, a response rate of 92 percent. We took steps in the development of the questionnaires, the data collection, and the data editing and analysis to minimize nonsampling errors. For example, we pretested the questionnaire with 3 states to refine the survey instrument, and we called individual respondents, if necessary, to clarify answers. We conducted site visits in Florida, Maryland, Oregon, Texas, and Virginia. We chose states and colleges to visit based upon our discussions with experts and preliminary information from our survey. Additionally, we selected these states and institutions based on geographic dispersion and the variety of efforts reported to us by experts and in the survey. In each state, we met with state higher education officials to discuss college completion in general and specific efforts taking place in their states. In each of these states, we also visited colleges that were viewed by state officials as doing particularly well in working with their students to help them complete a bachelor’s degree. We met with college officials to discuss their efforts to improve retention and help students attain a bachelor’s degree. 
To identify what Education is doing to foster bachelor’s degree completion, we talked with Education officials and reviewed program and planning documents. We conducted our work between April 2002 and May 2003 in accordance with generally accepted government auditing standards. Key variables in our analysis (see app. II) were defined as follows. Dependency status refers to the student’s dependency status for federal financial aid during 1995-96; students age 23 or younger were assumed to be dependent unless they met the independent student criteria, including being married or having legal dependents other than a spouse. The SAT I combined score was derived as either the sum of SAT I verbal and mathematics test scores or the ACT Assessment (American College Testing program) composite score converted to an estimated SAT combined score using a concordance table. The primary source of these data was a match with the SAT files from the Educational Testing Service and the ACT test files of the American College Testing program, supplemented by institution-reported and student-reported information. The score quartiles (lowest: below 800; middle: 800 to 1,100; highest: above 1,100) were derived from the distribution of test scores among the BPS cohort sample students. Delayed enrollment indicates whether a student delayed enrollment in postsecondary education, as determined by receipt of a high school diploma prior to 1995 or reaching the age of 20 before December 31, 1995. Other characteristics included grade point average (above 3.0; 2.0 to 2.9; below 2.0); attendance pattern (full-time versus part-time or a mix of part- and full-time); hours worked per week (none; less than 10; 10 to 19; 20 to 31; or 32 or more, considered full-time); and social integration (participation in study groups). In addition to those named above, Rebecca Ackley, Avrum Ashery, Patrick diBattista, Kopp Michelotti, John Mingus, Luann Moy, Doug Sloane, and Wendy Turenne made important contributions to this report.
Because of concerns that not enough students who start college are completing a bachelor's degree, we examined (1) the extent to which students who enroll in a 4-year college complete a bachelor's degree and the factors that affect completion; (2) what states and 4-year colleges and universities are doing to foster bachelor's degree completion; and (3) what the Department of Education (Education) is doing to foster degree completion. More than half of all students who enrolled in a 4-year college completed a bachelor's degree within 6 years. Students were less likely to complete if neither parent had completed a degree, they were black, they worked 20 or more hours per week, or they transferred to another college. Students had a greater likelihood of completing if they were continuously enrolled, attended full-time, or had a more rigorous high school curriculum. After controlling for other factors, GAO found that disadvantaged students were no less likely to complete a degree than other students. However, students from disadvantaged backgrounds are less likely to attend college in the first place. States are beginning to hold colleges accountable for retaining and graduating their students, and Education has been discussing this with the higher education community. Many states are publishing retention and graduation rates for their colleges, and some have tied performance in these areas to funding. According to Education, providing information on colleges' retention and graduation performance can help prospective students make informed decisions. However, the measure used by Education may not fully reflect an institution's performance because institutional goals and missions are not captured in the measure. In its strategic plan, Education has identified goals to reduce gaps in college completion and increase overall completion. 
It also has some evaluation and dissemination efforts related to retention and completion; however, these efforts do not systematically identify and disseminate promising retention and graduation practices to help states and institutions.
TVA was established in 1933 to provide flood control, navigation, and electric power in the Tennessee Valley region. As that area has grown in both population and economic activity, TVA customers’ use of electricity has grown and is expected to keep growing. TVA estimates that demand for its electricity will increase about 1.7 percent annually through 2010. To meet its customers’ demand for electricity, TVA generates electricity not only at its 11 coal-fired plants (consisting of 59 generating units) but also at three nuclear power plants (five units), 29 hydroelectric dams (109 units), one pumped storage site (four units), and five sites with combustion turbines (64 units). (See fig. 1.) It also generates power from landfill gas, solar, and wind projects, and it purchases power from others. From the 30,365 megawatts of generating capacity available from these sources, TVA generated about 156 billion kilowatt-hours of power in fiscal year 2001. It also purchased roughly 9.9 billion kilowatt-hours of power. Of its power supply, 60 percent came from coal, 27 percent from nuclear, 6 percent each from hydropower and power purchases, and 1 percent from combustion turbines. (See fig. 2.) The share of electricity generated by burning fossil fuels has implications for the environment. Burning fossil fuels produces SO2 and NOx gases, and the Environmental Protection Agency estimates that fossil fuel burning from utilities accounted for 67 percent of the nation’s SO2 emissions and 27 percent of its NOx emissions in 1999. Both gases can be transported over long distances following the patterns of air movements. SO2 emissions contribute to the production of airborne sulfate particles that contribute to acid rain, which can harm waters, forests, and materials. In addition, these particles can block the transmission of light, resulting in haze in urban areas and the degradation of scenic vistas in many national parks. 
NOx is also a source of acid rain and, through chemical reactions in the atmosphere with other pollutants, leads to the formation of ground-level ozone, the principal component of smog. Smog can cause chronic human health effects, particularly respiratory problems, as well as harm plants. TVA’s choices in generating power are constrained by laws, regulations, and internal policies. For example, the Clean Air Act, as amended, limits emissions of SO2 and NOx from coal-fired power plants. Moreover, the Tennessee Valley Authority Act, which established TVA, provides that the generation of power from hydroelectric units is a lower priority than navigation and flood control. Finally, an internal TVA policy limits the time period when TVA can draw down the lakes (reservoirs) that it manages for flood control and in the process generate hydropower. To meet its customers’ increasing demand for electricity, TVA can upgrade its existing plants, construct new plants, purchase power from others, or, as an alternative to finding additional supply sources, provide incentives to its customers to reduce or shift their demand for electricity, an approach called “demand-side management.” The Department of Energy defines demand-side management as actions taken on the customer’s side of the meter to change the amount or timing of energy consumption and identifies several types of programs. Energy efficiency programs involve the use of technologies that reduce total energy use, during both peak and off-peak periods, such as energy-efficient lighting, appliances, and building equipment. Peak load reduction programs focus on reducing load during periods of peak power consumption on a utility’s system. These programs can involve the use of technologies that smooth out the peaks (called “peak shaving”) in energy demand. Such technologies include control systems, such as switches attached to heating, cooling, and ventilation systems that allow the systems to be turned off during peak load times. 
They can also include rate-schedule programs where utilities structure their rates to encourage customers to modify their pattern of energy use. According to the Department, utility funding for demand-side management programs in the United States declined nationally between 1994 and 1998, due in large part to increased competition and uncertainties regarding electricity deregulation. Funding for these programs leveled out in 1999 and slightly increased in 2000 as concerns over electric supply shortages in California led many utilities and state regulatory agencies to increase their emphasis on demand-side management. TVA can benefit from demand-side management, especially reducing peak loads, because electricity use varies substantially within a 24-hour period. For example, on August 17, 2000, an unusually hot day, TVA customers used about 67 percent more electricity during the hour of highest consumption (4 p.m.) than during the hour of lowest consumption (5 a.m.). TVA used its various energy sources in a sequenced manner to supply this electricity. (See fig. 3.) Nuclear facilities provided power steadily throughout the day, while coal facilities provided power fairly consistently—somewhat lower at night and higher during the day. As demand increased during the afternoon, TVA increased the use of hydroelectric power and purchased power from other utilities. Finally, during the hottest, mid-day hours, TVA used its combustion turbines. Even though TVA’s customers used more electricity on that day than on any other in its history, TVA officials told us that the sequencing of power sources was standard practice. TVA projects that its SO2 and NOx emissions in 2005 will fall 28 percent and 25 percent, respectively, below its 2000 levels, despite a planned addition of 3,086 megawatts of generating capacity. TVA projects that its SO2 emissions will decline as it increasingly uses coal with a lower sulfur content at some of its coal-burning power plants. 
TVA projects that its NOx emissions will decline as it installs more control devices at its coal-burning plants. Moreover, TVA plans to increase its generating capacity largely from sources—other than coal-burning plants—that generally emit less of these pollutants. Aside from constructing new generating capacity, TVA also plans to continue purchasing peak power in a range between 1,500 and 3,000 megawatts annually during the 2001 to 2005 period. (The emissions associated with purchased power—equivalent to 6 percent of TVA’s power supply—are not included in TVA’s emissions data.) Finally, TVA estimates that its demand-side management programs will offset new peak demand by 396 megawatts between fiscal year 2001 and 2005. TVA projects that the SO2 emissions from its coal-burning plants will decline from 727,000 tons in 2000 to 498,000 tons in 2003, before rising to 525,000 tons in 2005. (See fig. 4.) According to a TVA official, the expected increase after 2003 is directly related to planned increases in generating capacity at its coal plants. TVA attributes overall projected declines in SO2 emissions to the continued switching to coal with lower sulfur content at three plants. Specifically, the lower sulfur coal is 0.5 to 0.6 percent sulfur, about half the sulfur content of the coal that is currently burned at these units. Moreover, according to the same official, even though SO2 emissions will increase slightly from 2003 to 2005, the average emissions rate will remain unchanged during this period. Beyond 2005, TVA has committed to further reduce SO2 emissions. In October 2001, TVA announced that it would install five additional scrubbers to limit SO2 emissions at its coal-burning plants between 2006 and 2010. According to a senior TVA official, annual SO2 emissions from TVA coal-burning plants are likely to fall to around 400,000 tons by 2010. TVA’s projections show a steady decline in its NOx emissions, from 287,000 tons in 2000 to 216,000 tons in 2005. 
TVA attributes this projected decline to the planned installation of “selective catalytic reduction” systems—which remove nitrogen oxides from the exhaust gases—at some of its generating units at its coal-burning plants. TVA’s first such system began operating in 2000. According to TVA, by spring 2005 it will have installed 18 of these systems, or similar systems, which will control NOx emissions on 25 of its 59 generating units. Moreover, TVA expects to make even sharper cuts in its NOx emissions during the summer “ozone season” that extends from May through October. Ozone levels are higher during these months because emissions of NOx and natural hydrocarbons are higher and there is more sunlight, both of which are needed for the formation of ozone, and because higher temperatures speed up the chemical reactions. TVA expects its ozone-season NOx emissions to fall from 118,000 tons in 2000 to 43,000 tons in 2005. (Additional information on TVA’s SO2 and NOx emissions from 1974 through 2010 is included in apps. I and II of this report, respectively.) Of the 3,086 megawatts of additional capacity that TVA plans to add between 2001 and 2005, more than half (1,658 megawatts) will come from “peaking” units, which are used only during the parts of the day when demand spikes. The rest of the new capacity (1,428 megawatts) will be base load units, which are used throughout the day. Most of this increased capacity will be generated through hydropower, natural gas, nuclear power, and other noncoal sources. (See fig. 5.) To increase its base load capacity, in December 2001 TVA began purchasing power from a new 440-megawatt coal-burning lignite power plant in Mississippi. Although TVA does not own the plant, it purchases all of the facility’s output. 
To further increase its base load capacity, TVA plans both to upgrade existing units and to build new capacity: constructing a 500-megawatt, natural gas-fired, combined cycle plant in Tennessee, to begin operating in 2003; increasing the base load generating output at the Browns Ferry, Alabama, and Sequoyah, Tennessee, nuclear plants, between 2003 and 2005, by 290 megawatts; increasing turbine efficiency at three of its coal-burning plants between 2001 and 2005, adding 153 megawatts of capacity; and increasing its acquisition of “green power” (from landfill gas, solar, and wind sources) to 45 megawatts in 2005. To increase its own peak load capacity, TVA plans to add 1,336 megawatts of additional combustion turbine capacity, primarily in 2001 and 2002, at facilities in Mississippi and Tennessee; 310 megawatts of capacity between 2001 and 2005 by continuing to modernize its hydropower and pumped storage facilities; and 12 megawatts of peak capacity by constructing a battery storage plant in Columbus, Mississippi. Finally, TVA plans to meet future needs by continuing to purchase power to meet peak-time demand. These purchases are expected to remain in the range between 1,500 and 3,000 megawatts through 2005. Between fiscal years 1996 and 2000, demand-side management programs reduced TVA’s peak load by 204 megawatts (about 41 megawatts a year, or roughly equivalent to 1/10th of 1 percent of its overall capacity). Two programs accounted for these savings: the Energy Right Program, which promotes the installation of energy-efficient heat pumps and other electric appliances; and the Cycle and Save Program, which gives residential customers a bill credit for allowing TVA to switch off their water heaters and air conditioners during peak demand periods. TVA reported no savings from its rate-schedule program for commercial and industrial customers. 
Due in large part to a new program introduced in mid-2001, TVA plans to achieve a cumulative peak load reduction of 396 megawatts for the fiscal year 2001 through 2005 period (about 80 megawatts a year). Finally, TVA is studying ways to expand its demand-side management programs and increase their impact. Each year tens of thousands of customers participate in TVA’s demand-side management programs. Such programs involve all major types of customers—commercial, industrial, and residential. Moreover, they are aimed at reducing electricity use both year-round and during peak demand periods. According to TVA, energy-efficiency and load-reduction programs saved 97 megawatts in fiscal year 2000, and a cumulative total of 204 megawatts from fiscal years 1996 through 2000. Furthermore, TVA expects these programs to result in an additional 120 megawatts of savings in fiscal year 2005 and a cumulative total of 396 megawatts from fiscal year 2001 through 2005. According to TVA, its peak load reduction impacts are expected to increase from 0.7 percent of peak load in fiscal year 2000 to 2 percent of peak load in fiscal year 2005. TVA attributes 74 percent of the energy savings for the fiscal year period 1996-2000 to its Energy Right Program. This program offers incentives to encourage contractors, developers, and homeowners to install energy-efficient electric appliances, such as heat pumps and water heaters. (TVA attributes the remaining 26 percent to a direct load control program, discussed below.) The Energy Right Program includes components for new homes, manufactured homes, heat pumps in existing homes, and self-audits by residential customers. (See fig. 6.) The Energy Right Program, which has the greatest number of participants among TVA’s demand-side management programs, is designed to reduce residential customers’ consumption both year-round and during peak-demand times through increases in energy efficiency. 
In fiscal year 2000, 37,182 residential customers became participants in the program, a substantial increase from the 15,481 residential customers who became participants in fiscal year 1996. TVA anticipates that an additional 58,900 new participants will join the program in fiscal year 2005. Similarly, TVA expects the program’s impacts to increase by fiscal year 2005. The reduction in year-round consumption, which stood at 23,565 megawatt-hours in fiscal year 1996 and 54,129 megawatt-hours in fiscal year 2000, is expected to reach 83,726 megawatt-hours in fiscal year 2005. Also, the reduction in peak load demand, which was 19 megawatts in fiscal year 1996 and 43 megawatts in fiscal year 2000, is expected to reach 62 megawatts in fiscal year 2005. According to TVA’s most recent estimate, the program’s overall effect on peak demand varied by season. For example, in 1996, the program resulted in an annual decrease of 19.4 megawatts in peak summer demand and an annual increase of 31.3 megawatts in winter demand, by providing incentives to developers and others to install appliances powered by electricity rather than natural gas or another energy source. According to TVA, such programs help improve the overall efficiency of its system and ultimately result in lower costs to consumers. TVA’s Cycle and Save program allows TVA to turn off certain appliances in participating households for short periods when demand is high. TVA estimates that its Cycle and Save Program for residential customers accounted for about 26 percent of the savings realized from fiscal year 1996 through 2000. On an annual basis, the program’s savings outpaced those attributed to the Energy Right Program. However, because the Cycle and Save Program’s benefits are not cumulative, the Energy Right Program accounted for 74 percent of the cumulative savings. 
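The distinction drawn above between cumulative efficiency savings and non-cumulative load-control savings can be sketched with a few invented annual figures (these numbers are illustrative only, not TVA's actual program data):

```python
# Efficiency measures (like Energy Right heat pumps) keep saving power
# after they are installed, so each year's new savings add to prior
# years'. Load control (like Cycle and Save) reduces peak demand only
# while it operates, so its impact does not accumulate.
# All figures below are hypothetical, for illustration only.

efficiency_new_mw = [5, 6, 7, 8, 9]       # new efficiency savings added each year
load_control_mw = [10, 10, 10, 10, 10]    # load-control capability in each year

# Year-5 impact of the efficiency program: the running total of all
# installations still in service.
efficiency_cumulative = sum(efficiency_new_mw)   # 35 MW

# Year-5 impact of load control: just that year's capability.
load_control_year5 = load_control_mw[-1]         # 10 MW

# In any single year, load control (10 MW) can outpace that year's new
# efficiency savings, yet the cumulative total favors efficiency.
print(efficiency_cumulative, load_control_year5)
```

This is why, as noted above, the Cycle and Save Program could outpace the Energy Right Program on an annual basis while still accounting for the smaller share of cumulative savings.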
TVA reduced the incentives it offered distributors to participate in the Cycle and Save Program and later restricted the number of distributors that may participate in the program. According to the program manager, TVA determined that the Cycle and Save Program was not cost-effective and allowed it to decline over time. As a result, peak-time consumption was presumably higher than it would have been if TVA had not taken these actions. For example, TVA shifted to the distributors the cost of purchasing, installing, and maintaining the switches that allow certain appliances to be cycled-off. While TVA initially paid for all switches installed on appliances, including air conditioners, standard water heaters, and storage water heaters, it currently pays for the switches only on storage water heaters. TVA pays only for these switches because storage water heaters are cycled-off for a longer period of time than air conditioners and standard water heaters, thereby providing enough peak load savings to justify their costs. Between 1992 and 1998, TVA reduced the amount of the monthly credit provided to participating distributors. It reduced the credit for storage water heaters from $5.70 to $5.50; for standard water heaters from $5.25 to $4.75; and for air conditioners and heat pumps from $1.40 to $1.15 (dollar figures not adjusted for inflation). TVA estimates that 30 percent of the radio-controlled switches that allow the water heaters or air conditioners to be cycled off are inoperable. TVA currently allows only 14 of its 158 distributors to participate in the program. According to the TVA program manager, as many as 30 distributors participated in the mid-1980s, but this number declined significantly after TVA eliminated incentives for distributors to participate in the program. The manager further noted that despite TVA’s changes in the program, several non-participating TVA distributors continue to express interest in participating in the program. 
TVA offers rate discounts to its commercial and industrial customers who give TVA permission to interrupt their power during periods of peak demand (called “interruptible power contracts”). According to TVA, 51 of the 62 large federal and industrial customers it serves directly have such contracts, as do 345 of its distributor-served commercial and industrial customers. TVA estimates that these contracts give it the ability to curtail up to 1,800 megawatts of power at times of peak demand. However, TVA seldom uses this tool. Between 1996 and 2000 TVA curtailed power under these contracts on only three occasions, and did not measure the savings it accrued. Moreover, according to TVA, the customers enrolled in these programs may reduce their consumption by 300 or more megawatts in response to price increases. TVA projects that its demand-side management programs will save nearly twice as much in the fiscal year 2001 through 2005 period as they did in the previous 5-fiscal year period. Specifically, it projects cumulative savings of 396 megawatts through fiscal year 2005, in contrast to the 204 megawatts saved through fiscal year 2000. The higher level of savings stems from several factors: increased participation in its long-standing programs, and the introduction of a new “buyback” program for large commercial and industrial customers in June 2001. Specifically, this program allows TVA to buy power back from its large commercial and industrial customers whenever it is economical for these customers to curtail their power usage or when they can generate power from on-site sources. TVA expects that the program will reduce peak demand by an average of about 27 megawatts annually between fiscal years 2001 and 2005, or 133 megawatts overall for the fiscal year 2001 through 2005 period. In October 2001, TVA began a study of its demand-side programs, which it expects to complete in early 2002. 
According to the TVA project manager, the study’s goal is to identify ways to increase cumulative savings to 500 megawatts by the end of fiscal year 2003—75 more megawatts than the current estimate of 425 megawatts for fiscal year 2003. The study will consider a range of options, including real-time pricing, rebates to consumers who purchase energy-efficient appliances (such as air conditioners and refrigerators), and incentives for industrial and commercial customers to install high-efficiency lighting. Some comparable utilities have gone further than TVA in implementing demand-side management programs that are similar to TVA’s programs and in operating other programs. In an effort to determine how other utilities are approaching demand-side management, we contacted four utilities with such programs: the Bonneville Power Administration, which sells wholesale electricity, primarily generated by hydropower, in Idaho, Oregon, Washington, and a portion of Montana; Florida Power and Light, a utility serving a large residential population in Florida; Georgia Power, which serves retail customers in Georgia; and Puget Sound Energy, which sells electricity to retail consumers in Washington state. The utilities we selected serve different sections of the country and face differing regulatory environments. For example, Florida Power and Light operates in a regulated environment and recovers expenses through an energy conservation cost recovery plan run by the Florida Public Service Commission. As compensation for demand-side management expenditures, Florida Power and Light requested reimbursement of more than $158 million from the Commission in 2000. Unlike TVA, the Bonneville Power Administration—an agency of the U.S. Department of Energy—offers a credit program to wholesale power customers who take action to further conservation and renewable resource development in the region. 
Bonneville offers utilities and directly served customers a rate reduction of one-twentieth of a cent per kilowatt-hour to develop their own conservation and renewables programs. Like TVA, both Florida Power and Light and Georgia Power have load management programs for residential customers. Florida Power and Light residential customers receive a bill credit if they allow the utility to switch off their air conditioners, hot water heaters, and pool pumps at peak times. The utility estimates that about 657,000 (about 19 percent) of its residential customers participate in its load control program, as contrasted with about 2 percent of TVA’s residential customers. In addition, Florida Power and Light has 14,285 businesses enrolled in a similar program for air conditioners. Florida Power and Light estimated that peak load savings from its program amounted to 941 megawatts in 2000. Similarly, Georgia Power operates a program that cycles off power to residential air conditioners. The program, begun in 1997, is projected to reduce peak demand by 44 megawatts in 2004. Also, like TVA, Georgia Power has 500 megawatts of interruptible power available. Though interruptions are rare, Georgia Power typically curtails an average of 350 megawatts when it does interrupt power. In the summer of 2000, it interrupted power for a total of 12 hours over 3 days. Georgia Power and Puget Sound Energy have experience with time-of-use pricing programs—Georgia Power’s program involves commercial and industrial customers, while Puget Sound Energy’s involves residential customers. Georgia Power started a real-time pricing program in 1992, and it has become the largest such program in the country, according to the Electric Power Research Institute. About 1,600 large commercial and industrial customers, or about 25 percent of such customers, participate in the program. 
In response to peak demands for power, Georgia Power can initiate a pricing “event.” The company uses e-mail to notify participating customers, a day or an hour ahead of time, that their prices during the event will be based on the marginal cost of producing power. During such an event, prices have two components: (1) a baseline charge and (2) either a marginal charge or credit, depending on how the customer’s energy use varies from its historic energy use. In August 1999, when prices spiked to more than $1 per kilowatt-hour (15 times the average price), customers reduced their demand by 800 megawatts. During typical peak events, customers reduce demand by an average of 300 megawatts. During 2001, Puget Sound Energy piloted a time-of-use program for about 300,000 of its 1.4 million residential customers in order to encourage them to use less electricity at peak demand times. It established different rates for 4 time periods during the day—from a low of 6.5 cents per kilowatt-hour at night to 9 cents during the mid-morning and evening hours. Moreover, Puget Sound Energy’s state-of-the-art automated meter reading system allowed its customers to log on to its website and track their energy consumption throughout the day. The utility found that the customers involved in the “informational pilot program” (billed on the standard rate but provided with consumption information via the internet), on average, shifted about 5 percent of their consumption from peak to off-peak hours. Preliminary results indicate that those actually being billed on the time-of-use rate reduced their overall consumption by 6 percent. Subject to state regulatory approval, Puget Sound Energy said it plans to introduce the program to all of its residential customers in 2002. Bonneville Power has a demand exchange program for large industrial and commercial customers who are willing to curtail their consumption depending on electricity prices. 
Program participants are notified via the internet of hourly, 1-day-ahead, and 2-day-ahead prices that are associated with peak load events. Customers may respond, via computer, noting their willingness to curtail their use of power at the posted prices.

While TVA plans to substantially reduce its SO2 and NOx emissions by three means—installing control devices, using lower-sulfur coal, and relying largely on noncoal sources for additional capacity—it could reduce emissions even more by more aggressively pursuing an existing fourth option—demand-side management. However, TVA’s demand-side management programs are generally limited in scope, and they contribute little to moderating future demand. As a result, to meet its customers’ growing demand for power, TVA will need to generate more power itself, or purchase more power from others, which will likely produce more air emissions. In contrast, certain other utilities have realized greater savings from their demand-side management programs. TVA’s recently commissioned study of opportunities to increase the short-term impact of its demand-side management programs may serve as a useful first step. However, TVA still needs to assess the potential contributions of demand-side management over a longer time horizon. TVA should reevaluate the design of its current programs and evaluate opportunities for adopting proven ideas from other utilities. Accordingly, we are recommending that the TVA Chairman (1) evaluate the structure and effectiveness of its current programs; (2) review the longer-term potential applicability of other programs to its power system; and (3), as appropriate, implement demand-side management practices.

We provided a draft of this report to TVA for review and comment, and received a letter from the Interim Vice President for Governmental Relations (see app. III). 
He said that TVA was evaluating its own demand-side management programs, including identifying potential opportunities, researching programs offered by other utilities, and analyzing the cost effectiveness of potential programs, all of which are consistent with our recommendations. In addition, he provided technical comments, which we have incorporated in the report as appropriate.

To determine TVA’s plans for meeting future demand for electricity while minimizing emissions of SO2 and NOx and to describe the scope and impact of TVA’s demand-side management efforts, we interviewed officials from TVA and reviewed studies and other documents prepared by the Department of Energy’s Energy Information Administration and TVA. In addition, we interviewed three TVA distributors that participate in TVA’s demand-side management programs in order to hear their opinions on the programs’ strengths and weaknesses. We also contacted experts at five non-governmental organizations—the American Council for an Energy Efficient Economy, Edison Electric Institute, Electric Power Research Institute, Regulatory Assistance Project, and Southern Alliance for Clean Energy. To describe the demand-side activities of other utilities, we contacted officials from, and reviewed studies and other documents prepared by, the Edison Electric Institute, the Electric Power Research Institute, and four utilities: the Bonneville Power Administration, Florida Power and Light, Georgia Power, and Puget Sound Energy. We selected these utilities for their geographic dispersion, diverse customer bases, and reputation for undertaking noteworthy demand-side management efforts. These utilities are not necessarily representative of other utilities in this country. We conducted our review between July 2001 and February 2002 in accordance with generally accepted government auditing standards. 
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 14 days after the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member, Subcommittee on Legislative, House of Representatives Committee on Appropriations; Senator Fred Thompson; Representative Zach Wamp; the TVA Chairman; the EPA Administrator; and other interested parties. We will make copies available to others upon request. Questions about this report should be directed to me or David Marwick at (202) 512-3841. Key contributors to this report were Richard A. Frankel, Timothy Minelli, and Richard Slade.

The planned reduction in TVA’s SO2 emissions from 2001 to 2010 continues the trend over the previous quarter-century, as shown in figure 7. TVA’s emissions dropped from 2,212,000 tons in 1974 to 727,000 tons in 2000, and are expected to drop to 406,000 tons in 2010. This represents an 82-percent decrease over the entire period. TVA’s two primary means for reducing SO2 emissions from its coal-burning plants are (1) installing scrubbers that remove sulfur from smokestack gases and (2) decreasing the sulfur content of the coal it burns to generate electricity. Between 1974 and 1995, when TVA reduced its annual emissions from 2,212,000 tons to 876,000 tons, there were notable decreases in 1978, 1982, 1984, and 1995. These decreases reflect the installation of scrubbers at TVA’s two largest plants (Cumberland in Tennessee and Paradise in Kentucky), as well as at Widow’s Creek in Alabama, in those years. Between 1995 and 2000, TVA further reduced its annual emissions to 727,000 tons, without adding any more scrubbers, by switching to lower-sulfur coal. Over those 5 years, TVA lowered the average sulfur content of its coal purchases from 2.26 percent to 1.88 percent. This decrease of about 17 percent is roughly equal to the proportional decline of SO2 emissions during that period. 
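The percentage relationships cited above can be checked with simple arithmetic; the following short sketch uses the report's figures:

```python
# SO2 emission figures cited in the report, in tons per year.
emissions = {1974: 2_212_000, 1995: 876_000, 2000: 727_000, 2010: 406_000}

# Decline over the entire 1974-2010 period (the report cites 82 percent).
total_decline = 1 - emissions[2010] / emissions[1974]
print(f"1974-2010 decline: {total_decline:.0%}")  # 82%

# The drop in average sulfur content (2.26% -> 1.88%) roughly matches the
# proportional emissions decline from 1995 to 2000, as the report notes.
sulfur_decline = 1 - 1.88 / 2.26
emissions_decline = 1 - emissions[2000] / emissions[1995]
print(f"Sulfur content decline: {sulfur_decline:.0%}")          # 17%
print(f"Emissions decline, 1995-2000: {emissions_decline:.0%}")  # 17%
```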
Between 2000 and 2010, TVA plans to use both strategies to further reduce its SO2 emissions. Through 2005, TVA plans to reduce its annual emissions to 525,000 tons by continuing to switch to lower-sulfur coal. Beyond 2005, TVA plans to further reduce its emissions by installing five scrubbers on 12 units at four coal-fired plants. This will increase to 60 percent the share of TVA’s coal-fired capacity operating with scrubbers. According to TVA, in 2001, TVA emitted SO2 at a rate of 1.18 pounds per million British thermal units of fuel energy. TVA expects the rate will decline to below 0.8 pounds per million units in 2010. This remains above the rate that plants considered new sources are required to meet, which is about 0.3 pounds per million British thermal units.

TVA’s planned reduction in NOx emissions at its coal plants from 2001 to 2010 continues the trend that began after 1995, when emissions reached 530,000 tons. (See fig. 8.) In that year, phase one of the Acid Rain Program (authorized by title IV of the Clean Air Act Amendments of 1990) started and TVA began modifying its coal plants to reduce their NOx emissions. By 2000, when the program’s second phase began, TVA’s annual NOx emissions had fallen to 285,000 tons. In that year, TVA’s first selective catalytic reduction system went into operation at its Paradise, Kentucky, coal plant. According to TVA, by spring 2005, it will have installed 18 selective catalytic reduction systems, or similar systems, on 25 generating units at 7 of its coal plants. TVA projects that, once these systems are installed, the NOx emissions from its coal plants will fall to 215,000 tons in 2005. NOx emissions shown in figure 8 reflect no additional controls beyond the 18 systems.
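A similar quick check applies to the emission-rate and NOx figures above (report figures; the arithmetic is illustrative only):

```python
# SO2 emission rates in pounds per million Btu, from the report.
rate_2001, rate_2010, new_source_rate = 1.18, 0.8, 0.3
print(f"Expected 2010 rate is still about {rate_2010 / new_source_rate:.1f}x "
      "the new-source rate")

# NOx emissions from TVA's coal plants, in tons per year.
nox_tons = {1995: 530_000, 2000: 285_000, 2005: 215_000}
drop = 1 - nox_tons[2005] / nox_tons[1995]
print(f"Projected NOx reduction, 1995-2005: {drop:.0%}")  # 59%
```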
The Tennessee Valley Authority (TVA) relied on its 11 coal-burning plants to supply 60 percent of its electric power in fiscal year 2001. These plants account for almost all of TVA's emissions of two key air pollutants--sulfur dioxide (SO2), which has been linked to reduced visibility, and nitrogen oxides (NOx), which contribute to the formation of harmful ozone. To meet an increase in demand of 1.7 percent annually through 2010, TVA estimates that it will need to expand its current generating capacity of 30,365 megawatts by 500 megawatts annually. Building new generating capacity can produce more emissions, which raises environmental concerns. To lessen the need for new capacity, TVA and other electricity suppliers promote the efficient use of electricity through "demand-side management" programs, which seek to reduce the amount of energy consumed or to change the time of day when it is consumed. Even though TVA intends to increase its capacity to generate electricity through 2005, it also expects to reduce its SO2 and NOx emissions during the same time period, primarily by burning lower-sulfur coal, installing devices to control emissions at its existing plants, and relying on fuels other than coal for new capacity. Although TVA's demand-side management programs have allowed customers to cut their electrical consumption, these programs have made only modest contributions to reducing peak-time demand. TVA has limited the scope of its key program to reduce peak-time consumption by residential customers because TVA believes the program is not cost-effective. TVA projects that its demand-side programs will produce nearly twice as much in savings between 2001 and 2005 as was achieved in the previous five years. Other large utilities have more fully implemented the types of programs that TVA now has in place and have also implemented a greater array of demand-side management tools. 
These programs have involved a much higher proportion of their residential customers and established different prices for electricity used during different times of the day.
WMATA was created in 1967 by an interstate compact that resulted from the enactment of identical legislation by Virginia, Maryland, and the District of Columbia, with the consent of Congress. The Compact also created the Washington Metropolitan Area Transit Zone, shown in figure 1, where WMATA provides its transit services. The zone includes the District of Columbia; the cities of Alexandria, Falls Church, and Fairfax; the Virginia counties of Arlington, Fairfax, and Loudoun; and the Maryland counties of Montgomery and Prince George’s. WMATA is unusual among transit agencies in that it was created by an interstate compact; moreover, it has unique demands placed on it because it serves the national capital area and the federal government, as we discussed in a July 2005 testimony. WMATA provides transportation to and from work for a substantial portion of the federal workforce, and federal employees’ use of WMATA’s services is encouraged by General Services Administration guidelines that instruct federal agencies to locate their facilities near mass transit stops whenever possible. WMATA also accommodates increased passenger loads and extends its operating hours during events related to the federal government’s presence in Washington, D.C., such as presidential inaugurations and funerals, and celebrations and demonstrations on the National Mall. WMATA’s Metro Transit Police assists federal law enforcement agencies such as the Secret Service by making available its officers who have expertise in areas such as explosives detection and civil disturbance management. WMATA also provides Metrobuses to be used as a security perimeter on the grounds of the U.S. Capitol and other public places for events such as inaugurations and State of the Union addresses. WMATA began building the Metrorail system in 1969, acquired four regional bus systems in 1973, and began the first phase of Metrorail operations in 1976. 
In January 2001, WMATA completed the originally planned 103-mile Metrorail system, which included 83 rail stations on five rail lines. As of March 2006, the transit system encompasses (1) the Metrorail subway system, which now has 86 Metrorail stations on five rail lines and a fleet of about 948 railcars; (2) the Metrobus system, which has a fleet of about 1,451 buses serving 340 routes; and (3) the MetroAccess ADA complementary paratransit system, which provides specialized transportation services, as required by law, to persons with disabilities who are certified as being unable to access WMATA’s fixed-route transit system. WMATA funds its operations through a combination of revenues from passenger fares, nonfare revenues such as parking and advertising fees, and payments from state and local governments. It funds its capital program primarily through grants from the federal government and contributions from state and local governments, and by borrowing from the private sector through the issuance of bonds. WMATA’s funding sources for operations and capital are shown in figure 2. The operating costs for bus, rail, and paratransit that are allocated to the Compact jurisdictions are determined by a set of formulas that take into consideration factors such as population, ridership, number of Metrorail stations, and miles of bus routes. The formulas for determining capital cost allocation—other than for extension projects, which are paid for by the sponsoring jurisdiction—are based on the amount that the jurisdictions pay for operating costs. Under these formulas, jurisdictions with higher populations and service levels (indicated by factors such as the number of Metrorail stations and miles of bus routes) generally pay more than jurisdictions with smaller populations and lower service levels. The operating subsidy and capital program payments for 2006 as determined by the formulas are shown in figure 3. 
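The report does not give the allocation formulas themselves, but a population-and-service-weighted allocation of the kind described can be sketched as follows. The weights, factor list, and jurisdiction figures below are hypothetical, chosen only to illustrate the mechanism, and do not reflect WMATA's actual formulas:

```python
# Hypothetical weighted cost-allocation sketch. The weights and data are
# invented for illustration and do NOT reflect WMATA's actual formulas.
def allocate(total_cost, jurisdictions, weights):
    """Split total_cost in proportion to each jurisdiction's weighted
    share of every factor (population, ridership, stations, ...)."""
    # Sum each factor across jurisdictions so per-jurisdiction shares
    # can be computed as fractions of the regional total.
    totals = {f: sum(j[f] for j in jurisdictions.values()) for f in weights}
    return {
        name: total_cost * sum(w * factors[f] / totals[f]
                               for f, w in weights.items())
        for name, factors in jurisdictions.items()
    }

weights = {"population": 0.4, "ridership": 0.4, "stations": 0.2}  # hypothetical
jurisdictions = {
    "A": {"population": 600_000, "ridership": 80_000_000, "stations": 40},
    "B": {"population": 900_000, "ridership": 60_000_000, "stations": 26},
}
shares = allocate(100_000_000, jurisdictions, weights)
```

Because each factor's shares sum to 1 and the weights sum to 1, the allocated amounts always sum to the total cost, which is the property a subsidy formula like this needs.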
The Compact jurisdictions of the District of Columbia, Maryland, and Virginia vary in the sources they use for payments to WMATA:

District of Columbia. Payments to WMATA are provided by the District’s Department of Transportation every quarter. Operating costs are paid for from the District of Columbia’s general fund and capital costs are funded by general obligation bonds.

Maryland. Payments to WMATA for Montgomery and Prince George’s counties are made from the Maryland Transportation Trust Fund. The trust fund’s revenue sources include a gas tax, vehicle title tax, and other motor vehicle taxes and fees, along with other sources such as federal aid. Trust fund revenues are also used for operating and capital expenses for various modes of transportation in the state including transit, ports, and aviation, as well as for local road construction. Maryland is required by state law to make payments for the share of WMATA’s operating expenses, capital equipment replacement, and debt service for which Montgomery and Prince George’s Counties are responsible.

Virginia. The individual cities and counties are responsible for making payments to WMATA. A portion of these localities’ payments are made through the Northern Virginia Transportation Commission (NVTC). NVTC holds, in trust, funds from a variety of sources that are used to pay for its members’ public transit systems—including WMATA and local bus systems such as the Fairfax Connector and Alexandria’s DASH bus. Sources include a 2 percent Northern Virginia retail motor vehicle fuel tax and state sources such as transit assistance grants and state bonds issued for WMATA. NVTC sources accounted for about two-thirds of payments to WMATA from Northern Virginia counties and cities in fiscal year 2006. The portion of the localities’ obligation to WMATA that is not covered by NVTC sources is usually paid directly by the localities from their general funds. 
In 1980, federal legislation required that for WMATA to receive additional funding for construction of the Metrorail system, the WMATA Compact jurisdictions had to demonstrate that they had “stable and reliable” sources of revenue sufficient to pay for the principal and interest on bonds and the local share of the operating and maintenance costs of the transit system. The District of Columbia, Maryland, and Virginia took the following actions to comply with the requirement:

District of Columbia. The city adopted a law in 1982 to earmark funds for WMATA by establishing a Metrorail/Metrobus account within its general fund. The account was supported by earmarking existing revenues that came from sources receiving direct or indirect benefits from mass transit, including sales taxes on hotels, meals, and gasoline, as well as vehicle registration fees and parking meter fees. The earmarked revenues were sufficient to cover the District of Columbia’s share of WMATA’s operating, debt service, and capital expenses. This account is no longer the source of WMATA payments. As described above, the District of Columbia now provides payments to WMATA from its general revenue fund and general obligation bonds.

Maryland. The state enacted legislation in 1980 to require the Maryland Transportation Trust Fund to assume a portion of the costs WMATA allocated to Montgomery and Prince George’s Counties. The legislation also provided the trust fund with new sources of revenue, including motor vehicle fuel taxes, a portion of the corporate income tax, and all revenues of the state motor vehicle administration. The trust fund was used to pay all of Montgomery and Prince George’s Counties’ share of WMATA’s capital costs, and 75 percent of the counties’ share of operating costs and debt service. 
Montgomery County provided for the balance of its obligation to WMATA through a property tax earmarked for mass transit, and Prince George’s County met the remainder of its obligation by establishing the Mass Transit Special Revenue Fund and earmarking revenues from the state real property tax grant program in the event that county appropriations to the fund fell short. State legislation in 1992 and 1998 made the state’s transportation trust fund the source of all payments to WMATA.

Virginia. In 1980, the state enacted a 2 percent sales tax on the retail price of gasoline within the Northern Virginia counties and cities in the WMATA service area and dedicated the proceeds of the new tax to WMATA, effective in July 1982. The state also increased its biennial appropriation to NVTC, increasing the amount of state money available for payment to WMATA. At the same time, the Northern Virginia counties and cities enacted local ordinances stating their intention to fund WMATA’s debt service and operating assistance on an annual basis and designating their general fund revenues as the source of funding for what the gasoline tax and state aid did not cover.

In 2005, the Metro Funding Panel estimated that under its current revenue structure, WMATA would have a total budgetary shortfall of $2.4 billion during fiscal years 2006 through 2015 if it went forward with the projects remaining in its 10-year capital improvement plan, not including those that involved expanding the current system. The panel’s report noted that WMATA—unlike almost all other large transit systems—does not have a substantial dedicated source of revenue, such as a local sales tax, whose receipts are directed to the transit authority. As a result, the panel concluded that the Washington, D.C., region needs to develop a dedicated source of funding for WMATA, and recommended specifically that a regionwide sales tax be implemented. 
In the course of its work, the panel analyzed a number of revenue options for dedicated funding for WMATA, including estimating the tax or fee rate that would be required to raise sufficient revenue to address the projected shortfall. In our July 2005 testimony before the House Committee on Government Reform, we stated that the actual projected shortfall could, in fact, be much greater because the Metro Funding Panel did not include in its estimate costs associated with providing paratransit service, which is required by the Americans with Disabilities Act. These costs are significant; in fact, the panel estimated that these services could result in an additional shortfall for WMATA of about $1.1 billion over the 10-year period. The Brookings Institution in June 2004 issued a report that similarly concluded that WMATA’s lack of dedicated revenues makes its core funding uniquely vulnerable and at risk as WMATA’s member jurisdictions struggle with their own fiscal difficulties. The Brookings report also concluded that the Washington, D.C., region needs to develop a dedicated source of revenue. In July 2005, Representative Tom Davis, Chairman of the House Committee on Government Reform, introduced the National Capital Transportation Amendments Act of 2005 (H.R. 3496), which would authorize $1.5 billion to WMATA over 10 years for financing the capital and preventive maintenance projects included in WMATA’s Capital Improvement Program. The bill states that WMATA is essential for the effective functioning of the federal government and for the orderly movement of people during major events and times of regional or national emergency, and that additional funding is necessary to ensure the transit system’s continued functionality. H.R. 3496 does not appropriate funds. For WMATA to receive the funding authorized in H.R. 3496, Congress must pass additional legislation appropriating funds. H.R. 
3496, as amended by the House Committee on Government Reform, states that to be eligible for the additional funding, WMATA must amend the WMATA Compact to require that all payments to WMATA from the Compact jurisdictions be derived from dedicated funding sources, an Office of Inspector General be established at WMATA, and the WMATA Board of Directors be expanded to include four additional members appointed by the federal government, two of whom are voting and two of whom are nonvoting. Using the definition in FTA’s National Transit Database (NTD), we identified the following characteristics of dedicated funding: (1) specific revenue sources are designated, (2) the revenue is designated to be provided to the transit agency, and (3) the revenue is not subject to appropriations. Similarly, H.R. 3496 states that dedicated funding is any source of funding that is earmarked and required under state or local law to be used for payments to WMATA. In the Washington, D.C., region, legislators in the District of Columbia, Maryland, and Virginia proposed bills to provide dedicated funding to WMATA—described in detail later in this report—that demonstrate some of these characteristics, as follows:

Legislation in the District of Columbia, which was enacted in April 2006, would set aside a portion of the sales tax revenue to be dedicated solely for WMATA. Under this legislation, which must be approved by Congress before taking effect, the provision of dedicated funds to WMATA would be subject to annual appropriations by Congress, but not by the District of Columbia.

Legislation introduced in the Maryland General Assembly, which was not enacted during the 2006 legislative session, would have set aside a percentage of the sales tax revenue, but the tax proceeds would have been dedicated to WMATA and other transit programs and expenses in the state, and also would have been subject to appropriations. 
Legislation proposed in the Virginia General Assembly would set aside a portion of a regional sales tax to be dedicated to WMATA, and these funds would not be subject to appropriations. As of April 2006, this legislation had not been enacted. Although the Maryland General Assembly considered bills in its 2006 session to provide dedicated funding to WMATA, the position of Maryland’s Department of Transportation is that the state’s current system for funding WMATA already constitutes dedicated funding. Under this system, payments are made from the state’s transportation trust fund, which has several dedicated sources, although expenditures from the fund are subject to an annual appropriations process. Maryland officials also note that state law requires them to provide funding to WMATA. On the other hand, an official with Maryland’s Office of Attorney General stated in a legal opinion dated February 17, 2006, that the transportation trust fund does not constitute dedicated funding. The six transit agencies we spoke with varied in the extent to which the dedicated revenue sources they reported to the NTD have the three characteristics we identified. Three of the transit agencies reported dedicated funding sources with all three characteristics, while the other three agencies reported dedicated funding sources that were subject to appropriations or were allocated among other transit or transportation programs. Three agencies—San Francisco’s Bay Area Rapid Transit (BART), Boston’s Massachusetts Bay Transportation Authority (MBTA), and Dallas Area Rapid Transit (DART)—have dedicated funding sources with all three characteristics. BART receives the proceeds from a regional dedicated sales tax, as established by state law. The tax is collected by the state and the proceeds are provided directly to BART by the state treasury. 
At MBTA, state law directs that the proceeds of a statewide dedicated sales tax are deposited into a state MBTA fund from which the state treasurer will provide funds to MBTA upon request, without an appropriation. At DART, the state comptroller collects the proceeds of a regionally dedicated sales tax and provides those proceeds directly to DART. New York’s Metropolitan Transportation Authority (MTA) receives a number of revenue streams that it considers to be dedicated, even though they are subject to appropriations by the state legislature or by local governments and they do not always consist of a specific tax or fee that is dedicated to the agency. They include: (1) local matching payments for state aid, which in addition to being appropriated may come from general revenues as opposed to a specific revenue source; (2) payments from two state funds for MTA, which are composed of the receipts of several taxes statutorily required to be deposited in these trust funds and which are subject to appropriations by the state legislature; and (3) local payments— which are appropriated—for the operation and maintenance of commuter rail stations, the amount of which is designated in statute. St. Louis Metro receives a portion of local sales taxes that are dedicated to both highway and transit purposes and that must be annually appropriated. The allocation between highways and transit is determined through the annual budgeting process and is not statutorily designated. Philadelphia’s Southeastern Pennsylvania Transportation Authority (SEPTA) receives dedicated funding from the state of Pennsylvania, which dedicates a portion of its statewide sale tax, as well as several motor vehicle-related fees, to two state trust funds to be used for aid to transit agencies statewide, not only SEPTA. Statutory formulas are used to determine how much each agency receives, and the funds are provided to transit agencies directly from the state treasury. 
Of the 25 largest transit agencies, all except 2—the Maryland Transit Administration, which operates Baltimore’s transit and commuter rail systems, and the Port Authority Trans-Hudson Corporation, which operates rail lines and ferryboats between New York and New Jersey— reported to the NTD that they received dedicated sources of revenue in 2003. Although the NTD provides a description of dedicated funding, the revenue that transit agencies report as dedicated may or may not have the characteristics described by the NTD. In addition to dedicated funding, other revenue sources transit agencies reported receiving were a combination of state and local appropriations and other funding, fares and other operating revenue, and federal grants. Of the total revenues those 23 largest transit agencies received in 2003 from state and local sources— including dedicated funding, general revenue appropriations, and other funding sources—the proportion that came from dedicated sources averaged 70 percent. For 12 of these agencies, between 90 percent and 100 percent of state and local funds they received in 2003 came from dedicated sources. Figure 4 shows the percentages of transit agencies’ state and local funding that came from dedicated funds, general revenue, and other funding in 2003. Most transit agencies reported receiving multiple dedicated revenue sources from state and local governments, as well as, in some cases, dedicated revenue that was directly generated. For example, GAO’s analysis of NTD data for the 23 largest agencies that have dedicated funding shows that 18 of these agencies received dedicated funds from at least two sources in 2003, with the sales tax being the source most commonly dedicated to transit (15 of the 23 transit agencies received dedicated funds from sales taxes). 
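The share computation underlying these percentages is straightforward; the sketch below shows how the dedicated share of each agency's state and local funding is derived. The agency names and dollar amounts are invented for illustration, not actual NTD figures.

```python
# Sketch of the share calculation behind Figure 4. Agencies and dollar
# amounts (in millions) are hypothetical, not actual NTD data.
state_local_funds = {
    # name: (dedicated, general revenue appropriations, other)
    "Agency A": (900.0, 50.0, 50.0),
    "Agency B": (300.0, 250.0, 75.0),
    "Agency C": (120.0, 10.0, 0.0),
}

dedicated_share = {}
for name, (dedicated, general, other) in state_local_funds.items():
    total = dedicated + general + other
    dedicated_share[name] = 100.0 * dedicated / total  # percent of state/local funds

# Average dedicated share across agencies; GAO reported an average of
# 70 percent for the 23 largest agencies with dedicated funding.
average_share = sum(dedicated_share.values()) / len(dedicated_share)
```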
Sales tax also ranked at the top in revenue generation among dedicated sources; in 2003, approximately $4.5 billion or 43 percent of the approximately $10.3 billion in total dedicated revenues received by the 23 transit agencies came from sales taxes. According to the NTD data and to the transit agencies we spoke with, sales taxes dedicated to transit are levied at the state or local level and are sometimes enacted by ballot measures. All of the transit agencies we spoke with have dedicated funding that includes sales taxes, as follows: St. Louis Metro receives two separate sales taxes—one at one-half of 1 percent and one at one-quarter of 1 percent—that are levied in the localities that Metro serves. The revenues are collected by the state and remitted to the local governments to be appropriated to Metro. San Francisco’s BART receives 75 percent of a one-half of 1 percent sales tax that is levied in the counties in the BART transit district. The sales tax was first enacted in 1969 to fund the completion of the rail system, but the revenues are now used for operations. Dallas’s DART receives the proceeds of a 1 percent sales tax from the 13 cities that are served by the transit agency. This tax is part of the statewide 8.25 percent sales tax; part of it can be set aside for localities for economic development purposes, such as schools, parks, and transit. Boston’s MBTA receives 20 percent of the statewide sales tax revenues. State law designates a “base revenue amount,” which increases each year with inflation, for the amount of revenue MBTA is to receive from this tax each year. If the portion of the tax receipts designated for MBTA does not meet the base amount, the state makes up the difference. Philadelphia’s SEPTA receives a portion of the statewide sales tax. Approximately 2 percent of the revenue from this tax is deposited in state public transportation accounts and is allocated to the state’s transit agencies, including SEPTA, based on statutory formulas. 
New York’s MTA receives the proceeds of a three-eighths of 1 percent regional sales tax, which is used for operating costs of the commuter rail and transit systems. Local option sales taxes—in which the sales tax rate of a city or town can be raised above the rate of the state sales tax and which are enacted by ballot measures—have become more prevalent in financing a variety of transportation projects, including transit. Many ballot measures for local option sales taxes target a mix of transportation programs, including highways and transit. A transportation economist we spoke with noted a recent trend in ballot measures for sales taxes for capital projects and said that an advantage of these taxes is that they bring about fiscal discipline because the agencies have to deliver results (such as a completed capital project) within a specified time. According to this economist, in 2002, there were 43 such ballot measures, and in 2004, there were 44; in both years, roughly half of them passed. Denver, Salt Lake City, and 23 counties in California are some of the localities that have local option sales taxes that are either dedicated to transit or can be used for any mix of transportation purposes. The second most common source of dedicated funding for transit, according to our analysis of NTD data, was the gasoline tax. In 2003, 7 of the 23 agencies with dedicated funding reported receiving revenues from this source. In that year, the gasoline tax generated about $304 million or 2.9 percent of about $10.3 billion in total dedicated revenues received by those 23 agencies. Of the 6 transit agencies we spoke with, 2—New York’s MTA and San Francisco’s BART—had revenue from a dedicated gasoline tax. Some of the transit agencies we spoke with also use other sources of dedicated revenue, such as mortgage recording taxes, city and town assessments, and motor vehicle-related fees. Following are some examples: New York’s MTA receives funds from a mortgage-based tax. 
New York City and the seven other counties within MTA’s service area collect a tax based on a percentage of the debt secured by real estate mortgages and provide the receipts to MTA. Boston’s MBTA receives funds from assessments it makes on the 175 cities and towns in the MBTA district. The assessments are based on a weighted population formula. New York’s MTA and Philadelphia’s SEPTA receive funds from various motor vehicle fees (e.g., MTA receives funds from registration and other fees and SEPTA receives funds from car leasing and car rental fees). Transportation experts we spoke with said that using a basket of revenue options lowers transit agencies’ economic risk because different revenue sources are affected to different degrees by fluctuations in economic activity and other factors, and that a diversity of revenue sources helps to ensure a steady revenue stream. Additionally, these experts said that specific revenue sources are selected based on the conditions of the local economy, with the goal of having less volatility. In 2003, 24 of the 25 largest transit agencies in our analysis reported spending dedicated funds for operating expenditures, capital expenditures, or a combination of both, according to our analysis of NTD data. Of those 24 agencies, 20 spent dedicated funds for a combination of operating and capital expenditures. Having the flexibility to spend dedicated revenues on operations or capital has advantages for transit agencies, according to agencies and transportation experts we spoke with. One transportation expert we spoke with noted that agencies that have flexibility to spend dedicated funding on operations or capital are better off because the agency can adjust to cost changes. Transit agencies noted the following reasons why this flexibility is advantageous: Spending on capital projects fluctuates. For example, capital projects might need up-front funding in one year but not in the next.
The construction of capital projects, such as extending a rail line, typically creates a need for operating expenditures, so dedicated funds used to build a capital project might later be used for operating expenses once the project is implemented. Regional and agency priorities may change, which may require a shift in how funds are used. Transit agencies we spoke with did not cite any disadvantages of having the ability to spend dedicated revenues on both types of expenditures. Some agencies are subject to restrictions on how they spend dedicated revenues, as illustrated in the following examples: Philadelphia’s SEPTA must use the dedicated revenue it receives from the state for capital projects, debt service, and asset maintenance. San Francisco’s BART uses revenues from the dedicated local sales tax and property assessments for operating expenditures. These are the only local sources of operating support the agency receives. Dallas’s DART is subject to an operating expenditures cap, which was enacted by its Board of Directors. Growth in operating expenses must not exceed 90 percent of the inflation rate. Expenditures from dedicated sources are subject to the same type of oversight as expenditures from other sources, which at transit agencies includes a board of directors involved in capital planning and periodic audits by federal and state auditors. FTA does periodic reviews of all transit agencies that receive funding from FTA, including procurement system reviews, financial management oversight reviews, and drug and alcohol oversight reviews. Some of the transit agencies whose officials we interviewed are subject to oversight and review as follows: DART’s expenditures are subject to review by its internal auditor (which reports directly to the Board of Directors), a state auditor, and the Texas Department of Transportation. DART’s Board of Directors and the cities in the Dallas region that are served by DART review DART’s budget annually. 
MBTA has an internal audit department that reports to the general manager. A state auditor also reviews certain programs and areas of MBTA on an annual basis, and a state inspector general’s office reviews MBTA. The state auditor has a suboffice in MBTA’s office building with dedicated officials reviewing transportation programs. Internal and state audits focus more on program reviews than on financial audits. The state legislature sometimes has hearings on MBTA, generally for capital projects. Finally, the MBTA Advisory Board, which is made up of representatives from each city and town within the MBTA district, approves MBTA’s mass transportation program and its annual budget. MTA is required to file reports each year with state legislators and other officials certifying the proper use of the dedicated funds, and the state comptroller is authorized to audit MTA’s financial records. MTA also has an office of inspector general, which does programmatic reviews and investigations. While spending safeguards do not generally vary based on the source of revenue, safeguards can vary depending on whether funds are used for operations or capital. Major capital projects funded by FTA are monitored to ensure they are progressing on time, within budget, and according to approved plans, and agencies that issue debt to finance capital projects must make debt repayments within specified time frames. Also, agencies’ capital projects require the review and approval of the board of directors, whose review sometimes includes a public approval process. MTA’s 5-year capital program, for example, is subject to an extensive public approval process that is coordinated through the board of directors. MBTA also has an open capital planning process that is subject to a lengthy public review process. 
On the operations side, one agency we spoke with—DART—as noted earlier, has a cap on expenditures for operations that was enacted by its Board of Directors, which dictates that growth in operating expenses must not exceed 90 percent of the inflation rate. Although dedicated funds are generally subject to the same type of oversight as funds from other sources, the implementation of dedicated funding at transit agencies sometimes has been accompanied by enhanced oversight: When state legislation established dedicated funding for SEPTA, it also required the Board of Directors to be expanded to include four additional members appointed by the state. According to a SEPTA official, the state said that since it was going to be shouldering a greater percentage of SEPTA’s costs, it should have more of a voice in how SEPTA was run. State legislation establishing dedicated funding sources for MTA in the 1980s also established oversight mechanisms, including a capital planning board, an inspector general, and a committee on the Board of Directors for capital program oversight. Dedicated funding is an important revenue source for transit agencies because it enhances their planning of future expenditures and increases their access to bond markets due to better predictability of revenue. With regard to planning, according to five of the six agencies we interviewed, dedicated funding makes revenue more predictable, thereby enabling more effective multiyear planning. With regard to raising revenue through the issuance of bonds, all of the agencies we spoke with have used dedicated funds to issue bonds for capital programs and projects. In addition, four of the six agencies we interviewed said that dedicated funding either allowed them to issue bonds or improved their credit rating. An improved credit rating generally allows agencies to issue bonds at a lower rate, thereby decreasing the cost of borrowing for capital projects. 
For example, SEPTA used the funds from a 1992 dedicated funding package to support the issuance of bonds for capital needs. SEPTA inherited most of the commuter rail service formerly provided by Conrail, which required major repairs to stations, bridges, tracks, and overhead power. The officials we spoke with representing local governments and transportation departments in the District of Columbia, Maryland, and Virginia, and NVTC, also cited a number of advantages of dedicated funding for WMATA. Officials from five of these entities stated that a consistent and known source of revenue would enable WMATA to plan more efficiently for future expenditures. Another local official said that dedicated funding would also allow WMATA to provide a consistent level of quality. Officials from one local jurisdiction, as well as NVTC, cited as an advantage that WMATA stands to receive $1.5 billion in additional federal contributions if dedicated funding is established. An analyst with one of the major credit rating agencies told us that dedicated funding is one factor that can strengthen transit agencies’ bond ratings. According to this analyst, who has expertise in transit, dedicated funding can provide better access to the capital markets, but any effect on the cost of borrowing will depend on how the dedicated funding is structured. For example, a dedicated revenue stream is more stable if the legislation creating it is difficult to reverse. Additionally, requiring that revenues be spent first on debt servicing is looked upon favorably by bond rating agencies. This analyst also noted that the credit-rating history of transit agencies—including WMATA—is based partly on that of the local or state jurisdictions that provide the agency with subsidy payments. WMATA has a good, steady credit-rating history in part because of the high credit ratings of its member jurisdictions.
The key downside to WMATA’s current funding arrangement is the appropriations risk—that local jurisdictions might not make their payments or might be late. However, the analyst noted that the jurisdictions supporting WMATA had a long history of making payments on time, which, to a certain degree, offsets the risk of appropriations. Furthermore, although dedicated funding could also offset the appropriations risk—if the dedicated revenue source were structured so that it was not subject to appropriations—it could increase the risk associated with the revenue source, such as economic fluctuations. Despite the advantages of dedicated funding, there are risks of revenue volatility and a loss of budgetary flexibility for governments supporting transit agencies. Although the transit agencies we spoke with cited the predictability of revenue as an advantage of dedicated funding, they also acknowledged that a risk of dedicated funding is that it may be too volatile or not meet funding expectations. For example, BART is largely dependent on local sales tax revenues for operating expenses; when the local economy began declining in 2000, revenues were no longer sufficient, leading BART to cut operating costs and raise fares. According to three transit agency officials we spoke with, it can be difficult or impossible to obtain additional money from state and local governments that have already provided the agencies with dedicated funds. For example, although SEPTA officials told us that their dedicated revenues were too small a proportion of their overall funding to enhance the agency’s long-term planning ability, they said that they had been unsuccessful in obtaining additional dedicated funding. Moreover, not all transit agencies have the authority to raise tax rates or fees themselves. 
In addition, local option taxes for capital projects are a potentially problematic means of providing dedicated funding in that, although they do provide agencies with additional funds, they often expire after a certain number of years, requiring agencies to have another ballot measure or to find other ways to increase revenue. However, some transit agencies benefit from laws that mitigate the risk of revenue fluctuations associated with dedicated funding sources. MBTA, for example, is protected by legislation designating that 20 percent of the statewide sales tax revenues go to the agency; this legislation specifies a base revenue level that changes each year with the inflation rate. If revenues do not meet the base level, the state makes up the difference. MTA also has access to additional funding if the local matching shares for state operating assistance are insufficient. If localities are unable to provide the matching funds to MTA, the state takes out the shortfall from the amount the locality would have received in state aid, and provides it directly to MTA. Additionally, some agencies have reserve funds they can draw on if revenues are not sufficient. Officials in the Washington, D.C., region identified similar concerns when discussing what they believe the effects of dedicated funding for WMATA might be. An official from one local jurisdiction stated that a dedicated funding system is only as reliable as its funding source; another local official said that revenues dedicated to WMATA from a specific source may fluctuate from year to year with changing economic conditions. Regarding the loss of budgetary flexibility, we have previously reported that setting government funds aside for a specific use—such as with federal trust funds—may affect the funding available for other spending priorities. 
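The shortfall protections described above (MBTA's sales-tax floor and MTA's local-share backstop) amount to simple floor rules: the agency receives the greater of its dedicated share or a guaranteed, inflation-indexed base amount, with the state covering any gap. A minimal sketch of the MBTA-style case, using hypothetical dollar figures rather than actual Massachusetts amounts:

```python
def floor_protected_receipt(sales_tax_receipts, base_amount, share=0.20):
    """Return (amount paid to agency, state make-up payment).

    The agency gets its dedicated share of sales tax receipts, topped up
    by the state if that share falls short of the guaranteed base amount.
    All figures are hypothetical.
    """
    dedicated = share * sales_tax_receipts
    shortfall = max(0.0, base_amount - dedicated)
    return dedicated + shortfall, shortfall

def indexed_base(initial_base, inflation_rates):
    """Grow the base revenue amount by each year's inflation rate."""
    base = initial_base
    for rate in inflation_rates:
        base *= 1.0 + rate
    return base

# A weak sales-tax year (hypothetical, $ millions): 20 percent of 3,000
# is 600, short of a 650 base amount, so the state adds 50.
paid, makeup = floor_protected_receipt(3_000.0, 650.0)
```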
We also reported that constituencies may create pressure to spend revenues that are set aside for a specific purpose, regardless of the need for the spending at the moment or the priority that would otherwise be given such spending. Some of the officials in the Washington, D.C., region we interviewed also cited disadvantages related to state and local budgeting. Maryland officials from the state Department of Transportation and a local jurisdiction noted that funding dedicated strictly for WMATA may reduce funds available for other transportation programs. Officials from another local jurisdiction also noted that revenue dedicated from an existing tax—rather than from a new source of revenue—reduces that locality’s general fund and decreases spending flexibility. In light of the proposed federal legislation to provide additional funding to WMATA (H.R. 3496), state and local officials are faced with two main issues, should they choose to enact dedicated funding for WMATA: (1) which revenue source or sources to dedicate to WMATA and (2) whether and how to address a WMATA budgetary shortfall. The two issues are not necessarily linked since implementing a dedicated revenue source does not automatically require a change in revenue sources or in the amount of revenue collected. Important considerations in selecting a revenue source or sources to be dedicated to WMATA are the stability and long-run adequacy of the revenue source, as well as the political feasibility of the size of the tax or fee rate necessary to provide sufficient revenue to WMATA. In evaluating revenue sources to provide additional funding to WMATA, equity, efficiency, and administrative cost are potentially important considerations. One key budgeting consideration identified in the economics literature that is relevant for establishing a dedicated revenue source, from the perspective of the transit agency, is year-to-year revenue stability. 
Year-to-year revenue stability refers to the degree to which both short-term fluctuations in economic activity (the business cycle) and other factors not directly linked to the business cycle influence dedicated tax revenues. The revenue stability of different taxes and fees with respect to economic fluctuations is often compared by estimating the percentage change in year-to-year revenues that results from a 1 percent change in year-to-year income levels. The variability in these estimates is then used to evaluate the relative magnitude of fluctuations not related to the business cycle. A greater degree of variability in the estimate of economic response indicates a higher degree of instability from noncycle variations. Year-to-year revenue stability is an important consideration because it influences the ability of a government or agency to carry out effective planning and budgeting. A stable revenue source is not subject to substantial year-to-year fluctuations, making it easily predictable. Greater predictability leads to more accurate revenue forecasts and allows for better budgeting and planning as it reduces the probability of a significant funding shortfall (or surplus) in any given year. In the longer run, an important consideration for WMATA’s financial health is that the revenues yielded by a dedicated source adequately keep pace with increases over time in transit expenditure demands. Although many economists used to believe that there was a trade-off between year-to-year stability and long-run revenue growth, current research suggests that a revenue source can exhibit relatively high long-run growth and be relatively stable. Long-run revenue adequacy is measured by how revenues are expected to grow over time as income grows. The relationship between income levels and the revenue generated by a tax or fee is a convenient benchmark for comparing different revenue sources.
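The stability comparison described above is, in effect, an elasticity estimate: regress year-to-year (log) changes in revenue on changes in income, read the slope as the revenue response to a 1 percent change in income, and read the slope's standard error as a gauge of instability unrelated to the business cycle. A minimal sketch, with invented annual series rather than actual tax data:

```python
import numpy as np

# Invented annual series for illustration; not actual tax data.
income = np.array([100.0, 103.0, 106.5, 105.0, 109.0, 113.5])
revenue = np.array([50.0, 51.8, 53.9, 52.8, 55.1, 57.9])

# Year-to-year percent changes (log-difference approximation).
d_income = np.diff(np.log(income))
d_revenue = np.diff(np.log(revenue))

# OLS with intercept: d_revenue = a + b * d_income + error.
X = np.column_stack([np.ones_like(d_income), d_income])
(a, b), *_ = np.linalg.lstsq(X, d_revenue, rcond=None)

# Standard error of the slope: large values signal instability that is
# not tied to the business cycle.
resid = d_revenue - X @ np.array([a, b])
sigma2 = resid @ resid / (len(d_income) - 2)
se_b = float(np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1]))

# A slope near 1 means revenue roughly tracks income over the cycle.
```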
However, to assess the adequacy of a revenue source for transit spending, one would need to know how transit demand (and, consequently, spending) is related to income. There is considerable uncertainty, however, about the relationship between income growth and growth in demand for transit services. As a result, it is uncertain what relationship between revenue growth and income growth over the longer run is necessary to ensure that revenues will adequately keep pace with transit expenditure demands. Estimates of long-run revenue growth rates for a given revenue source often differ at the state or county level, creating further potentially important budgetary and political implications for dedicated funding for WMATA, which has a service area that encompasses multiple jurisdictions. From a budgeting perspective, these jurisdictional differences should be taken into account to arrive at accurate forecasts of transit revenues. Political concerns might arise because of different revenue growth rates in the Compact area, which could mean that the allocation of payments among jurisdictions could change over time unless tax or fee rates are adjusted or floors and ceilings are placed on contribution levels. Another key consideration in choosing a revenue source to be dedicated to WMATA is the tax or fee rate required to dedicate a specified amount of revenue from that source—that is, the rate required may influence the choices of state and local officials among various revenue sources. In general, the rate required will be smaller when the tax or fee is applied to a larger base. As part of its analysis of WMATA’s funding issues, the Metro Funding Panel estimated the tax or fee rate required to generate specified amounts of dedicated revenue from six potential revenue sources. The specified amounts were based on different categories of WMATA’s spending that could be covered by dedicated local revenues. 
For example, the panel estimated the tax or fee rate required to dedicate $148 million in 2010. That amount represents 50 percent of what the panel estimated would be needed for capital spending to renew aging components of the WMATA system and add system capacity to meet growing demands, plus operating spending related to this capital investment. In addition to its estimates of the level of dedicated revenue needed to fund different specified levels of spending, the panel made several critical assumptions in developing its estimates of the tax or fee rate required. The accuracy of these assumptions will affect the accuracy of the panel’s estimates. Figure 5 provides a summary—based on our analysis of the economic literature and the Metro Funding Panel report—of how these six revenue sources (sales tax, payroll/income tax, motor vehicle fuel tax, property tax, access fees, and vehicle registration fees) compare according to stability, long-run adequacy, and tax or fee rate required. Additional analysis of each of the taxes and fees, and how they compare with respect to the key considerations, is presented in appendix II. Experts and state and local officials commonly identified the economic considerations of equity, efficiency, and administrative cost as potential key considerations in evaluating revenue sources. However, these considerations are only relevant if the amount of revenue or the methods for its collection are altered. In the context of dedicated funding for WMATA, these considerations come into play primarily when addressing a shortfall. State and local governments could establish a dedicated funding source for WMATA without increasing their revenue collections. 
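The panel's rate-required calculation reduces to dividing the target revenue by the tax base, so a broader base needs a smaller rate. In the back-of-the-envelope sketch below, the $148 million target is the report's figure, but the tax bases are hypothetical stand-ins, not the panel's estimates:

```python
# Rate required = target revenue / tax base. The $148 million target is
# from the report; the base values below are hypothetical.
target = 148e6  # dollars of dedicated revenue needed in 2010

tax_bases = {
    "regional sales tax": 60e9,       # hypothetical taxable sales, $
    "motor vehicle fuel tax": 2.5e9,  # hypothetical fuel sales, $
}

required_rate = {name: target / base for name, base in tax_bases.items()}

# With these bases, the sales tax needs only about a quarter of
# 1 percent, while the far smaller fuel-tax base needs a much higher rate.
```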
However, if they want to provide WMATA with enhanced funding to address the revenue shortfall identified by the Metro Funding Panel, they would have to take some offsetting fiscal action, such as increasing their revenues, reducing their spending on other functions, or taking money from available surplus revenues, if any. In the case of raising additional funding for WMATA, administrative costs are likely to be a more important decision factor than equity or efficiency, particularly if the state and local jurisdictions choose to implement at the state or local level a tax or fee that is not currently being administered at that level of government (even though the tax or fee might be collected by another level of government). Conversely, administrative costs are likely to be small if the tax or fee is already being collected at the desired level of government. Equity and efficiency effects are likely to be small given that the additional amount of revenue collected for WMATA would be small in relation to the overall state and local government operations. Possible exceptions are the vehicle fuel tax and vehicle registration fees, which might require larger rate increases because of their relatively small bases. Possible administrative, equity, and efficiency effects are discussed in appendix III. To establish dedicated funding as defined in H.R. 3496 (i.e., a revenue source that is legislatively directed solely to WMATA), each Compact jurisdiction would need to enact legislation directing a specific revenue source or sources to WMATA. The legislative process for enacting such legislation in the District of Columbia, Maryland, and Virginia is as follows: Legislation in the District of Columbia would be taken up first by the District of Columbia City Council. If passed by the council and signed by the District of Columbia mayor, the WMATA legislation would require approval by Congress.
In Maryland, if it is determined that the state’s current system for funding WMATA does not constitute dedicated funding, the General Assembly could pass legislation—signed by the governor—to order a tax rate change, shift the use of an existing tax, impose a new fee or surcharge, or enable localities to enact a tax rate change. Dedicated funding legislation could have a local or statewide scope. If the scope were local—which would be a departure from the current funding structure—local input would be considered and a consensus would be reached by a local delegation; the legislation would then go to the state legislature for final approval. If the scope of the legislation were statewide, the legislation would not require initial approval by the local delegation and would go directly to the state legislature. Legislation in Maryland to enact dedicated funding for WMATA may also have to address the issue of parity in transit spending across the state, particularly between the Baltimore and Washington, D.C., regions. If additional funds are raised for or dedicated to WMATA, there may need to be additional funds provided for other state transit programs. In Virginia, the General Assembly could pass a law—signed by the governor—ordering a tax rate change or redirecting existing taxes statewide to establish dedicated funding. According to local officials in the state, to change the existing tax structure, Northern Virginia jurisdictions would have to be given the authority to raise or dedicate a tax by the General Assembly, unless that tax is currently under local control. State legislation can require local approval through a voter referendum or a vote by the local governing bodies. Legislation concerning a dedicated funding system for WMATA may or may not need the approval of the local jurisdictions it would encompass. The state also determines whether a tax shall be statewide in scope or limited to certain localities. 
Bills were introduced in 2006 in the District of Columbia, Maryland, and Virginia that would dedicate a portion of sales tax revenues to WMATA. These bills differ from one another in a number of respects, and they also differ from the approach recommended by the Metro Funding Panel, which included dedicating a sales tax increase of one-quarter of 1 percent across the WMATA Compact area to be used for capital maintenance and system enhancement. The panel also recommended that the proceeds of the regional sales tax be in addition to the jurisdictions’ current payments for operations and capital. Table 1 provides details on the legislative proposals and the panel’s recommendation. As of April 2006, dedicated funding legislation in the District of Columbia had been enacted by the city but had not yet received congressional approval. Additionally, this legislation will not take effect until H.R. 3496 and dedicated funding laws in Maryland and Virginia are passed. In Maryland, two of the bills—one in the House and one in the Senate—that were originally introduced to provide dedicated funding were amended to remove the dedicated funding provisions and to add language requiring that the Maryland Department of Transportation (MDOT) undertake a study on the state’s transit costs and funding strategies, as noted earlier. The amended bills do not provide any funding for transit. The other dedicated funding bills in Maryland did not proceed beyond the committee level. In Virginia, dedicated funding legislation was approved by the Senate but not by the House. However, the Virginia proposal to dedicate a one-quarter of 1 percent sales tax levied in Northern Virginia to WMATA is included in the Senate’s budget proposal, so, as of April 2006, it was still possible that dedicated funding could be enacted through this vehicle.
As discussed earlier in this report, although legislators in the Maryland General Assembly have introduced dedicated funding bills, MDOT's position is that the state's current system for funding WMATA already constitutes dedicated funding. On the other hand, an official with the state's Office of Attorney General said in a legal opinion dated February 17, 2006, that the fund does not constitute dedicated funds for WMATA. An MDOT official we spoke with said that Maryland would consider making adjustments to the trust fund to meet the goal of dedicated funding. H.R. 3496, as amended, requires the WMATA Compact to be amended—a process that entails state legislation and congressional consent—to require that (1) all payments from the Compact jurisdictions come from a dedicated source, (2) WMATA establish an inspector general, and (3) four federal representatives be added to the WMATA Board of Directors, one of whom must be a regular Metrobus or Metrorail rider. To amend the WMATA Compact, identical legislation—which would be separate from legislation establishing dedicated revenue sources—must be enacted by the states of Maryland and Virginia and the District of Columbia, and must be consented to by Congress. No amendment can be enacted until this process is complete. According to our legal analysis of the WMATA Compact and H.R. 3496, amending the Compact would not be necessary for the WMATA Compact jurisdictions to establish dedicated funding or to create an inspector general for the agency, but would be necessary for changing the structure of WMATA's Board of Directors:

- It is unnecessary to amend the WMATA Compact for jurisdictions to provide payment to WMATA from dedicated sources of funding. However, if the Compact were amended to require dedicated funding, then the jurisdictions would be bound to this requirement as long as it remains in the Compact. The Compact does not specify what the source of the jurisdictions' payments to WMATA shall be, nor how WMATA's costs are to be allocated among the jurisdictions.
- WMATA could establish an office of inspector general without amending the Compact; however, some provisions in H.R. 3496 about the inspector general's office conflict with the Compact. For example, H.R. 3496 would require a unanimous vote of all board members to remove the inspector general. Under the Compact, most actions by the board do not require a unanimous vote; rather, they require a majority vote, and the majority must include at least one board member from each of the three jurisdictions. Conflicts such as this between H.R. 3496 and the Compact could be resolved through an amendment to either one. The WMATA Board of Directors voted in April 2006 to create an office of inspector general. WMATA's policy outlining the structure and functions of this office is similar to the provisions in H.R. 3496, although WMATA officials told us that they wrote this policy to avoid any conflicts with the WMATA Compact.
- Adding federal representatives to the Board of Directors would require a Compact amendment because the Compact specifically sets forth the composition of the board: six members, two each from the District of Columbia, Maryland, and Virginia.

Legislation that would amend the WMATA Compact as required by H.R. 3496 has not been proposed in the District of Columbia or Virginia, according to a WMATA official. Such legislation was introduced in the Maryland General Assembly in February 2006, but was later withdrawn. Currently, the jurisdictions are more focused on enacting legislation to establish dedicated funding. Additionally, officials with the transportation departments of the District of Columbia, Maryland, and Virginia noted that even if H.R. 3496 is enacted, there is no guarantee that federal funding for WMATA would be appropriated.
Also, officials in Northern Virginia questioned whether Compact amendments are necessary to implement the requirements of H.R. 3496 to establish an inspector general and to provide dedicated funding to WMATA. There is no clear consensus among Compact jurisdictions about which legislation—amending the Compact or revenue legislation—should be dealt with first. Although Maryland officials stated that it makes more sense to amend the Compact to establish oversight first, the officials we spoke with in the District of Columbia and Virginia stated that revenue legislation should be enacted before trying to amend the Compact. The schedules for considering legislation differ among the District of Columbia, Maryland, and Virginia. In the District of Columbia, council members can file legislation to be introduced at any time during normal business hours, unless the council is at recess. The council generally meets to vote on legislation on the first Tuesday of every month. The legislative sessions of the Maryland and Virginia general assemblies both begin annually in January. In Maryland, the session adjourns after 90 days; bills may be filed throughout the 90-day session, but bills introduced after the 21st day of the Senate's session and the 31st day of the House's session need special approval before they are returned to the floor. In Virginia, the adjournment date varies based on the legislative year and whether the General Assembly chooses to extend the session, and the deadline for filing legislation is in January, the same month the session begins. The short legislative sessions and large volume of bills leave a limited window for considering and passing dedicated funding legislation for WMATA. WMATA's funding partners face a number of issues that will need to be resolved should they choose to provide WMATA with dedicated funding.
As discussed previously, the Compact jurisdictions have differing views on what constitutes dedicated funding, with Maryland officials having different opinions on whether their current system for supporting WMATA is dedicated, and the District of Columbia and Virginia viewing dedicated funding as a specific source statutorily dedicated to WMATA. In addition to addressing this fundamental issue, the jurisdictions must also resolve the following issues:

- what proportion of the jurisdictions' payments to WMATA would come from dedicated sources, and how to mitigate the risk associated with dependence on these sources;
- whether dedicated funding would result in a net increase in the amount WMATA receives from the Compact jurisdictions, and what portion of the total amount dedicated to WMATA each jurisdiction would pay;
- whether dedicated funding should be used exclusively for WMATA's capital or operating needs, or both; and
- whether increased oversight of WMATA is needed to ensure adequate accountability for dedicated funds.

There is currently no agreement among WMATA's stakeholders—at the local, state, and federal levels—as to what proportion of the Compact jurisdictions' total payments to WMATA should come from dedicated funding. Although, as currently written, H.R. 3496 would require that all state and local contributions come from dedicated funding, no jurisdiction has offered a proposal that would meet this requirement, and none of the state and local officials we spoke with indicated that measures to fulfill this requirement are likely. As noted earlier, among the 23 largest transit agencies with dedicated funding, an average of 70 percent of their state and local contributions came from dedicated sources in 2003.
Among the District of Columbia and the cities and counties in Maryland and Virginia, officials from two jurisdictions expressed support for providing all payments to WMATA from dedicated sources, although officials from one of these jurisdictions also recognized that such an approach was not likely to have regional support. As a result, WMATA's stakeholders will need to determine a dedicated funding level that is acceptable to all parties. If a large proportion of WMATA's state and local contributions were to come from dedicated funding, stakeholders would also need to determine how the risk of revenue volatility would be balanced between the jurisdictions and WMATA. The revenue sources chosen for dedicated funding for WMATA may fluctuate from year to year, requiring the transit agency to work within the constraints of the available revenue or necessitating additional appropriations from the state and local jurisdictions supporting WMATA. Legislation establishing dedicated funding for other transit agencies sometimes provided safeguards for revenue streams, such as specifying an annual revenue floor. These safeguards can better protect the transit agencies from revenue fluctuations, but the state or local government bears the burden of ensuring adequate revenue to the transit agency each year. WMATA's funding partners will need to determine the extent to which they or WMATA should take on this risk. Officials we interviewed from each of the localities in Maryland and Virginia said that dedicated funding should result in a net increase in payments to WMATA. Officials from two Virginia jurisdictions elaborated, saying that dedicated funds could be used both to replace part of the subsidy payments jurisdictions currently make from their general funds and to provide additional funding to WMATA, resulting in an overall increase.
An official from another Virginia jurisdiction said that dedicated funding should only result in a net increase in the jurisdictions’ payments to WMATA if the federal government participates in supporting WMATA. Officials from the District of Columbia did not offer an opinion on this topic. Although the officials we interviewed generally said that dedicated funding should be used to increase their financial support of WMATA, the dedicated funding proposals introduced in the region are not all clear about whether dedicated revenues are to be in addition to the jurisdictions’ current payments. The legislative proposals introduced in Maryland and Virginia state that the dedicated revenues are not meant to reduce or replace other funding sources. However, because these proposals do not explicitly state that dedicated revenues will be used to provide WMATA with additional funding—above the jurisdictions’ current level of payments—it remains unclear if these proposals would result in a net increase in the payments to WMATA. The District of Columbia’s legislation does, however, state that its purpose is to provide additional payments to WMATA. Regardless of whether dedicated funding results in a net increase in the amount of payments to WMATA, the region would need to determine what portion of the total amount dedicated to WMATA each jurisdiction would pay—that is, whether the amount of payment from dedicated sources would be based on current allocation formulas or would be determined using another means. None of the legislative proposals explicitly states how the amount of payments to WMATA from dedicated revenues would be determined, but local officials we interviewed did express views on this matter. The District of Columbia officials we spoke with said that the amount of payments to WMATA from dedicated revenue sources should not be determined using the current allocation formulas. 
These officials said they believe the burden of providing financial support for WMATA should be more evenly distributed across the three major jurisdictions. This view is reflected in the District of Columbia’s legislative proposal, which includes a provision that would require Maryland and Virginia to dedicate an amount of revenue at least equal to that dedicated by the District of Columbia, although it does not specify how that amount would be determined. Officials from two of the Northern Virginia Compact jurisdictions and from the Virginia Department of Transportation stated that they believed the current allocation formulas should also be applied to dedicated revenues provided to WMATA. Neither the Virginia legislation nor the Maryland legislation explicitly states how the relative size of payments to WMATA would be determined. State and local officials from Maryland, along with other Virginia officials we met with, did not express strong views about whether the current allocation formulas would be applied to additional funding provided to WMATA. Whether additional funds provided to WMATA from dedicated sources are distributed to WMATA based on the existing allocation formulas or using another means could have an effect on the distribution of payments among the jurisdictions. For example, using the approach recommended by the Metro Funding Panel—in which all local Compact jurisdictions would provide the entire proceeds of a one-quarter percent or one-half percent sales tax to WMATA—the amount of funds that each jurisdiction would provide to fund WMATA’s estimated capital shortfall would be based on the jurisdiction’s tax receipts, rather than on the allocation formulas. As a result, the payments would be shifted away from the District of Columbia and Maryland and toward Virginia. 
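The direction of that shift can be illustrated with a small sketch. The shares below are hypothetical round numbers chosen only to show the mechanics of moving from fixed allocation formulas to receipt-based contributions; they are not the actual formula shares or sales tax receipt shares of the jurisdictions.

```python
# Illustrative sketch: how the basis of allocation changes each
# jurisdiction's share of an additional payment toward the estimated
# capital shortfall. All shares below are HYPOTHETICAL round numbers,
# not the actual formula or tax receipt shares.
SHORTFALL = 148_000_000  # dollars; the panel's estimated 2010 shortfall

# Hypothetical shares under the current allocation formulas.
formula_shares = {"District of Columbia": 0.37, "Maryland": 0.37, "Virginia": 0.26}
# Hypothetical shares of regional sales tax receipts.
receipt_shares = {"District of Columbia": 0.25, "Maryland": 0.35, "Virginia": 0.40}

for jurisdiction in formula_shares:
    by_formula = SHORTFALL * formula_shares[jurisdiction]
    by_receipts = SHORTFALL * receipt_shares[jurisdiction]
    print(f"{jurisdiction}: ${by_formula:,.0f} by formula, "
          f"${by_receipts:,.0f} by receipts (shift {by_receipts - by_formula:+,.0f})")
```

With these hypothetical shares, the receipt-based approach lowers the District of Columbia's and Maryland's payments and raises Virginia's, which is the direction of the shift described above.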
Table 2 compares the current distribution among the jurisdictions of payments for WMATA's operating subsidy and capital improvement program to the distribution of additional payments for the estimated capital shortfall using a dedicated regional sales tax. Whether funds are used for operations, capital projects, or both has implications for key issues, such as the purpose of the dedicated funding and the appropriate amount of that funding. The Metro Funding Panel proposed that dedicated funding be used to cover WMATA's budgetary shortfall, which the panel projected would occur largely due to planned capital expenditures. Dedicated funding legislation introduced in Maryland and Virginia states that funds are to be used for operations and capital, while the District of Columbia's legislation states only that funds are to be used for "maintaining and improving the transportation system [of WMATA]." Officials we spoke with from local jurisdictions also had varied views on this topic:

- Representatives from three of the eight local jurisdictions stated that dedicated funding should go toward funding WMATA's capital needs, citing the following advantages: (1) capital planning benefits from the predictability of dedicated funding because such planning tends to involve multiple years; (2) WMATA's unfunded needs are mostly capital needs related to system rehabilitation and capacity, a conclusion reached in the Metro Funding Panel report; and (3) an annual subsidy is already in place to fund operations.
- Representatives from two other local jurisdictions stated that dedicated funding should be used for both operations and capital needs. They noted that (1) operations and capital programs can both benefit from the stability provided by dedicated funding, (2) transit agencies can be more efficient when given the flexibility to use funds for either purpose, and (3) making operating payments to WMATA from dedicated funding, rather than from the jurisdictions' general funds, can make budgeting easier for both WMATA and the jurisdictions.
- Representatives from three of the eight local jurisdictions had no opinion on whether dedicated funding should be used for operations or capital needs.

In earlier testimony on WMATA, we highlighted the importance of having reasonable assurances that if WMATA were to receive additional funds, it would spend these funds effectively. H.R. 3496 would make additional federal funding contingent upon WMATA's establishing an office of inspector general, and, in April 2006, the WMATA Board of Directors approved a resolution that would establish such an office. The issue of appropriate oversight was also discussed by regional stakeholders during a summit on dedicated funding for WMATA in October 2005. Summit participants—who included state and local officials from the Compact jurisdictions—agreed that steps should be taken to improve oversight of WMATA. Additionally, U.S. Representative Albert Wynn, whose district includes parts of Prince George's and Montgomery Counties, sent a letter to WMATA urging the Board of Directors to create an "independent investigative authority" to study WMATA's budgets, plans, purchases, and employee relations with the goal of improving operations and alerting the public to problems. Officials in six of the eight local jurisdictions, as well as an official with a state department of transportation, told us either that they were concerned a loss of governance could occur with dedicated funding or that it is important to have accountability mechanisms in place.
For example, one official said that additional oversight of WMATA is necessary and particularly important if WMATA is given greater control over its revenue stream through dedicated funding. An official from a state transportation department said it was important to improve oversight of WMATA through such steps as increasing access to WMATA’s financial and operating data if WMATA were to receive additional funds. We provided copies of a draft of this report to WMATA and the U.S. Department of Transportation for their review and comment. We received comments, consisting of technical clarifications, from officials from the Department of Transportation’s Office of Budget and Policy and from WMATA’s Interim General Manager, Auditor General, and Director of Intergovernmental Relations, which we incorporated in the report, as appropriate. We also provided officials from the District of Columbia, Maryland, and Virginia with an opportunity to comment on segments of the report pertaining to their legislative processes and the dedicated funding bills introduced in their legislative bodies. These officials also provided technical clarifications, which we incorporated in the report, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and to the Secretary of Transportation, the Interim General Manager of WMATA, and officials in the state and local jurisdictions with whom we spoke. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix IV. To determine the characteristics of dedicated funding, and how it affects transit agencies and state and local governments, we reviewed the literature on transit agencies' use of dedicated funding and interviewed representatives of a major credit rating agency and the Government Finance Officers Association. We performed semistructured interviews with six transit agencies—Bay Area Rapid Transit, Dallas Area Rapid Transit, the Massachusetts Bay Transportation Authority (MBTA), New York's Metropolitan Transportation Authority, the Southeastern Pennsylvania Transit Authority, and St. Louis Metro—which we selected to include a cross section of characteristics that are similar to those of the Washington Metropolitan Area Transit Authority (WMATA), including size of total budget, modes operated, age of rail system, and service area. Additionally, we selected agencies that had a diversity of dedicated revenue sources. We also reviewed the legislation establishing dedicated funding and budget and financial documents from some of these agencies. We analyzed financial data for the 25 largest transit agencies using information in the Federal Transit Administration's (FTA) National Transit Database (NTD). The NTD contains financial data reported to FTA by transit agencies. We used 2003 data, the most recent year for which data were available at the time of our analysis. We selected the top 25 agencies based on the size of their combined operating and capital budgets. To assess the reliability of the NTD, we interviewed an FTA official knowledgeable about the database and reviewed pertinent documentation. FTA has several processes in place to assure the reliability of the NTD data, including the following:

- The data that agencies report have to be reconciled against the agencies' own audit reports. The data are then certified by the chief executive officer of the agency.
- FTA uses an automated program that checks transit agencies' current-year entries against the previous year's data. If an inconsistency is identified, the general manager of the agency is contacted to verify the information.
- The NTD system is backed up every hour to ensure that, in the event of a power loss or other disruption to the system, the data would not be lost.

We also compared the NTD data with the information we received from our interviews with the six transit agencies, as well as with some agencies' budget and financial documentation. Generally, the information we received from transit agencies supported the information in the NTD. However, there was one instance in which we recategorized a revenue source reported by an agency. MBTA reported the revenue from local assessments as general revenues; these assessments, according to MBTA officials and financial documents, are dedicated. Because our analysis of NTD data included determining the proportion of state and local contributions that come from dedicated sources, we placed the revenue from local assessments into the dedicated category. Based on our assessment, we determined that the NTD data were sufficiently reliable for our purposes. To compare potential revenue sources that could be used as dedicated funding for WMATA, we reviewed literature on the economics of state and local public finance and mass transit funding and the Metro Funding Panel report, and met with experts in state and local public finance and the financing of transportation and with staff from the Metro Funding Panel.
Based on that review and those discussions, we

- identified year-to-year revenue stability, longer-run revenue adequacy, and the tax or fee rate necessary to yield a specified amount of revenue as key considerations for choosing a revenue source to dedicate to WMATA;
- identified equity, efficiency, and administrative cost as additional considerations that could be affected if taxes and fees are increased to provide additional funding to WMATA;
- identified the sales tax, the payroll or income tax, the motor vehicle fuel tax, the property tax, access fees, and vehicle registration fees as the revenue sources we would compare; and
- assessed these revenue sources based on the considerations identified.

To determine the major actions required to establish dedicated funding for WMATA, what progress on these actions has been made so far, and what issues related to dedicated funding have emerged, we interviewed the following state, local, and regional officials:

- the chief administrative officers, or other appropriate officials, from each of the eight local Compact jurisdictions or their representatives;
- the Director of Transportation from the District of Columbia and the Secretaries of Transportation from Maryland and Virginia;
- members of the Maryland General Assembly, the Virginia General Assembly, and the City Council of the District of Columbia—we selected officials who sit on committees that would be involved in dedicated funding legislation;
- officials from the Offices of the Parliamentarian at the U.S. House of Representatives and the U.S. Senate;
- representatives from the Northern Virginia Transportation Commission; and
- WMATA officials, including representatives from the Board of Directors, the Office of Policy and Intergovernmental Relations, and the Office of General Counsel.

We also reviewed dedicated funding legislation that was proposed in the District of Columbia, Maryland, and Virginia, and Maryland statutes pertaining to payments to WMATA.
We performed a legal analysis of how the requirements of H.R. 3496 compare with the provisions in the WMATA Compact. We also attended public meetings relating to dedicated funding for WMATA, including an October 2005 regional summit and hearings of the District of Columbia City Council and the Maryland General Assembly. The following paragraphs provide a summary—based on our analysis of the economic literature and the Metro Funding Panel report—of how six revenue sources (the sales tax, the payroll/income tax, the motor vehicle fuel tax, the property tax, access fees, and vehicle registration fees) compare according to stability, long-run adequacy, and tax or fee rate required. Previous studies suggest that revenues from the sales tax are more susceptible to economic fluctuations than property or fuel tax revenues. Sales tax revenues are susceptible to economic fluctuations because they are dependent on consumer purchases, and these purchases vary with changes in income. Studies estimate that the economic fluctuations of retail sales tax revenues are about the same as those for income tax revenues, but that the sales tax is less prone to random variations. Economic estimates suggest that sales tax revenues are more stable if the tax base includes items for which purchases remain relatively constant. These items are commonly referred to as necessities, including food, clothing, and prescription drugs. However, caution is needed when applying these results because there can be significant variations at the state level. For instance, the results of two studies suggest that Maryland sales tax revenues are more responsive to economic fluctuations than are sales tax revenues in Virginia. In terms of long-run revenue growth, there is a general consensus in the economics literature that sales tax revenues do not keep pace with overall economic expansion. 
This slower growth, compared with income, occurs because retail sales usually take up a declining share of income as income rises. Two studies produced very similar state-specific estimates for Maryland and Virginia consistent with this finding; a 10 percent increase in total personal income is associated with a roughly 8 percent increase in sales tax revenues. Economic estimates also suggest that tax bases that include food have lower levels of long-run growth than those that exclude food, although bases including food tend to be more stable. In addition, other results suggest that sales tax revenues grow faster as income rises when the sales tax base includes more services because spending on the service sector has been rapidly increasing. Collecting a specified amount of dedicated revenue from a general sales tax usually requires a relatively small tax rate because the base to which that rate would be applied is relatively large. When retail purchases of many services, as well as goods, are taxed, the base is particularly large and a smaller tax rate would be needed than if the tax applied only to retail purchases of goods. When retail purchases of major categories of goods, such as food purchases from grocery stores, are excluded, then the base is smaller and a higher tax rate would be needed. The Metro Funding Panel estimated that a sales tax rate of 25 cents per $100 of taxed retail sales throughout the WMATA Compact region would be required to collect $148 million in 2010. Compared with some of its estimates for other revenue sources, the panel’s estimate for the sales tax was relatively straightforward and based on publicly available data. However, for several reasons, the retail sales tax base might increase at a rate different from the historical average growth rate that the panel assumed in developing its estimate. 
These reasons include future population or income growth that differs from such growth in the past; increased retail sales through the Internet; and decisions by state and local governments to apply the sales tax to some previously untaxed purchases, or to stop applying it to some currently taxed purchases. Previous studies suggest that income tax revenues are more susceptible to economic fluctuations than property or fuel tax revenues because income varies more over the course of the business cycle than do property values or fuel purchases. The variability in income tax revenue is about the same as for sales tax revenue, but random variations are larger. Unlike the sales tax discussed above, which is more stable with a broader base, tax revenues from personal income are more stable under the payroll tax, or when the tax base is limited to wage income. Economic studies indicate that there are some large differences in the fluctuation of income tax revenues due to changes in economic conditions at the state level. Two studies provide conflicting estimates of the relative volatility of income tax revenues in Virginia and Maryland, with one study suggesting more volatility in Virginia and the other suggesting more volatility in Maryland. These differences at the state level indicate the need for caution in generalizing from state or national studies because smaller jurisdictions, such as cities or counties, might also differ substantially in measures of revenue stability. In terms of long-run revenue growth, previous studies have consistently indicated that income or payroll tax revenues more than keep pace with overall economic growth. Income tax revenues grow faster than income levels because of the progressive nature of most income taxes: People with higher incomes typically pay a larger percentage of their income in taxes than those with lower incomes. 
This progressivity occurs because of graduated tax rates that get higher as incomes grow and deductions and credits that are often phased out at higher income levels. Payroll tax revenues may not rise as much with economic growth as income tax revenues do because a payroll tax might not have graduated tax rates. A study of individual states found that evidence from Maryland and Virginia is consistent with the broader observation that income tax revenues generally rise faster than income levels, with Virginia showing the larger long-run growth in tax revenues for a given growth in income. Collecting a specified amount of dedicated revenue from an income or payroll tax generally requires a relatively small tax rate because the base to which that rate would be applied is relatively large. A lower tax rate would be needed for an income tax because the tax base would include both nonwage and wage income, while the payroll tax base would include only wage income. Exempting some income from tax—such as by putting a cap on the amount of wage income subject to a payroll tax, as is done for Social Security, or allowing some form of deduction for income up to some level—would raise the tax rate required on the remaining income because the base would be smaller. The Metro Funding Panel estimated that a payroll tax rate of 16 cents per $100 of wages earned by residents of the WMATA Compact region, with wages below $15,000 per year and above $100,000 per year exempt, would be required to collect $148 million in 2010. According to a panel staff member who participated in developing these estimates, the payroll tax estimate was the most complex to develop. Because the panel derived this estimated tax rate from an estimate of the tax base that itself was derived from Census Bureau data on income that included nonwage income, the tax rate may be lower than the rate that would be needed to raise the same amount of revenue from a tax that applied only to wage income.
On the other hand, the estimated tax base did not include any income earned by nonresidents of the Compact region. If such income could be taxed, then the tax base could be higher, which would allow the same amount of revenue to be collected with a lower tax rate. Previous studies indicate that motor fuels tax revenues exhibit the highest degree of stability in the presence of economic fluctuations compared with property, income, and sales taxes. The revenues are more stable because in the short run fuel purchases do not change much in response to changing economic conditions. However, the literature indicates that fuel tax revenues have the most severe random fluctuations, such as those due to natural disasters or other events that disrupt the supply of oil. In terms of long-run growth, studies have found that motor vehicle fuel revenues have historically grown more slowly than general measures of economic growth, but not as slowly as sales tax revenues. Future long-run adequacy concerns remain because of potential fuel efficiency improvements and increased transit use resulting from rising fuel prices and congestion. This concern is exacerbated because motor fuel taxes are generally applied on a per-gallon basis, not as a percentage of the total sale price. Under this structure, revenues are proportional to fuel consumption, not total fuel expenditures, which may require that the motor fuels tax rate be increased over time if revenues are to keep pace with the demand for transit expenditures in periods of high inflation. Collecting a specified amount of dedicated revenue from a tax on retail purchases of motor vehicle fuel requires a relatively large tax rate because the base to which that rate would be applied is relatively small compared with, for example, the base for a general sales tax on retail purchases. 
If, for various policy reasons, some fuel purchases are exempt from the tax, then the required tax rate on the remaining fuel purchases would be even higher. The Metro Funding Panel estimated that a motor vehicle fuel tax rate of 11.1 cents per gallon of motor vehicle fuel purchases within the WMATA Compact region would be required to collect $148 million in 2010. However, uncertainty about some of the assumptions underlying this estimate may make it less reliable than the panel’s more straightforward estimates for some of the other revenue sources. For example, this estimate is based on an assumption that average fuel efficiency does not change throughout the period analyzed—until 2015. However, if fuel efficiency improves in response to high fuel prices, then the number of gallons purchased will be less than the panel estimated and the tax rate required would be higher than the panel estimated. In addition, the panel’s estimate of the number of gallons of fuel purchased in the Compact region in the baseline period is based on an estimate from the Metropolitan Washington Council of Governments on the number of vehicle miles traveled within the Compact region. Using vehicle miles traveled introduces uncertainty in an estimate of fuel purchases because some driving in the Compact region is done by vehicles that were filled up with fuel outside the region, while some fuel purchased within the Compact region was used in cars driven outside the Compact region, and these two influences might not be completely offsetting.

Previous studies suggest that property tax revenues are moderately susceptible to economic fluctuations, but generally less so than sales and income/payroll taxes, because assessed property values tend to vary less over the course of the business cycle than do retail sales or incomes.
Fluctuations in property tax revenues due to changes in economic conditions are generally more predictable than those of other revenue sources because there is often a lag between changes in economic conditions and their effects on property tax revenues. This lag occurs because it often takes a while for changes in property values to be reflected in property assessments. However, this advantage in predictability is only captured using more sophisticated forecasting techniques that take into account economic indicators from the recent past. In addition, random fluctuations in property tax revenues are relatively small. The evidence from previous studies on the long-run growth of property tax revenues is inconclusive. Studies indicate that revenues exhibit widely varying long-run growth patterns at the county level, sometimes increasing faster than income and sometimes more slowly. Researchers have provided evidence suggesting that these large local disparities are generated by differing local economic conditions and implementation structures. Collecting a specified amount of dedicated revenue from a property tax generally requires a relatively small tax rate because the base to which that rate would be applied is relatively large. The Metro Funding Panel estimated that a property tax rate of 3.44 cents per $100 of assessed value in the WMATA Compact region would be required to collect $148 million in 2010. Compared with some of its estimates for other revenue sources, the panel’s estimate for the property tax was relatively straightforward and based on publicly available data, as was the sales tax estimate. However, long-run property value growth might differ from the growth rate in the past, which could cause the required tax rate to differ from the panel’s estimate.

Access fees are not as widely used as the previously discussed revenue sources, and the economic literature on the characteristics of access fees is sparse.
Intuition suggests that revenues would likely be stable in the face of economic fluctuations if the fee rate were set on a per-square-foot basis, unless the property around a Metrorail station was relatively undeveloped and significant building was taking place or expected to occur. In this instance, rapid short-run growth in revenues would be expected until the development was completed. Although access fee revenues would likely be relatively stable, long-run revenue growth would be limited if the fee rate were applied per square foot and remained the same over time. Revenue growth would only occur to the extent that taxable space increased and would likely be minimal, or even negative, in real dollars if the rate is not indexed for inflation. However, to the extent that revenue growth is due to increased development near stations, there might be a link between revenue growth and the increased demand for transit expenditures. Collecting a specified amount of dedicated revenue from an access fee generally requires a fee rate that, especially compared with a property tax, is large relative to the assessed value of the property, because the base is much narrower than that of a general property tax. Many details could determine the required fee rate, such as the radius of the area around a rail station within which properties would be subject to an access fee. The Metro Funding Panel estimated that an annual transit access fee rate of 30 cents per square foot of federal and commercial property within 0.5 miles of designated Metrorail stations would be required to collect $148 million in 2010. The panel derived this estimate from data on the square footage of federal property and all commercial and hotel space within 0.5 miles of 63 Metrorail stations. If an access fee were in place, it might apply to additional categories of property not included in the estimated tax base, which would lower the required fee rate. 
However, if federal properties were not subject to the access fee, the required fee rate would be higher.

Revenues from vehicle registration fees are also likely to be relatively stable from year to year. In addition, their response to economic fluctuations is likely to lag because car ownership rates are not likely to vary much over the course of the business cycle, and any variation that might occur is likely to occur after a downturn, rather than during it. Long-run growth in vehicle registration fee revenues is unlikely to keep pace with economic growth. Car ownership rates are already so high that higher household income is unlikely to lead to a proportionate increase in the number of cars owned per household (for example, a doubling of average income levels is unlikely to lead to double the number of cars per household). However, longer-term increases are possible in areas with high sustained levels of population growth and, therefore, vehicle ownership growth. This revenue source might provide less long-run adequacy for funding transit than those previously discussed because revenues over the longer term may change inversely with changes in the demand for transit expenditures. For instance, policy changes, increasing fuel prices, and increasing road congestion might lead households to use transit more and own fewer vehicles, causing the demand for transit to increase while revenues from vehicle registration fees are decreasing. Collecting a specified amount of dedicated revenue from motor vehicle registration fees requires a relatively large fee rate because the base to which that rate would be applied is relatively small compared with sales, property, and income taxes. If, for policy reasons, some types of motor vehicles were exempted from the fee, then the required fee rate would be even larger.
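Each of the panel's rate estimates discussed above reflects the same arithmetic: the required rate is the target revenue divided by the tax base, so a narrower base forces a higher rate. A minimal sketch in Python illustrates this relationship; the implied-base figures it derives are back-of-envelope calculations from the published rates and the $148 million target, not figures reported by the panel:

```python
TARGET = 148_000_000  # dedicated revenue target for 2010, per the Metro Funding Panel

def required_rate(target, base):
    """Required tax or fee rate, expressed as a fraction of the base."""
    return target / base

def implied_base(target, rate):
    """Tax base implied by a target revenue and a given rate."""
    return target / rate

# Panel rate estimates, expressed per dollar (or per unit) of base:
payroll_rate  = 0.16 / 100        # 16 cents per $100 of taxable wages
property_rate = 3.44 / 100 / 100  # 3.44 cents per $100 of assessed value
fuel_rate     = 0.111             # 11.1 cents per gallon
fee_rate      = 0.30              # 30 cents per square foot per year

print(f"implied taxable wages:  ${implied_base(TARGET, payroll_rate):,.0f}")
print(f"implied assessed value: ${implied_base(TARGET, property_rate):,.0f}")
print(f"implied gallons taxed:   {implied_base(TARGET, fuel_rate):,.0f}")
print(f"implied square footage:  {implied_base(TARGET, fee_rate):,.0f}")

# The fuel-efficiency caveat noted above: if efficiency improves so that
# 10 percent fewer gallons are purchased, the rate must rise proportionally.
gallons = implied_base(TARGET, fuel_rate)
print(f"rate if gallons fall 10%: {TARGET / (gallons * 0.9) * 100:.1f} cents/gallon")
```

Dividing the target by each implied base recovers the panel's rates, and the last line quantifies the fuel-tax caveat: a 10 percent drop in gallons purchased pushes the required rate from 11.1 cents to roughly 12.3 cents per gallon.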
The Metro Funding Panel did not evaluate motor vehicle registration fees as a funding source for dedicated revenues for WMATA and thus did not estimate the fee rate required to collect any specified amount of revenue.

We identified administrative cost, equity, and efficiency as key considerations in raising additional revenue on the basis of discussions with state and local public finance experts and public officials and a review of the relevant economics literature. Administrative cost includes the cost of collecting, enforcing, and remitting the additional revenue in addition to the compliance burden (e.g., out-of-pocket expenses for record keeping and time) placed on taxpayers and those paying fees. Additional administrative costs are likely to be large if revenue is increased by implementing a new tax or fee and relatively small for an increase in a tax or fee rate for a revenue source currently in place at the appropriate level (e.g., state or locality).

Economists often assess equity according to two principles:

Ability-to-pay principle. Those who are more capable of bearing the burden of taxes or fees (usually those with higher income levels) should pay more in taxes and fees than those with a lesser ability to pay. A tax or fee rate structure is generally thought to be more equitable if it is consistent with this principle. Some tax or fee rate structures are also progressive—that is, the tax or fee liability as a percentage of income increases as income increases.

Benefit principle. Those who pay for a service are the same individuals benefiting from the service.

Efficiency can be measured in different ways, but economists commonly use two concepts to evaluate the efficiency of a revenue structure:

Economic behavioral distortions. This term refers to changes in individual decision making due to incentives in the tax or fee system that move the economy away from its most efficient outcome. Distortions are likely to be smaller when a tax or fee is applied to a broad base (both jurisdiction—who is taxed; and range—what is taxed) and rates do not differ significantly across neighboring jurisdictions.

Accountability. Those benefiting from a service pay the full social cost of the service. If the beneficiaries do not have to bear the full cost, they may seek to have the government provide more of the service even when additional amounts of the service cost more than the value of the additional benefits provided, which would be inefficient. Although the concept that those who benefit from a service should pay for it is similar to the benefit principle for assessing equity, in discussing the effects of adherence to or deviation from this principle on efficiency we are concerned with the accountability it provides rather than the fairness.

Our analysis suggests that if there are substantial differences in administrative costs among revenue sources selected to address a WMATA funding shortfall, these differences may be more important than equity and efficiency effects, particularly if the current formula for allocating local contributions to fund WMATA is retained and jurisdictions are allowed to choose their own revenue sources. There can be substantial differences in the equity and efficiency effects among the different revenue sources when they are being used to finance state and local government as a whole, and for at least some of the potential revenue sources we analyzed, these effects have been well studied in the economics literature. However, differences in the effects associated with funding a WMATA shortfall are likely to be much smaller because the increase in revenue needed is small compared with the revenue raised to fund overall state and local government operations.
Moreover, equity and efficiency effects are sometimes difficult to measure, and there is a lack of consensus in the economics literature regarding the equity and efficiency implications for several of the revenue sources discussed below. In contrast, differences in administrative cost among revenue sources can be easier to identify and, therefore, more likely to affect decision making. These costs include items such as computer systems, forms, and collection devices, as well as the time spent by government employees and the individuals paying the tax or fee. However, if the state and local jurisdictions served by WMATA implement a regionwide tax or fee—an approach proposed by the Metro Funding Panel but which does not have strong support among the Compact jurisdictions—then there could be substantial additional administrative costs as well as effects on equity and efficiency. Administrative costs might be high because no regional collection mechanism is already in place and implementation would require the coordination of collection and enforcement measures across multiple state and local jurisdictions. Equity and efficiency effects are also likely to be greater with the implementation of a regionwide tax because it would change the interjurisdictional allocation of WMATA payments for the shortfall. Changes in equity and efficiency would likely be even larger if a regional tax or fee were used to fund the entirety of state and local WMATA payments, not just the shortfall, because of the additional revenue involved.

Administrative costs associated with collecting additional revenue from a sales tax are likely to be relatively low, especially when compared with those associated with access fees and fuel taxes. Sales taxes are one of the two main funding sources at the state level (along with income taxes) and are often used to generate revenue at the local level and in special service districts; thus, tax collection procedures already exist in many places.
Administrative costs could be more substantial if jurisdictions are faced with new collection requirements, such as implementing a local option tax where one does not already exist. In terms of equity based on the ability-to-pay principle, economists have traditionally viewed the sales tax as regressive (although less so when food purchases at grocery stores are excluded from taxation); those with lower income levels pay a higher percentage of their income in sales tax than those with higher income levels. However, more recent analyses have identified some factors that suggest that sales taxes may be closer to proportional and less regressive than previously believed. One factor is the economic incidence of the sales tax, or who actually bears the burden of a revenue source. In taxation, the individuals who bear the burden of a tax may or may not be the same individuals who remit the revenue to the government. For example, when a sales tax is added to a product, retailers remit the revenue to the government but they may or may not actually be bearing the burden of the tax. Retailers may leave the price of the product unchanged and simply add the sales tax to the price, in which case the consumer pays the full amount of the tax. Retailers might also reduce the price of the product by the amount of the tax so as not to lose sales, in which case the retailer bears the burden of the tax. Another possibility is that the price of the product might fall, but not by the full amount of the tax, in which case retailers and consumers share the burden of the tax. Traditional analyses of the sales tax have generally assumed that consumers bear the full burden of the tax, but more recent analyses have questioned that assumption. If the burden is borne in part by retailers, then the sales tax may be less regressive than previously believed. Another factor is the definition of income used in measuring progressivity or regressivity. 
Traditional analyses that have found the sales tax to be regressive have used annual income levels as the measure of income. More recent research has shown that lifetime income might be a more relevant measure of income as long as there are not severe constraints on an individual’s ability to borrow. Using lifetime income, the sales tax appears roughly proportional. That is, people with varying levels of income spend approximately the same percentage of their lifetime income on consumption. In terms of efficiency, evidence from theoretical and empirical studies suggests that the sales tax is distortionary in that it alters individuals’ decisions about where and what to purchase. The sales tax diverts purchases from taxed items toward untaxed or lightly taxed alternatives (e.g., leisure, services, Internet sales, and retail in neighboring jurisdictions). However, the increase in the sales tax needed to collect the revenue associated with funding a WMATA shortfall is likely to be small enough to generate only minor changes in efficiency. The biggest distortions are likely to occur for purchases of items for which consumers are sensitive to small changes in price, which might happen if there are untaxed or lightly taxed alternatives that are close substitutes. With respect to the benefit principle of equity and the accountability component of efficiency, the sales tax roughly matches the users of WMATA’s services with the costs of those services to the extent that all local residents benefit from transit, and visitors to the Washington, D.C., region, who pay sales taxes while they are in that area, are also likely to use the services. However, from an equity perspective, the adherence to the benefit principle is limited because funding a shortfall with a sales tax does not guarantee that those who receive greater benefits pay more tax; that is, a sales tax is not well targeted toward transit beneficiaries.
From an efficiency perspective, the link with accountability is weakened because heavy users of the transit system may advocate investment beyond the economically efficient level because they might not have to bear as large a share of the costs compared with the share of the benefits they would receive.

The administrative cost associated with collecting additional revenue from a personal income tax could be relatively low if it is collected at the state level as part of existing state income taxes. However, a local income tax might create significant compliance costs for employers and individuals if it is accompanied by new forms and record-keeping requirements. A regional, state, or local payroll tax might also generate significant compliance costs for employers (in the case of a payroll tax collected at the employer level) if it requires additional record keeping and submitting revenues to a new source. Regarding the ability-to-pay principle for equity, the economics literature has reached a general consensus that the burden of the tax is likely borne by employees in the case of payroll taxes and in proportion to income in the case of income taxes. The economic evidence suggests that employees probably bear most of the burden of a payroll tax through lower wages, even when legislation requires employers to pay half of the tax liability. As lower-income households rely more heavily on wage and salary income, the payroll tax is generally regressive, particularly at the bottom part of the income distribution, but the tax will be less regressive if there is a minimum threshold of wages for paying the tax and more regressive if there is a cap on the amount of income to which the tax applies. It is generally accepted in the economics literature that income tax liabilities are borne by individuals who remit the tax to the government; that is, the tax is not shifted to other individuals.
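The effect of a wage floor and cap on payroll-tax regressivity, noted above, can be illustrated with the parameters the Metro Funding Panel used. The assumption that only the slice of wages between the floor and the cap is taxed is one plausible reading of the panel's exemptions, adopted here purely for illustration:

```python
RATE  = 0.16 / 100   # panel's estimated rate: 16 cents per $100 of wages
FLOOR = 15_000       # wages below this level are exempt
CAP   = 100_000      # wages above this level are exempt

def payroll_tax(wages):
    """Tax under one plausible reading of the panel's design:
    only the slice of wages between FLOOR and CAP is taxed."""
    taxable = max(0, min(wages, CAP) - FLOOR)
    return RATE * taxable

# Effective rate (tax as a share of wages) across illustrative wage levels
for wages in (10_000, 30_000, 100_000, 300_000):
    tax = payroll_tax(wages)
    print(f"wages ${wages:>7,}: tax ${tax:7.2f} = {tax / wages:.4%} of wages")
```

In this sketch the effective rate is zero below the floor, rises through the middle of the wage distribution, and then falls once wages pass the cap, which is the pattern described above: the floor blunts regressivity at the bottom while the cap creates it at the top.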
Most income taxes are structured to be progressive: People with higher incomes typically pay a larger percentage of their income in taxes than those with lower incomes. Thus, the income tax is generally thought to be consistent with ability-to-pay principles. However, an income tax can be made regressive, proportional, or progressive depending on the tax rate structure and the distribution of deductions and credits. Income and payroll taxes have unclear efficiency implications with respect to behavioral distortions because previous studies have not yielded a consensus on the degree to which they create distortions. Generally, payroll or income taxes that alter decisions about whether to work, how many hours to work, and how hard to work can cause distortions in the economy, but estimates of the behavioral responses to income taxes vary widely. However, there is some consensus that within households, income taxes are more likely to affect the work decisions of a secondary earner. With respect to the benefit principle of equity and the accountability perspective of efficiency, payroll taxes are likely to be better targeted toward beneficiaries than are personal income taxes. Like a sales tax, an income tax roughly matches the benefits of WMATA’s services to the cost of those services, to the extent that all local residents benefit from transit, although, unlike a sales tax, an income tax does not directly collect revenue from visitors to the Washington, D.C., region who might also benefit from WMATA. However, from an equity perspective, the adherence to the benefit principle is limited because funding a shortfall with a personal income tax does not guarantee that those who receive greater benefits pay more tax. 
From an efficiency perspective, the link with accountability is weakened because heavy users of the transit system may advocate investment beyond the economically efficient level because they might not have to bear as large a share of the costs compared with the share of the benefits they would receive. A payroll tax is likely to be better targeted to transit beneficiaries because some groups of people who would be affected by a personal income tax but not a payroll tax, such as retirees, might be less likely to benefit substantially from further investment in mass transit than workers. Targeting of a payroll tax could be enhanced if it applied to all of those who work in the WMATA Compact region; income taxes are likely to apply to those who live in the Compact region, which would leave out those who work in the region and benefit from transit even though they live elsewhere (and would include those who live in the region but work outside the region and might not benefit as much from transit).

The administrative cost associated with collecting additional revenue from a motor vehicle fuel tax may be substantial if some portion of this tax applied only within the WMATA Compact region and was collected at the retail level. Typically, motor vehicle fuel taxes are collected at the distributor level, including in the District of Columbia, Maryland, and Virginia. Collection at the retailer level would most likely involve additional record keeping for retailers and added costs for local jurisdictions, including setting up a revenue collection procedure, developing standards for record keeping, and enforcing compliance with the tax. The equity implications with respect to ability to pay for the motor vehicle fuel tax are uncertain.
Studies indicate that higher-income households own more cars and drive more total miles, suggesting that they will pay more in motor vehicle fuel taxes than lower-income households, but it is uncertain whether this larger amount of tax paid will represent a larger or smaller share of household income. Given that rates of automobile ownership are fairly high at all income levels beyond the very lowest, the motor vehicle fuel tax may be regressive throughout much of the income range. In addition, higher fuel costs increase the cost of travel and of transporting goods. This added cost is more likely to be reflected in the prices of goods for which the demand is relatively unresponsive to changes in prices (necessities such as food, clothing, and prescription drugs) rather than in the prices of goods and services that are considered to be more luxury items. Because lower-income households spend a larger portion of their incomes on necessities, this effect of the motor vehicle fuel tax would be expected to be regressive. The motor vehicle fuel tax has an ambiguous effect on efficiency with respect to behavioral distortions from a conceptual viewpoint, and there is too little empirical evidence to arrive at a conclusion. Motor vehicle fuel tax increases within a region decrease efficiency to the extent that they lead drivers to waste resources traveling to service stations outside the region to find lower prices. However, this loss in efficiency might be partially or fully offset if the tax increase makes the total fuel price closer to the full social cost imposed by driving (including the cost of the fuel as well as the inconvenience imposed on others due to congestion and pollution). The motor vehicle fuel tax may be less equitable with respect to the benefit principle and less efficient from an accountability perspective than a sales or personal income tax.
Because of differences in car ownership and driving patterns that are unrelated to income, there is likely to be more variance at any income level in the burden of a motor vehicle fuel tax than with a sales or personal income tax, so that even if the benefits of transit accrue to the population as a whole, there is weaker targeting of the tax toward beneficiaries with a motor vehicle fuel tax. Furthermore, when considering specific transit benefits, the link between transit beneficiaries and those who pay the motor vehicle fuel tax is likely to be weak. For automobile commuters and others driving at peak times, however, there is a clear link because they benefit from reduced congestion. In contrast, those who drive at nonpeak times and those who do not drive near the transit corridors also pay the motor vehicle fuel tax while receiving little or no benefit. In addition, transit users who do not own motor vehicles will not directly pay any of the tax, although they could be among the largest beneficiaries. As transit users and businesses near transit lines, not automobile commuters, are likely to be the largest beneficiaries of transit services, they may advocate investment beyond the economically efficient level because they might not have to bear as large a share of the costs compared with the share of the benefits they would receive.

The administrative cost associated with collecting additional revenue from a property tax may be the lowest of the revenue sources we have analyzed. Property taxes are the main funding source for local governments, so tax collection procedures are already in place. Administrative cost would be greater if local jurisdictions tried to uniformly implement all or a portion of a regionwide property tax because they are likely to have different administrative procedures, including how and when assessments are made and the relationship between market value and assessed value.
The equity effects of an increase in a property tax are uncertain because property taxes generally represent a combination of a land tax and a tax on the structures on the land, and the incidence of those two taxes varies. In addition, there are different views on the incidence of property taxes. The traditional view of the property tax suggests that the portion of the tax that applies to land value is likely borne by land owners, making the tax progressive because higher proportions of land are owned by higher-income individuals. However, the portion that applies to structures is likely borne by those who consume the services of the structures—including residents of owner-occupied housing and renters—and previous studies suggest that this portion is proportional or regressive, depending on the measure of income used. Thus, the overall effect is ambiguous. The new view of the property tax suggests that the burden of the property tax is borne by all capital owners. Assuming that capital ownership rises with income, this view suggests that the property tax may be progressive. With respect to efficiency pertaining to behavioral distortions, property taxes taken by themselves might be considered inefficient because they lead to less investment in structures. However, when the effects of other taxes are considered as well, increases in property tax might enhance efficiency. Because the favorable income tax treatment of investment in housing creates incentives for investment in housing beyond the efficient level, raising the property tax could partially offset these incentives and increase efficiency. With respect to the benefit principle of equity and the accountability component of efficiency, the property tax roughly matches the beneficiaries of WMATA service with its cost to the extent that all property values are enhanced by the provision of WMATA’s services.
However, from an equity perspective, the adherence to the benefit principle is limited because funding a shortfall with a property tax increases the tax paid by all property owners, while some property owners would receive most of the benefits. That is, like a sales or personal income tax, a property tax is not well targeted toward beneficiaries, although it may be better targeted than those other taxes to the extent that higher property taxes are collected from owners of properties for which the value has risen over the years due to nearby transit service. Similarly, the link with accountability is weakened because heavy beneficiaries of the transit system, including owners of property with good transit access, may advocate investment beyond the economically efficient level because they might not have to bear as large a share of the costs compared with the share of the benefits they would receive.

The administrative cost associated with collecting additional revenue from access fees is likely to be substantial, perhaps larger than for any of the other revenue sources that we analyzed. The use of access fees would likely involve significant additional administrative cost because local governments would have to develop a new system for implementation, collection, and enforcement. In addition, there would be an increased compliance burden on owners of commercial property located near Metrorail stations because record-keeping requirements would increase. The equity effects of access fees are uncertain because of uncertainty about the incidence of these fees. The burden might be split among property owners, renters, employees, and consumers, depending on the ability of property owners to shift the tax burden to others through price and wage changes, and the economics literature does not contain sufficient empirical evidence to draw conclusions about how much of the burden would fall on each group.
There is also little existing evidence on the efficiency of access fees with respect to behavioral distortions, although economic reasoning suggests that there might be some small efficiency losses. Access fees increase the cost of developing land near transit stations. To the extent that fees are paid out of profits or windfall gains (due to increases in property values) and do not alter decisions on where to build, there are no efficiency effects. However, if an access fee renders an otherwise profitable venture unprofitable, it creates inefficiency by discouraging development around transit services. With respect to the benefit principle of equity and the accountability component of efficiency, access fees are most closely targeted to the beneficiaries of transit service. Those who own property, live, and work near transit services are most likely to draw large benefits from the system and would likely bear a large portion of an access fee. This close connection between beneficiaries and costs would lead to increased efficiency, as there would be no incentive to advocate investment beyond the efficient level because the costs would largely fall on the beneficiaries.

The administrative cost associated with collecting additional revenue from motor vehicle registration fees is likely to be relatively low, especially if the increase in revenue is achieved by just increasing the amount of the fee already collected. Some complexity might be added if the additional revenue is collected at a jurisdiction level not currently imposing a registration fee. In either case, there will likely be little or no increase in individual compliance costs, as there would likely be no additional record-keeping requirement. Compliance costs would increase if vehicle owners were required to make an additional trip or travel to a different location to pay the registration fee.
The equity implications with respect to ability to pay for vehicle registration fees depend on the differences in vehicle ownership rates among income groups and the structure of the fee schedule, such as whether it is a flat fee per vehicle or a fee rate that is based on the value of the vehicle. Studies indicate that higher-income households own more cars, suggesting that they will pay more in vehicle registration fees than lower-income households, but it is uncertain whether this larger amount of tax paid will represent a larger or smaller share of household income. Households owning no vehicles tend to have lower incomes, and those households would pay nothing in vehicle registration fees. However, given that rates of vehicle ownership are fairly high at all income levels beyond the very lowest, a flat vehicle registration fee may be regressive throughout much of the income range. On the other hand, a vehicle registration fee that is applied to the value of the vehicle is likely to be less regressive than a flat fee because the average value of vehicles owned is higher for higher-income households. Nonetheless, empirical evidence on the equity effects of vehicle property taxes, which resemble registration fees based on vehicle value, indicates that these taxes are regressive. With respect to efficiency pertaining to behavioral distortions, vehicle registration fees make owning a vehicle more expensive, and an increase in a flat fee would be expected to reduce the level of vehicle ownership, although the reduction would likely be minimal. There are negative effects on efficiency resulting from vehicle owners not facing the full costs that their vehicle use places on others (including the inconvenience and health effects imposed on others due to congestion and pollution). These negative effects could be mitigated to some extent by the reduction in vehicle ownership brought about by additional fees. 
However, this efficiency gain may be small because a fee imposed at the vehicle registration level only minimally discourages vehicle ownership and does nothing to increase the cost of driving the vehicle on any given trip. Thus, a fee increase would provide no incentives for reduction in the number of trips taken by individuals who do own vehicles. Like a motor vehicle fuel tax, a vehicle registration fee may be less equitable with respect to the benefit principle and less efficient from an accountability perspective than a sales or personal income tax. Because of differences in vehicle ownership that are unrelated to income, there is likely to be more variance at any income level in the burden of a vehicle registration fee than with a sales or personal income tax, so that even if the benefits of transit accrue to the population as a whole, there is weaker targeting of the cost burden toward beneficiaries with a vehicle registration fee. Furthermore, when considering specific transit benefits, the link between transit beneficiaries and those who pay the vehicle registration fee is likely to be weak. Automobile commuters and others driving at peak times benefit from reduced congestion, so for them there is a clear link. However, those who drive at nonpeak times and those who do not drive near the transit corridors also pay the vehicle registration fee while receiving little or no benefit. In addition, transit users who do not own motor vehicles will not directly pay any of the fee, although they could be among the largest beneficiaries. As transit users and businesses near transit lines, not automobile commuters, are likely to be the largest beneficiaries of transit services, they may advocate investment beyond the economically efficient level because they might not have to bear as large a share of the costs compared with the share of the benefits they would receive. 
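The contrast drawn above between a flat registration fee and a fee based on vehicle value can be illustrated with a small arithmetic sketch. The incomes, vehicle values, fee amount, and fee rate below are hypothetical, chosen only to show how each fee structure behaves as a share of household income:

```python
# Hypothetical households: (annual income, value of the vehicle owned).
# Following the pattern described in the text, higher-income households
# are assumed to own more valuable vehicles.
households = [
    (30_000, 8_000),
    (75_000, 20_000),
    (150_000, 45_000),
]

FLAT_FEE = 100       # hypothetical flat annual fee per vehicle
VALUE_RATE = 0.005   # hypothetical fee of 0.5% of vehicle value

for income, vehicle_value in households:
    flat_share = FLAT_FEE / income            # flat fee as share of income
    value_share = VALUE_RATE * vehicle_value / income  # value-based fee share
    print(f"income ${income:>7,}: flat fee = {flat_share:.3%} of income, "
          f"value-based fee = {value_share:.3%} of income")
```

Under these illustrative assumptions, the flat fee falls from about 0.33 percent of income for the lowest-income household to about 0.07 percent for the highest (regressive), while the value-based fee stays in a narrow band of roughly 0.13 to 0.15 percent of income, consistent with the report's observation that a value-based fee is likely to be less regressive than a flat fee.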
In addition to the individual named above, Rita Grieco, Assistant Director; Mark Bondo; Christine Bonham; Jay Cherlow; Elizabeth Eisenstadt; Edda Emmanuelli-Perez; Tami Gurley; Heather Halliwell; Maureen Luna-Long; Susan Michal-Smith; SaraAnn Moessbauer; Josh Ormond; Katie Schmidt; Tina Sherman; Albert Sim; James White; Earl Christopher Woodard; and James Wozny made key contributions to this report.
A regional panel estimated that the Washington Metropolitan Area Transit Authority (WMATA)--Washington, D.C.'s, transit system--will have total budgetary shortfalls of $2.4 billion over 10 years. The panel and others have noted that WMATA's lack of a significant dedicated revenue source may affect its ability to keep the system in good working order. Proposed federal legislation would make $1.5 billion available to WMATA if the local governments established dedicated funding. This report addresses (1) the characteristics of dedicated funding and its effects on transit agencies and governments; (2) how potential revenue sources compare in terms of stability, adequacy, and other factors; (3) major actions needed to establish dedicated funding for WMATA and the progress made to date; and (4) issues that dedicated funding poses for the region and WMATA. To address these issues, GAO reviewed financial data for the nation's 25 largest transit agencies, interviewed officials from 6 transit agencies and from the state and local governments that support WMATA, and reviewed literature on the financing of mass transit. GAO provided a draft of this report to WMATA and the Department of Transportation for review. Officials from these agencies provided technical clarifications that were incorporated in the report, as appropriate. Dedicated funding, an important source of revenue for many transit agencies, is described by the Federal Transit Administration (FTA) as a specific revenue source--such as a sales or gas tax--that is designated to be used for transit and is not subject to appropriations. According to data transit agencies report to FTA, 23 of the 25 largest transit agencies have dedicated funding, although the transit agencies GAO spoke with vary in the extent to which their dedicated funding corresponds to FTA's description. Most transit agencies with dedicated funding receive such funding from multiple sources and use it on both operations and capital expenses. 
Generally, dedicated funding is subject to the same oversight as other expenditures and is viewed by transit agencies as having a positive effect on their financial health, particularly with regard to long-range planning. However, dedicated funding has potential drawbacks: For example, it is vulnerable to economic cycles, and it limits the budgetary flexibility of state and local governments. Selecting a dedicated funding source for WMATA involves consideration of the funding source's year-to-year stability and its longer-run adequacy. For state and local governments, another consideration is the political feasibility of the tax or fee rate required to collect a specified amount of revenue from a particular funding source. Revenue sources that GAO analyzed--the sales tax, payroll or income tax, motor vehicle fuels tax, property tax, access fees, and vehicle registration fees--have different characteristics when assessed using these considerations. If governments increase their overall tax and fee revenues to provide additional funding for WMATA, there may be equity, efficiency, and administrative cost issues for their tax systems. To establish dedicated funding and conform to the requirements of the proposed federal legislation, WMATA's supporting jurisdictions would need to enact separate legislation to direct a specific revenue source to WMATA and to amend the WMATA Compact. As of April 2006, legislation to dedicate a portion of sales tax revenues to WMATA had been enacted in the District of Columbia, but neither Maryland nor Virginia had enacted comparable legislation. The only jurisdiction to introduce a bill to amend the Compact has been Maryland, and this legislation was later withdrawn. The District of Columbia and Virginia have not begun steps to amend the Compact. 
The federal government and the jurisdictions that support WMATA will need to resolve several issues should they choose to provide WMATA with dedicated funding, including (1) the proportion of the jurisdictions' payments to WMATA that come from dedicated funding and how to mitigate its risks; (2) whether dedicated funding will result in a net increase in payments to WMATA and how the size of each jurisdiction's payment will be determined; (3) whether dedicated funding should be used for operations, capital expenditures, or both; and (4) whether increased oversight of WMATA is needed to ensure dedicated funds are properly accounted for.
The Mifeprex NDA provided for the use of Mifeprex, in combination with another drug, for the medical termination of pregnancy. The treatment regimen described in the NDA involved taking Mifeprex orally, and then taking the drug misoprostol orally 2 days later unless termination of the pregnancy had already occurred. Patients return for a follow-up visit with their prescribing physician 2 weeks later to ensure that the termination of the pregnancy has been completed. The treatment regimen works by both interrupting the hormones that the body needs to maintain a pregnancy and inducing the uterine cramping necessary to cause a medical abortion. At the time that the drug sponsor submitted the Mifeprex NDA, in March 1996, mifepristone had already been approved in multiple countries. The drug was first approved for the medical termination of pregnancy in France and China in 1988. It was approved subsequently in the United Kingdom in 1991, in Sweden in 1992, and various other European countries throughout the 1990s. In general, the treatment regimens approved in these countries were similar to those studied in the Mifeprex NDA, though in some cases the specific drug used in combination with mifepristone was different. FDA reviews drug applications to determine whether they provide sufficient evidence to demonstrate that a drug is safe and effective for the proposed use, including whether the benefits of the drug outweigh its risks. FDA’s formal process for new drug approval begins after a drug sponsor submits an application, typically following a long period of research and development. During a preliminary review, FDA determines whether the application is sufficiently complete to be reviewed and if so, designates it for either standard or priority review, depending on the therapeutic potential of the drug. 
The agency then assigns a team of reviewers—including medical officers, chemists, statisticians, microbiologists, pharmacologists, and other experts—within the relevant FDA review division. This review team, which is usually led by a medical officer, conducts a comprehensive evaluation of the clinical and non-clinical information in the application, including the safety and efficacy data for the drug, the design and quality of the studies used to support the application, and the proposed labeling for the drug, and also reviews the results of inspections of the facilities where the drug is manufactured. The review team compiles the results of its analyses and recommends either an approval, approvable, or not approvable action. FDA managers, usually including the review team’s supervisor and senior management within the applicable review division, determine what action to take on an application, based on the recommendations of the review team. These managers examine the review team’s analysis and individually decide whether to concur with the recommendation. The final decision on the action the agency should take is usually, but not always, made by the director of the applicable review division. In some cases, actions must be reviewed and agreed to by the relevant FDA office. This review process may span several cycles. For those applications not approved during the first review cycle—both approvable and not approvable—the second FDA review cycle begins once the sponsor submits an amendment to the application providing responses to the deficiencies FDA identified in its previous review. These amendments often contain additional studies, analyses, data, or clarifying information to address FDA’s concerns. 
The responsible review team reviews the information provided by the sponsor, conducts any additional analyses that are required, reviews the results of any additional inspections that have been conducted, and again recommends either an approval, approvable, or not approvable action. As with the first review cycle, the process ends once FDA management reviews the recommendations of the review team and makes its decision on the action to take on the application. To address concerns FDA identifies regarding the safe use of a drug, the agency may condition approval by requiring that the sponsor agree to restrict the drug’s distribution. FDA has established restricted distribution programs for approved drugs primarily by requiring that a drug’s approval be under the restricted distribution provision of Subpart H regulations. According to the scope of the regulations, Subpart H applies to new drugs that “have been studied for their safety and effectiveness in treating serious or life-threatening illnesses and that provide meaningful therapeutic benefit to patients over existing treatments” for the condition. FDA may approve a drug under the restricted distribution provision of these regulations if it meets these criteria and the agency concludes that the drug is effective but can be safely used only if distribution or use is restricted. For example, FDA may require that distribution of a drug be limited to certain facilities or physicians with special training. As of February 2007, nine drugs—Actiq, Accutane, Lotronex, Mifeprex, Plenaxis, Revlimid, Thalomid, Tracleer, and Xyrem—had either an NDA or supplemental NDA approved under the restricted distribution provision of Subpart H. For each of the drugs, either during the application review process or based on postmarket data, FDA identified concerns about the safe use of the drug that led the agency to apply Subpart H. 
The drugs were approved to treat a range of conditions, such as breakthrough cancer pain, specific symptoms of narcolepsy, and severe acne. FDA has also required that drug sponsors agree to restrict the distribution of drugs without imposing Subpart H. Clozaril, Tikosyn, and Trovan are three examples of drugs that have restricted distribution programs that were imposed outside of Subpart H. (See app. I for a table describing drugs FDA has approved with restricted distribution programs and the conditions they are intended to treat). While Clozaril was first approved in 1989, FDA imposed distribution restrictions on both Tikosyn and Trovan after Subpart H regulations had been promulgated. A second approval provision of Subpart H provides FDA with flexibilities that allow the agency to accelerate the approval process for drugs that provide meaningful therapeutic benefits over alternatives for serious or life-threatening illnesses. Specifically, under the provision, FDA may approve a drug on the basis of clinical trials establishing that the drug has an effect on a surrogate endpoint—such as weight gain or reduced occurrence of infections in patients with HIV—that is reasonably likely to predict a clinical benefit or on the basis of an effect on a clinical endpoint other than survival or irreversible morbidity. This allows FDA to approve a drug before measures of effectiveness that would usually be required for approval are available. However, under this approval provision, drug sponsors are ordinarily required to conduct postmarket studies to confirm and further describe the drug’s clinical benefit. As of February 2007, FDA had used this provision to approve 52 drugs, most of which are intended to treat HIV/AIDS or various cancers. Because some risks may not become known until after a drug’s approval and use in a wider segment of the population, FDA has a range of postmarket oversight responsibilities once a drug is approved for marketing in the United States. 
FDA’s postmarket oversight responsibilities include assessing sponsors’ compliance with requirements for a given drug, such as postmarketing study commitments, adverse event reporting, and restricted distribution requirements. In addition, FDA monitors reported adverse events to assess the postmarket safety of approved drugs and may take action if it develops a concern about a drug’s safety. With regard to postmarketing study commitments, FDA oversees sponsors’ compliance with regulations that require sponsors of all approved drugs to report to FDA annually on their progress in meeting the commitments. FDA requires that sponsors report on the status of these studies in an annual report that also includes updates on the distribution of the drug, labeling changes, clinical literature published on the drug, and the drug’s marketing. FDA designates unfulfilled study commitments as submitted, pending, ongoing, delayed, released, or terminated. FDA also oversees sponsors’ compliance with regulations that require sponsors of all approved drugs to report periodically to FDA on safety information and specific types of adverse events that occur in association with an approved drug. Sponsors must provide in periodic reports (quarterly for the first 3 years after approval and annually thereafter) a narrative summary and analysis of adverse event information. For adverse events that are considered both serious and unexpected, sponsors are required to submit a report—known as a “Postmarketing 15-day Alert Report”—to FDA within 15 calendar days from the time the sponsor was informed of the event. To assess sponsors’ compliance with these adverse event reporting requirements, FDA reviews sponsors’ reports and conducts inspections of the sponsors’ reporting policies and procedures. For drugs approved under the restricted distribution provision of Subpart H, FDA oversees sponsors’ compliance with the restrictions placed on the drugs’ distribution or use. 
To assess compliance with restrictions, FDA reviews information such as summaries of sponsors’ distribution programs in annual reports and in some cases separate reports required by the agency to provide details and updates on distribution programs. In addition, FDA may conduct inspections of a sponsor’s corporate headquarters, manufacturing sites, or contractors, such as specialty distributors, to evaluate whether distribution policies and procedures comply with the approved restrictions for a given drug. If FDA identifies deficiencies during an inspection, it may issue a formal citation—known as a Form FDA 483. In addition, FDA may communicate less serious findings as written or oral “observations” or “recommendations.” To monitor postmarket safety of approved drugs, FDA reviews clinical literature, routinely evaluates the available data on reported adverse events, and conducts investigations of the nature and patterns of these events. FDA compiles data from sponsors’ reports on adverse events, along with data from voluntary reports submitted to the MedWatch program, in its Adverse Event Reporting System (AERS) database. FDA safety evaluators analyze data from AERS and in the clinical literature to detect signs of potential safety concerns. These evaluations may reveal the need for further studies of a drug or may result in FDA action to ensure the safety of the drug. If FDA identifies problems with a sponsor’s compliance with agency requirements or identifies postmarket safety concerns, the agency can take a range of actions to address the concern and communicate safety information to health care providers and the public. For example, FDA may revise the restrictions on a drug’s distribution, request changes to a drug’s labeling, issue patient advisories or public health alerts, or request that a sponsor issue letters to health care providers or pharmacists to alert them to safety concerns. 
FDA may also issue a regulatory letter citing violations of laws or regulations. Typically, FDA issues a Warning letter for violations that may lead FDA to pursue further enforcement action if not corrected or issues an untitled letter for violations that do not meet this threshold. FDA also has the authority to withdraw a drug’s marketing approval for safety-related and other reasons, although it rarely does so. Additionally, Subpart H regulations establish an expedited process for withdrawing a drug’s marketing approval, in certain circumstances. FDA approved Mifeprex after three review cycles. In its initial review, FDA concluded that reliance on historical controls in three key clinical trials was appropriate and consistent with FDA regulations and that the available data supported the safety and efficacy of the drug. In an approvable letter, FDA notified the sponsor that it needed to provide additional data and more detail on its proposal to restrict the drug’s distribution before an approval decision could be made. A second review cycle began when the sponsor submitted data responding to this letter. The agency issued a second approvable letter after finding that new data confirmed Mifeprex’s safety and efficacy but also that the sponsor needed to revise its distribution plan and address labeling and manufacturing deficiencies. FDA further concluded that the drug was a candidate for approval under Subpart H. In the final review cycle, FDA concluded that the sponsor’s revised distribution plan and other revisions were sufficient to address FDA’s comments. FDA also concluded that Mifeprex met the scope of Subpart H and that approval under the restricted distribution provision of Subpart H was necessary to ensure that only qualified physicians prescribed the drug. On September 28, 2000, FDA approved Mifeprex under the restricted distribution provision of Subpart H with several restrictions and two postmarketing study commitments. 
(See table 1 for a timeline of key events in the Mifeprex approval process.) FDA’s initial review began when the drug sponsor submitted the Mifeprex NDA in March 1996. After conducting a preliminary review of the NDA, FDA designated the application for priority review, establishing a goal that the agency would issue an action letter within 6 months. FDA’s rationale for the designation was that as the first drug that would be approved for its particular indication, Mifeprex was a therapeutic advance because women using the drug could potentially avoid the risks of surgery and anesthesia involved in a surgical termination of a pregnancy. FDA assigned a team of reviewers within the Division of Reproductive and Urologic Drug Products to review the evidence in the Mifeprex NDA. The key safety and efficacy data in the NDA consisted of three historically controlled clinical trials, two conducted in France and one conducted in the United States. These trials studied the Mifeprex treatment regimen—mifepristone in combination with misoprostol—in a total of more than 4,000 women. At the time the NDA was submitted, the French trials were complete and the U.S. trial was ongoing. As a result, during the first review cycle, the review team analyzed the complete safety and efficacy data from the French clinical trials, but only summary data on serious adverse events from the U.S. clinical trial. FDA reviewers also considered results from other trials conducted in Europe from 1983 through 1996 in which mifepristone was studied either alone or in combination with misoprostol or similar drugs. In addition, the review team considered safety information from extensive postmarketing experience in Europe, including a postmarket safety database containing information on women who had used mifepristone. Lastly, the review team considered the non-clinical data in the application, including data on the drug’s chemistry and manufacturing. 
In its review of the Mifeprex data, FDA reviewers determined that the reliance on historical controls in the key clinical trials was appropriate and consistent with FDA regulation. According to FDA, historical control designs can make it more difficult to evaluate which effects can be attributed to the drug being studied. However, FDA regulations list historical controls as an acceptable type of control when the natural history of the condition being treated is well-documented and when the effects of the drug are self-evident. In the case of the Mifeprex NDA, FDA determined that the historically controlled trials provided substantial evidence of safety and efficacy because the outcomes of women taking the Mifeprex regimen were compared with the well-documented data on the natural course of pregnancy, including rates of miscarriage, and the effect of the drug—termination of a pregnancy—was obvious. To assist the review team in its assessment of Mifeprex, FDA convened the Reproductive Health Drugs Advisory Committee in July 1996 and asked the members to examine the data and vote on their conclusions regarding the drug’s safety and efficacy. Six of the eight voting members voted, with two abstentions, that the available evidence demonstrated that the benefits of the regimen outweighed its risks for the proposed indication in the United States. However, the members agreed unanimously that FDA should provide the final safety and efficacy data from the U.S. clinical trial for their review. The advisory committee also discussed the basic elements of a voluntary restricted distribution system proposed by the drug’s sponsor, which would require that Mifeprex be distributed directly to physicians, that prescribing physicians meet certain training requirements, and that patients meet certain conditions before receiving the drug. 
The advisory committee voted unanimously that they agreed with the concept of restricting distribution of the drug but had reservations about how the proposed system would assure that physicians had adequate credentials. The members recommended that the sponsor conduct postmarket studies to address six unanswered questions about the treatment regimen and the distribution system. The members also provided extensive comments on the draft labeling proposed by the sponsor. The FDA review team concluded that the NDA was approvable, based on its assessment of the clinical and non-clinical data and the input from the advisory committee. The medical officer leading the review team concluded that the available clinical data indicated “that medical abortion can be safely delivered in a wide variety of United States settings.” The data from the French trials showed the treatment to be roughly 95 percent effective at terminating pregnancy through 49 days gestation. The data from the French clinical trials also showed that almost all patients experienced some side effects—such as uterine cramping and bleeding— most of which were expected based on the way the drug works. Though serious adverse events were considered rare, some women experienced bleeding that required medical intervention, and approximately 0.2 percent of patients required transfusion. The medical officer concluded that the preliminary U.S. data on adverse events did not appear to differ significantly from the French trials. In September 1996, FDA issued an approvable letter for the use of Mifeprex in combination with the drug misoprostol for the termination of intrauterine pregnancy up to 49 days gestation. In memos documenting concurrence with the review team, and in the approvable letter itself, FDA management outlined the clinical and non-clinical issues the sponsor needed to address prior to approval. First, the full data from the U.S. 
clinical trial were needed to establish safety and efficacy of the Mifeprex regimen in the U.S. health care setting. Second, FDA agreed with the sponsor’s proposal to limit the drug’s distribution, but the sponsor had not yet submitted sufficient detail on how it would be implemented to allow for the plan to be fully evaluated. Third, the drug labeling proposed by the sponsor needed to be revised to provide more information on the treatment and to address comments from the advisory committee. Fourth, the sponsor would need to commit to pursue the postmarket studies suggested by the advisory committee. Finally, the sponsor would need to address certain deficiencies in chemistry and manufacturing data identified in FDA’s review. FDA’s second review cycle for the Mifeprex NDA officially began once the sponsor had completed its responses to the first approvable letter. However, these responses were delayed because of difficulties the sponsor encountered in securing a manufacturer for the drug product. In the interim, the sponsor submitted a range of data to FDA, including the final safety and efficacy results from the U.S. clinical trial, updated safety data from other trials of mifepristone and international postmarketing experience with the drug, formal revisions of the product labeling, and outstanding chemistry and manufacturing data. In August 1999, the sponsor completed its responses to the approvable letter by submitting an overview of the key principles of the restricted distribution system as well as responses to the postmarketing study commitments. At the time of this submission, the sponsor was still working with its planned distributor on the details of the restricted distribution system. Based on the updated data, the review team recommended approval for the Mifeprex NDA once the sponsor had clarified the details of the drug’s distribution, revised the drug labeling, and addressed deficiencies in the chemistry and manufacturing data. 
The medical officer concluded that the final results from the U.S. clinical trial were acceptable and confirmed the results of the French trials showing that the regimen was safe and effective. The medical officer concluded that the comments from the July 1996 advisory committee meeting were fully considered and, to the extent possible, implemented. The medical officer also concluded that additional detail was needed to determine whether the sponsor’s proposed distribution plan was sufficient. The non-clinical reviews during this review cycle—which included inspections of manufacturing facilities—identified deficiencies in the drug’s chemistry data and manufacturing processes that needed to be addressed, as well as sections of the drug’s labeling that needed to be revised. In January 2000, the sponsor submitted a more detailed plan describing how the proposed distribution restrictions would be implemented. The plan had three key elements. First, the Mifeprex regimen would only be administered under the supervision of qualified physicians who had agreed to provide the treatment according to several guidelines. Specifically, prescribing physicians would be required to attest to being able to accurately assess the duration of a pregnancy, diagnose an ectopic pregnancy, and assure that patients have access to appropriate follow-up care if needed to manage complications. The physicians would also need to agree to fully explain the procedure to each patient and obtain her signed consent, record the unique product serial number for tracking purposes, and report any serious adverse event or ongoing pregnancy to the sponsor. Second, the drug would only be distributed directly to physicians after an authorized distributor had verified that the physician had registered with it and had a signed attestation on file. 
Third, patients would be required to meet certain conditions before receiving the drug, such as signing a patient agreement attesting to her understanding of the potential complications of the treatment. FDA management concluded that the proposed distribution plan did not provide for adequate training and certification of prescribing physicians and needed to be revised before the NDA could be approved. In February 2000, FDA issued a second approvable letter for Mifeprex, notifying the sponsor that it needed to revise its proposed distribution plan, address deficiencies in the drug’s chemistry data and manufacturing, and revise the drug’s labeling. The letter also stated that FDA had considered the application under the restricted distribution provision of Subpart H and that distribution restrictions would be necessary in order to assure the safe use of the drug. The approvable letter further reminded the sponsor of its commitment to pursue postmarketing study commitments to address questions that were raised at the time of the advisory committee meeting. In March 2000, the sponsor submitted its complete response to FDA’s February 2000 approvable letter. This submission included updated safety data from ongoing trials and international postmarket experience, international product labeling, and revisions to the distribution plan. The sponsor also provided additional data and revisions—including updated chemistry and manufacturing data, a revision to the distribution plan, and revised labeling—to address comments from FDA that arose during the review cycle. The agency’s review of these submissions included multiple meetings and teleconferences with the sponsor and input from a consultant who was a special government employee (SGE) and a member of the Reproductive Health Drugs Advisory Committee. 
During the final review cycle, FDA’s deliberations—which involved a wide range of agency staff and management, including at times the Commissioner—focused on four key issues: whether prescribing physicians should be required to participate in a formal training and certification program, whether to require that approval be under Subpart H, what conditions of use should be specified, and what postmarketing study commitments would be needed to assure the safe use of the drug. Physician Training: In its deliberations, FDA considered requiring that physicians participate in specific training and have their qualifications certified before being allowed to prescribe Mifeprex, as opposed to relying on the sponsor’s proposed system of self-attestation. However, FDA concluded that such a requirement was not necessary. FDA officials told us that the agency determined that its concern about ensuring that prescribers were adequately qualified could be addressed by requiring that the sponsor make educational materials and training programs readily available and requiring that prescribing physicians sign an agreement attesting to their qualifications. The SGE consultant agreed with this conclusion. FDA officials also told us that the agency wanted to minimize the burden that the restricted distribution program would place on providers and patients by requiring only what was necessary to address safety concerns. In July 2000, the sponsor submitted its revised distribution plan. This plan addressed FDA’s comments by providing increased emphasis in the product labeling on the educational materials and training programs available to physicians and the importance of participating in the training. The other key elements of the plan—including the specific qualifications that physicians were required to meet and agreements regarding discussing the treatment and adverse event reporting—were essentially unchanged from those the sponsor proposed in its January 2000 plan. 
Approval under Subpart H Regulations: FDA had maintained through the first two review cycles that distribution restrictions would be required for Mifeprex. However, minutes from meetings between FDA and the sponsor indicate that, during the final review cycle, the agency was still considering whether it was necessary to impose those restrictions under Subpart H. During the second review cycle, FDA had concluded that the restricted distribution provision could be applied to Mifeprex. FDA eventually concluded that it would be necessary to do so. In its documented rationale for this conclusion, FDA stated that the drug met the scope of the regulations because the termination of an unwanted pregnancy is a serious condition, and that the drug provided a meaningful therapeutic benefit over existing therapies by allowing patients to avoid the procedure required with surgical termination of pregnancy. FDA officials told us that the agency has broad discretion to determine which conditions or illnesses may be considered serious or life threatening, and that in the case of Mifeprex it considered the potential in any pregnancy for serious or life-threatening complications—such as hemorrhage—in its determination. Additionally, FDA concluded that Mifeprex could only be used safely if distribution was limited to physicians who could assess the duration of a pregnancy, diagnose an ectopic pregnancy, and provide patients with access to surgical intervention if necessary. Throughout the approval process, the sponsor was opposed to approval under Subpart H. Specifically, the sponsor argued that the drug did not fit within the scope of Subpart H because pregnancy itself is not a serious or life-threatening illness. The sponsor also argued that the intent of the restricted distribution provision was to allow for restricted distribution of highly toxic or risky drugs, and that Mifeprex did not fit this description. 
The sponsor also expressed concern that approving the drug under Subpart H could unfairly mark Mifeprex as risky and deter women from using the drug. Lastly, the sponsor held that imposing Subpart H was unnecessary because it had voluntarily committed to the distribution restrictions requested by FDA. However, in a September 2000 letter to FDA, the sponsor agreed to FDA’s requirement that approval be under Subpart H, while noting that it still believed that applying these regulations to Mifeprex was not appropriate. Conditions of Use: FDA reviewed data and held multiple meetings with the sponsor regarding the specific conditions of use that should be required for Mifeprex. For example, FDA deliberated about whether it was necessary to require that prescribing physicians possess the ability to perform follow-up surgical interventions in the event that it was necessary to manage complications. The sponsor maintained that such a requirement was inconsistent with the practice of medicine, because management of incomplete miscarriages was routinely handled by referring patients to outside providers with specialized surgical or emergency care training. On this issue, FDA concluded that access to follow-up care could be ensured by requiring adequate information in the labeling and requiring that physicians attest to having made arrangements for their patients to have access to any needed surgical or emergency care. The SGE consultant agreed with FDA’s conclusion. FDA disagreed with the sponsor on other suggested conditions of use. For example, the sponsor provided data to support allowing patients to self-administer the misoprostol dose at home, instead of requiring them to return to their prescribing physicians. FDA concluded that the available data did not support the safety of home use of misoprostol and that such use should not be included in the final product label. 
As a part of its deliberations about the conditions of use, FDA also concluded that approved labeling should include a medication guide to provide patients with information about the risks and benefits of the drug and the approved conditions of use and treatment regimen. Postmarketing Study Commitments: In both the September 1996 and February 2000 approvable letters, FDA had reminded the sponsor of its commitment to conduct a series of six postmarket studies to address comments raised in the 1996 advisory committee meeting. FDA reviewed data and met with the sponsor during the final stages of its review to revisit these commitments in light of experience gained with the treatment regimen since the advisory committee meeting, concerns about potential infringement on the privacy of patients, and the potential resources needed to fulfill all six commitments. FDA concluded that the originally proposed commitments could be sufficiently addressed in two redesigned studies. The first was a study comparing safety outcomes for patients receiving the treatment under the care of physicians with surgical intervention skills with outcomes for patients whose physicians refer them for surgical intervention when necessary. The second was a surveillance study to determine the outcomes of ongoing pregnancies that were not surgically terminated after a failure of the Mifeprex regimen, including the health of any children born. FDA also concluded that the outstanding questions could be incorporated into the two postmarket studies and an audit of signed patient agreement forms. Once the sponsor had addressed the issues that FDA raised during the third review cycle, both the review team responsible for the Mifeprex NDA and FDA management concluded that the drug should be approved. The medical officer concluded that the updated safety data did not reveal any new issues that would change the ratio of benefit-to-risk for the drug. 
The medical officer also reviewed revised product labeling related to the distribution of the drug. Based on these reviews, the medical officer recommended approval of the application. The non-clinical reviews during this review cycle included additional inspections of manufacturing facilities. After the sponsor had addressed several issues, including deficiencies identified in a second inspection of the drug manufacturing facilities, the non-clinical reviewers also recommended approval of the application. FDA management concurred with the recommendations of the review team that the Mifeprex NDA should be approved. On September 28, 2000, FDA approved Mifeprex under the restricted distribution provision of Subpart H. The sponsor began distribution of Mifeprex in November 2000. FDA approved the drug with the two postmarketing study commitments discussed above and with several key restrictions on distribution. First, prescribing physicians must sign a prescriber’s agreement attesting to possessing the training and skills needed to administer the treatment regimen, and also agreeing to provide patients with the approved medication guide. They must also attest that they will fully discuss the treatment with patients and report to the sponsor any serious adverse events or ongoing pregnancies that are not terminated after a failure of the Mifeprex regimen. Second, the drug must be distributed directly to prescribing physicians by an authorized distributor only after the distributor has verified that the physician has a signed agreement on file. Third, patients must sign a patient agreement attesting to having read, discussed, and understood the risks and potential complications of the treatment. For a more detailed list of the individual components of the restricted distribution program for Mifeprex, see appendix II. For a copy of the approved prescriber’s agreement, see appendix III. 
Although each drug had risks and benefits specific to its indication and target population, the approval process for Mifeprex was generally consistent with the approval processes for the other eight Subpart H restricted drugs. For some of the drugs, the safety issues that prompted FDA to apply Subpart H were similar, with the potential for causing birth defects, the potential for liver or other serious toxicities, and appropriate patient selection being the most common issues. However, there were also safe use concerns that were unique to particular drugs. For example, for Mifeprex, ensuring patient access to follow-up care was a key safety concern, while for Actiq a key concern was ensuring that children did not accidentally ingest the drug. Each of the drugs represented a potential advance in the treatment of its targeted condition, and in two cases—Mifeprex and Xyrem—the drug was the first approved to treat that condition. (See app. I for a table including each of the Subpart H restricted drugs and their approved indications.) One common element across the approval processes for the Subpart H restricted drugs was that for seven of the drugs, including Mifeprex, FDA needed to evaluate potential limitations in key clinical data supporting the NDA. Specifically, with the exception of Accutane and Lotronex, the drugs were approved on the basis of studies without concurrent controls or data that were limited by relatively small sample sizes or data collection issues. FDA approved the Mifeprex NDA on the basis of historically controlled clinical trials that studied the drug in several thousand patients. FDA concluded that the use of historical controls was not a limitation because the course of pregnancy was well-documented and the effect of the treatment was self-evident. 
Revlimid, Thalomid, Plenaxis, and Xyrem were also each approved on the basis of data that included at least one key clinical study that lacked a concurrent control. In contrast to the Mifeprex data, FDA concluded that the lack of concurrent controls in these studies was a weakness because data on the course of the disease in a comparable population was not available to be used as a reliable historical control. For example, Thalomid was approved on the basis of clinical trial data from the published literature as well as a series of retrospective case studies for several dozen patients. Additionally, five of the drugs—Actiq, Revlimid, Thalomid, Tracleer, and Xyrem—were approved on the basis of key clinical studies with relatively small sample sizes of several hundred patients or less. Finally, for Actiq, Plenaxis, Thalomid, and Xyrem, FDA identified data collection issues, such as incomplete documentation, in some of the key data sources. Another common element was that for six of the drugs, including Mifeprex, FDA issued at least one prior action letter before ultimately approving the drug for marketing. FDA issued one approvable letter before ultimately approving Thalomid and Tracleer. Both Mifeprex and Xyrem received two approvable letters. In some cases the types of issues FDA cited—such as insufficient safety or efficacy data, the need for additional information on the restricted distribution system, or chemistry and manufacturing issues—were similar. For all four of these drugs, the adequacy of proposed distribution restrictions was a significant issue. For Xyrem, FDA’s initial approvable action was also linked to the sufficiency of the data provided in the application. FDA issued not approvable letters for both Actiq and Plenaxis prior to their eventual approval. In the case of Actiq, FDA cited multiple deficiencies, such as reliance on a key clinical study with flaws and an inadequate plan for risk management. 
For Plenaxis, FDA initially concluded that the risks of the drug exceeded its benefits because of the potential for severe, systemic allergic reactions in patients. As a result of these complexities, the approval process for the Subpart H restricted drugs was typically longer than the process for other drugs. Across the seven drugs with NDAs approved under Subpart H, an average of almost 25 months elapsed from the time that the sponsor submitted its NDA to the time FDA approved the NDA. The length of time to approval ranged from almost 9 months for Revlimid to more than 54 months for Mifeprex. In comparison, in analyses conducted for our 2006 report on new drug development, we found that it took FDA on average almost 18 months to approve NDAs submitted from 1996 through 2002. We also found that the types of distribution restrictions FDA imposed on Mifeprex were similar to those imposed on the other Subpart H restricted drugs, though the specifics of the restrictions depended on FDA’s safe use concern for the drug. (See table 2.) For all of the drugs except Actiq, FDA required some form of program enrollment or registration process. For example, for Mifeprex and three other drugs, FDA required that patients sign written agreements and that physicians enroll in a prescribing program and attest to their qualifications. For five of the drugs, FDA required formal registries of all prescribing physicians and patients. Additionally, for seven of the drugs, FDA required that distribution be limited to authorized distributors or pharmacies. And for eight of the drugs, FDA required that the sponsor establish a process to ensure that dispensing or distribution of the drug was contingent on verification that physicians and others had enrolled or registered in the distribution program, or that patients had complied with certain safety measures. 
FDA also required that all of the sponsors implement some form of educational program for patients, prescribers, or pharmacists, though FDA did not require that prescribing physicians participate in formal training for any of the drugs. For six of the nine drugs, FDA required that the sponsor report periodically to the agency specifically on implementation of their restricted distribution programs. For seven of the drugs, FDA required that sponsors report to the agency on specific adverse events—such as fetal exposures or liver toxicity—more frequently than is required for other drugs. In the case of Mifeprex and Xyrem, at the time the drugs were approved, FDA did not require that the sponsors submit additional adverse event reports beyond those required for all approved drugs, but did require that physicians agree to report specific types of adverse events to the sponsor. Finally, eight of the nine Subpart H restricted drugs were approved with two or more postmarketing study commitments. Each of these had at least one commitment that involved developing a postmarket study to monitor adverse events or patient outcomes of interest for that drug. The number of study commitments FDA required ranged from 2 to 10, depending on the drug. Additionally, for most of the drugs, including Mifeprex, the study protocols for the various commitments had not been finalized at the time of approval. The actions FDA has taken to oversee Mifeprex have been consistent with the actions it has taken to oversee the other Subpart H restricted drugs. FDA has relied primarily on information submitted by the sponsors of all the Subpart H restricted drugs and inspections for three of the drugs to oversee compliance with restricted distribution requirements. FDA has also relied on updates submitted by these sponsors to oversee compliance with postmarketing study commitments and has found that most have unfulfilled commitments. 
To oversee compliance with adverse event reporting requirements, FDA has reviewed a variety of safety information including reports submitted by the sponsors of all nine of the drugs restricted under Subpart H and has conducted inspections to evaluate compliance with reporting of adverse events for eight of the drugs. As a result, for most of the drugs, FDA has identified deficiencies in compliance with adverse event reporting requirements. To oversee reported adverse events FDA has used similar methods—such as monitoring, investigating, and addressing safety concerns—for Mifeprex and the other eight Subpart H restricted drugs. As a result of its oversight of safety data, FDA has identified postmarket safety concerns for most of the drugs and has used a variety of methods to communicate safety information to health care providers and the public. (See table 3 for an overview of FDA’s postmarket oversight of these drugs.) For all nine of the drugs that have been approved under the restricted distribution provision of Subpart H, FDA has relied mainly on information submitted by sponsors in required reports to oversee the sponsors’ compliance with distribution restrictions. For six of the drugs—not including Mifeprex—FDA relied on reports specific to the drugs’ restricted distribution programs. The type of information provided by the sponsors in these documents included data on the operation of the restricted distribution program, such as requirements for distributors, pharmacies, prescribers, and patients participating in the program. In addition, to oversee compliance with the restricted distribution programs for most of the drugs—including Mifeprex—FDA has relied on annual reports, supplemental applications, or periodic reports for required updates on the postmarket use of the drugs, including summaries of updates to the restricted distribution program. 
Through the end of 2007, FDA had conducted inspections specifically to oversee sponsors’ compliance with distribution restrictions for three of the drugs—Mifeprex, Tracleer, and Xyrem. In the case of Mifeprex, in 2002 FDA conducted routine inspections of two of the drug’s distributors to oversee their compliance with distribution restrictions. FDA inspectors reviewed standard operating procedures and other information in order to oversee adherence to the requirements of the restricted distribution program, such as procedures for maintaining signed provider agreements, distributing medication guides with shipments of the drug, and maintaining the physical security of the drug. For one of the inspections of Mifeprex distributors, FDA did not issue a citation. For the other inspection, FDA issued a citation identifying four inconsistencies between the approved distribution plan and the distributor’s standard operating procedures. For example, FDA cited the distributor for the absence of certain written procedures pertaining to the distribution of the drug. The sponsor responded to this citation, noting that at the time of approval the distribution plan did not require that distributors prepare such written procedures. Other inconsistencies FDA noted included serial numbers that had not been properly recorded on a shipping label as required for tracking purposes, and written order-processing procedures that did not reflect the requirement that a medication guide be provided with each dose of the drug. As a result of its 2006 inspection of the Tracleer restricted distribution program, FDA did not issue a formal citation, but provided recommendations to the sponsor. In its 2007 inspection of the Xyrem restricted distribution program, FDA did not identify any specific deficiencies. However, many of the responsibilities for the program are contracted out to a pharmacy, which was not inspected. 
The inspection report notes that, for that reason, FDA could not verify whether the sponsor had fulfilled the requirements for the drug’s restricted distribution program. For the eight Subpart H restricted drugs approved with postmarketing study commitments, FDA has relied on sponsors’ annual reports for updates on the status of each commitment. FDA’s reviews of these reports are the basis for its determination of the status of each commitment as fulfilled, submitted, pending, ongoing, delayed, released, or terminated. FDA officials told us that the status of postmarketing study commitments for Subpart H drugs is monitored the same way as those commitments for other drugs. In 2008, FDA conducted initial inspections specific to the restricted distribution programs for Accutane, Actiq, and Revlimid. In addition, FDA conducted a second such inspection for the Tracleer program. As of May 13, 2008, the results from these inspections were not available. In February 2007, agency officials told us that they were working to establish a process to conduct regular inspections to oversee sponsors’ compliance with distribution restrictions for Subpart H restricted drugs. Since that time, agency officials told us that FDA had decided to combine the inspection of restricted distribution programs with inspections examining compliance with adverse event reporting requirements. However, agency officials noted in May 2008 that FDA is reevaluating its process for conducting inspections in light of recent legislative changes. Under FDAAA, FDA is required to evaluate, at least annually, for one or more drugs that have elements to assure safe use as part of their REMS, whether those elements assure the safe use of the drug, are not unduly burdensome on patient access, and to the extent practicable minimize the burden on the health care delivery system. 21 U.S.C. § 355-1(f)(5)(B). 
In the case of the first study commitment for Mifeprex—a comparison of outcomes for patients whose health care providers perform a surgical abortion with outcomes for patients who are referred to another facility for follow-up care in the event of treatment failure—the sponsor has reported difficulty in enrolling participants into the study. FDA told us that according to the sponsor, the “vast majority of prescribers” can provide surgical abortion services on site. FDA has opted not to terminate the study, and has categorized it as ongoing. FDA officials told us that this gives the agency additional flexibility in the event that provider or practice patterns change over time, making enrollment of study participants more feasible. The sponsor also has reported enrollment challenges in the case of the second study commitment for Mifeprex—to conduct surveillance of ongoing pregnancies following failure of treatment. FDA officials told us that postmarket experience with the drug has shown that most patients opt to have a surgical abortion in the event that the Mifeprex regimen is not successful in terminating the pregnancy. In December 2007, FDA released the sponsor from this commitment because it determined that the study would no longer provide helpful information because of low enrollment. FDA has worked with some of the sponsors of the Subpart H restricted drugs to make adjustments to agreed-upon commitments that have not been completed. FDA officials told us that the agency has in some cases made changes to a sponsor’s postmarketing study commitments or requested new commitments in addition to those specified at approval. For example, FDA recommended several additional postmarketing study commitments for Thalomid following the agency’s approval of an expanded indication for the drug. In the case of Tracleer, FDA recommended changes to some of the drug’s study commitments. 
FDA had not requested additions or changes to the postmarketing study commitments for Mifeprex until the agency released the sponsor from its commitment to conduct surveillance of ongoing pregnancies following failure of treatment. To oversee compliance with adverse event reporting requirements, FDA has both reviewed data submitted by sponsors in required reports and conducted inspections. Sponsor reporting for the drugs has included annual reports in which the sponsor provided a summary of the adverse events reported in the previous year; periodic update reports which inform FDA of adverse events monthly, quarterly, or at some other interval established by FDA; and 15-day alert reports for events that are both serious and unexpected. In addition, in some cases sponsors have agreed or FDA has required them to provide 15-day alert reports for other types of serious adverse events. For example, the sponsor of Mifeprex agreed to provide 15-day alert reports for cases of serious infection and ruptured ectopic pregnancy in women who used the drug, and FDA required the sponsor of Thalomid to report suspected or confirmed pregnancy in women taking that drug. In some cases, including for Mifeprex, FDA specifically documented its assessments of adverse event reporting contained in annual, periodic update, or 15-day alert reports or reports submitted to the AERS database. FDA officials told us that staff review all submitted reports, but do not always document their reviews. In addition to relying on reports submitted by the sponsors, FDA has conducted inspections specifically to oversee the sponsors’ compliance with adverse event reporting requirements for eight of the nine drugs, including Mifeprex. Between 2001 and May 2008, FDA had conducted 19 such inspections with a range of none to four inspections conducted for each drug. In the case of Mifeprex, FDA has conducted three inspections—in 2002, 2004, and 2006—related to adverse event reporting. 
In these inspections, FDA reviewed a variety of documents pertaining to adverse event reporting for Mifeprex, including standard operating procedures, product labeling, MedWatch reporting forms, 15-day alert reports, complaint files, periodic update reports on adverse events, and annual NDA reports. In addition, FDA documented reviews of samples of the sponsor’s adverse event reports for completeness, accuracy, and timeliness. As a result of the Mifeprex inspections, FDA issued citations for deficiencies related to the accuracy, completeness, or timeliness of some reports as well as for the sponsor’s failure to follow certain procedures for handling some adverse event follow-up activities. In each of the Mifeprex inspections, FDA identified some examples of misclassified reports—events that FDA said should have been submitted as 15-day alert reports rather than in periodic reports. For example, FDA cited the sponsor for not classifying some events resulting in hospitalization as serious events and thus not reporting those events as 15-day alert reports. In another inspection, FDA found that some of the sponsor’s procedures for reporting and following up on adverse events were inadequate or had not been developed. These deficiencies were similar to those FDA found for other drugs, and FDA identified fewer problematic reports for Mifeprex than for some of the other Subpart H restricted drugs. Following each of the inspections for Mifeprex, the sponsor provided a written response to FDA in which it either agreed to address FDA’s findings or noted its disagreement with the deficiencies FDA cited. For example, following the first inspection, the sponsor agreed to address the examples of misclassified or incomplete reporting FDA cited and to reinforce procedures for handling adverse event-related correspondence with its staff. 
In some cases the sponsor disagreed with FDA’s characterization of a deficiency or presented evidence to refute a claim that it had not complied with a reporting requirement or procedure. As a result of FDA’s inspections for the other seven drugs, the agency issued written citations to six of the sponsors for deficiencies. In addition, FDA noted only “oral observations” for the other sponsor. Similar to the Mifeprex inspections, FDA staff reviewed information such as sponsor documentation and standard operating procedures related to adverse event reporting for the other seven drugs for which it conducted inspections. For most of the other drugs, FDA also reviewed samples of adverse event reports for completeness, accuracy, or timeliness. FDA likewise cited some sponsors for deficiencies such as incomplete or late reporting of adverse events or failure to adhere to certain procedures for reporting. For example, FDA cited the sponsor of Thalomid for failure to submit several reports of serious and unexpected adverse events as 15-day alert reports and for late reporting of some other adverse events that included deaths and hospitalizations. In addition, FDA issued an untitled letter to the sponsor citing its failure to review and submit 82 reports of serious and unexpected adverse events within the required time frame. FDA was not always consistent in how it documented deficiencies in adverse event reporting. In some of its inspections FDA documented the same type of deficiency as a citation while in others it noted them as oral observations or discussion points. For example, FDA did not issue a citation for the sponsor of Tracleer after inspectors noted 52 late 15-day reports—instead discussing the late reports with the sponsor at the close of the inspection. However, in its first inspection of the sponsor for Mifeprex, FDA issued a citation for failure to file a single 15-day report within the required 15 days. 
FDA also cited the sponsor for 6 late 15-day reports in each of its two subsequent inspections, although the sponsor refuted this finding in written responses following each inspection. As in the case of Mifeprex, sponsors responded to FDA in writing to describe actions they had taken to address deficiencies or to disagree with FDA’s conclusions following an inspection. FDA has used similar methods to oversee postmarket safety—monitoring, investigating, and taking action on emerging safety concerns—for Mifeprex and the other eight Subpart H restricted drugs. For Mifeprex, FDA has routinely reviewed the available information on reported adverse events from sources such as annual reports, periodic update reports, 15-day alerts, and data from its AERS database. Since the time Mifeprex was approved, FDA has documented regular reviews and summarized the available data on adverse event reports to monitor the drug’s safety. FDA believes that, because the distribution system for Mifeprex requires that prescribing physicians agree to report hospitalizations and other serious adverse events, it is unlikely there are significant numbers of these events that are not reported to FDA. However, FDA acknowledges that because the reporting system is voluntary, the agency cannot be certain that it has reports of all serious adverse events. FDA officials have concluded that, with the exception of the cases of fatal infection, the reported serious adverse events associated with Mifeprex have been within or below the ranges expected based upon the medical literature on adverse events following medical abortion. In its May 2006 response to congressional inquiries regarding Mifeprex, FDA stated that the most commonly reported serious adverse events had been blood loss requiring a transfusion, infection, and ectopic pregnancy. FDA estimated that 0.023 percent of U.S. 
women who had taken Mifeprex have required transfusion, compared to a transfusion rate of 0.15 percent observed in international studies of the drug. FDA also noted that the rate of ectopic pregnancy among U.S. women who had used Mifeprex was 0.005 percent, compared to the overall rate of 1.3 to 2 percent in all U.S. pregnancies. Based on the medical literature, FDA estimated that fewer than 1 percent of patients will develop an infection of any kind following medical abortion with Mifeprex. According to FDA, as of May 2008, among the estimated 915,000 U.S. women who had taken Mifeprex for termination of pregnancy since its approval, the agency was aware of seven deaths that may be related to the use of the drug. Six of the deaths were due to severe infection, and one death involved an undiagnosed ectopic pregnancy. Of the cases involving infection, five of the women were infected with a rare bacterium, Clostridium sordellii, while one woman was infected with the bacterium Clostridium perfringens. With assistance from the Centers for Disease Control and Prevention (CDC) and other outside experts, FDA has investigated all reported infection-related deaths in U.S. women who have taken the Mifeprex regimen for termination of pregnancy. These investigations included requesting the medical records and autopsy reports for each case; evaluating available adverse event data from the United States, the United Kingdom, and the World Health Organization; consulting with scientific experts and health care providers from inside and outside FDA; and microbiological testing to identify the bacterium involved. In addition, FDA evaluated samples from the drug lots of Mifeprex and misoprostol associated with some of the deaths to test for contamination with the bacteria. FDA found that in the six cases of death due to infection, the women used a regimen of Mifeprex and misoprostol that has not been approved by FDA. 
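The reported rates can be translated into approximate case counts with simple arithmetic. A minimal sketch, with the caveat that the 915,000-user figure is FDA's 2008 estimate while the percentage rates come from the agency's May 2006 response, so the resulting counts are rough, order-of-magnitude illustrations rather than FDA's own figures:

```python
# Illustrative only: the user count is FDA's 2008 estimate, while the
# percentage rates come from FDA's May 2006 response, so these counts
# are rough approximations rather than figures FDA itself reported.
users = 915_000                     # estimated U.S. Mifeprex users (2008)
transfusion_rate = 0.023 / 100      # 0.023 percent required transfusion
ectopic_rate = 0.005 / 100          # 0.005 percent ectopic pregnancy

approx_transfusions = users * transfusion_rate
approx_ectopic = users * ectopic_rate

print(round(approx_transfusions))  # about 210 cases
print(round(approx_ectopic))       # about 46 cases
```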
FDA has stated that it is aware that many health care providers use modified regimens, and while some of the regimens have been described in the medical literature, FDA has not evaluated the safety and effectiveness of any regimen other than the one described in the drug's approved labeling. To further explore the nature of the infections, FDA initiated an interagency scientific workshop, "Emerging Clostridial Disease," with CDC and the National Institutes of Health in May 2006. These agencies had observed a general increase in U.S. reports of serious clostridial infections, including infections in women who had used Mifeprex, which raised questions about Clostridium's relationship to fatal illness and pregnancy. According to the meeting minutes, participants discussed recent cases of clostridial infection—including those occurring among women who had taken Mifeprex and misoprostol for termination of pregnancy and those who had not—reviewed what was currently known about these infections, and discussed how to conduct surveillance to ensure that cases and trends of clostridial infections are monitored. At the workshop, a CDC official reported on the history of clostridial infections, including a cluster of ten fatal cases reported in the literature between 1977 and 2001 among previously healthy women. Of the ten cases, eight of the women became infected following childbirth, one became infected following a medical abortion, and one case was unrelated to pregnancy. As a result of its investigative efforts, FDA has concluded that the evidence does not indicate that Mifeprex caused the fatal infections. 
In response to a congressional inquiry, FDA stated that "the nature of the relationship between taking a single dose of the drug and the reported cases of serious infection with a rare bacterium is highly uncertain." Laboratory testing of samples from the drug lots of Mifeprex and misoprostol associated with some of the deaths due to infection has shown no evidence of contamination with the bacteria. FDA officials have said that the relationship between the infections and the use of unapproved regimens of Mifeprex and misoprostol remains unknown. Some research has suggested that the use of Mifeprex may suppress the immune system, which could lead to infection. However, FDA has noted that if this were the case, the agency would expect to see a higher rate of other types of serious infections in patients who had used the drug, which has not been the case. FDA has noted that findings by the CDC and in the medical literature suggest that pregnancy itself—rather than the medication—may be the critical risk factor for women who have become infected with Clostridium sordellii. FDA, working with the drug's sponsor, has taken a variety of steps—such as issuing warnings and making changes to the product labeling—to address safety concerns for Mifeprex that were identified through postmarket monitoring and investigation. For example, in response to reports of ruptured ectopic pregnancy, FDA developed a questions and answers document about the condition and worked with the drug's sponsor to alert health care providers and to highlight the importance of careful screening for the condition. In addition, FDA approved a labeling change to provide information about the importance of evaluating patients for ectopic pregnancy. 
In response to concerns about serious infections and associated deaths—all of which involved an off-label use of the drug—FDA issued Public Health Advisories to notify health care providers about patient deaths and the treatment regimens used in those cases, to remind them of the regimen FDA has approved, and to emphasize that FDA has not established the safety of alternative regimens. In addition, FDA issued a news release, reviewed letters from the sponsor to health care providers and emergency room directors alerting them to the safety concerns regarding serious infection, and approved changes to product labeling, including revisions to the warning to include information about the deaths due to serious infection. FDA also has established a Web site with information about Mifeprex, questions and answers about the drug, and links to other safety-related information. FDA used labeling changes—including updating the medication guide that prescribers agree to discuss with their patients—and information posted on its Web site to remind consumers and health care providers that FDA has not assessed the safety and efficacy of any regimen other than the one approved for the drug and indicated in its labeling. FDA has similarly monitored adverse events for the other Subpart H restricted drugs. As it has done with Mifeprex, the agency has documented periodic safety reviews of the available information it had on reported adverse events for all of the other drugs. FDA's reviews analyzed data on reported adverse events from sources such as annual NDA reporting, periodic update reports, 15-day alerts, and the AERS database. Some FDA reviews summarized the available data on a specific type of adverse event—like liver toxicity or severe bleeding—or adverse events in general, in order to determine whether the data suggested an emerging safety concern for the drug. 
In addition, in some cases, as it did with Mifeprex, FDA has sought the advice and assistance of other federal agencies and outside experts to investigate serious adverse events. As a result of its monitoring activities, FDA has identified postmarket safety concerns for most of the Subpart H restricted drugs and has taken similar actions to address them. When FDA has found safety concerns related to a Subpart H restricted drug, it has worked with the drug's sponsor to employ a variety of measures to ensure the drug's safe use. These have included adding or strengthening a warning on the label, issuing a Public Health Advisory, and sending letters to health care providers to alert them to a safety risk. FDA has approved safety-related labeling changes, such as boxed warnings, for eight of the nine drugs. In the case of four of the drugs, including Mifeprex, the agency issued a Public Health Advisory or Safety Alert. The sponsors of five of the drugs, including Mifeprex, sent letters to health care providers who prescribe (or may prescribe) the drug to alert them to safety concerns or to communicate new information regarding the drug. For example, in the case of Tracleer, adverse event reports revealed an increased risk of liver damage in patients who were treated with the drug. As a result, FDA and the sponsor notified health care providers of the risk by issuing a Safety Alert, highlighting the need for continued monitoring of liver function in patients using the drug. The sponsor added a boxed warning about potential liver injury to the labeling and issued a letter to health care providers to alert them to the potential risk. In general, the actions FDA took in response to safety concerns were similar across all of the drugs. We provided HHS with a draft of this report for review. HHS informed us that it did not have general comments on the draft report. In addition, HHS provided technical comments, which we incorporated as appropriate. 
As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Indications and application types (year first approved under Subpart H):
- Severe recalcitrant nodular acne. Supplemental NDA (2005)
- Management of breakthrough cancer pain in patients with malignancies who are already receiving and who are tolerant to opioid therapy. NDA (1998)
- Severe diarrhea predominant irritable bowel syndrome (IBS) in women who have: chronic IBS symptoms (generally lasting 6 months or longer), had anatomic or biochemical abnormalities of the gastrointestinal tract excluded, and failed to respond to conventional therapy. Supplemental NDA (2002)
- Medical termination of intrauterine pregnancy through 49 days' pregnancy. NDA (2000)
- Palliative treatment of men with advanced symptomatic prostate cancer, with specified risks or symptoms. NDA (2003)
- Treatment of a limited subset of patients with transfusion dependent anemia. NDA (2005)
- Treatment of multiple myeloma patients who have received at least one prior therapy.
- Acute treatment of cutaneous manifestations of moderate to severe erythema nodosum leprosum (ENL) and as maintenance therapy for prevention and suppression of the cutaneous manifestations of ENL recurrences. NDA (1998)
- Newly diagnosed multiple myeloma.
- Pulmonary arterial hypertension. NDA (2001)
- Cataplexy associated with narcolepsy. NDA (2002)

Indications and application types (year first approved):
- Management of severely ill schizophrenic patients who fail to respond adequately to standard drug treatment for schizophrenia. NDA (1989)
- Irregular heartbeats (atrial fibrillation and atrial flutter). NDA (1999)
- Serious, life- or limb-threatening infections in an inpatient healthcare setting. n/a (1997)

FDA approved Mifeprex with the following specific restrictions on distribution. Mifeprex must be provided by or under the supervision of a physician who possesses adequate qualifications and agrees to provide the treatment according to several guidelines. To accomplish this, the system required that prescribing physicians register with an authorized distributor by providing a signed Prescriber's Agreement attesting that the physician:
- Possesses the ability to assess the duration of pregnancy accurately.
- Possesses the ability to diagnose ectopic pregnancies.
- Possesses the ability to provide surgical intervention in cases of incomplete abortion or severe bleeding, or has made plans to provide such care through other qualified physicians, and is able to assure patient access to medical facilities equipped to provide blood transfusions and resuscitation, if necessary.
- Has read and understood the prescribing information about Mifeprex.
- Will provide each patient with a medication guide and fully explain the procedure to each patient, provide her with a copy of the medication guide and Patient Agreement, give her an opportunity to read and discuss both the medication guide and the Patient Agreement, and obtain her signature on the Patient Agreement and sign it as well.
- Will notify the sponsor or its designate in writing, as discussed in the Package Insert under the heading DOSAGE AND ADMINISTRATION, in the event of an ongoing pregnancy that is not terminated subsequent to the conclusion of the treatment procedure.
- Will report any hospitalization, transfusion, or other serious events to the sponsor or its designate.
- Will record the Mifeprex package serial number in each patient's record.

The system also included provisions for the physical security of the drug during distribution, such as:
- Direct distribution of the drug through select authorized distributors to physicians who have signed the Prescriber's Agreement, which includes providing their medical license number. Distributors are required to ensure that the physician is registered before distributing the drug.
- Secure manufacturing, receiving, distribution, shipping, and return procedures, including unique serial numbers on packaging and tamper-proof seals.

In addition to the contact named above, Martin T. Gahart, Assistant Director; Jill Center; Chad Davenport; and Cathy Hamann made key contributions to this report. Julian Klazkin also contributed.
In September 2000, the Food and Drug Administration (FDA), part of the Department of Health and Human Services (HHS), approved the drug Mifeprex for use in terminating early term pregnancy. FDA approved the drug under a provision of its Subpart H regulations, allowing it to restrict the drug's distribution to assure its safe use. Critics have questioned aspects of the Mifeprex approval process, including the reliance on historically controlled clinical trials, which compare a drug's effects on a condition to the known course of the condition rather than to another drug or placebo. Critics argued that Mifeprex does not fit within the scope of Subpart H, which applies to drugs that treat serious or life-threatening illnesses. Concerns have also been raised about FDA's oversight of the drug since approval, including the agency's response to deaths in U.S. women who had taken the drug. In this report GAO (1) describes FDA's approval of Mifeprex, including the evidence considered and the restrictions placed on its distribution; (2) compares the Mifeprex approval process to the approval processes for other Subpart H restricted drugs; and (3) compares FDA's postmarket oversight of Mifeprex to its oversight of other Subpart H restricted drugs. GAO reviewed FDA regulations, policies, and records pertaining to its approval and oversight of Mifeprex and the eight other Subpart H restricted drugs. In addition, GAO interviewed FDA officials and external stakeholders. FDA approved Mifeprex after evaluating the sponsor's initial and revised new drug application through three review cycles. In the first cycle, FDA concluded that the available data supported the safety and efficacy of Mifeprex and that, because the course of pregnancy was well-documented and the effects of the drug were self-evident, the use of historical controls was consistent with FDA regulations. 
FDA also concluded that before the drug could be approved, the sponsor needed to provide final data from an ongoing U.S. trial, and more detail on restricting the drug's distribution. In the second cycle, FDA concluded that while the U.S. trial data confirmed the drug's safety and efficacy, the sponsor needed to revise its distribution plan and address labeling and manufacturing deficiencies. In the final review, FDA concluded that termination of unwanted pregnancy is a serious condition and imposing restrictions under Subpart H was necessary. FDA approved Mifeprex, but required that the sponsor commit to conduct two postmarketing studies, imposed several distribution restrictions intended to ensure that only qualified physicians prescribe the drug, and required that patients attest to understanding the treatment's potential complications. The approval process for Mifeprex was consistent with the processes for the other Subpart H restricted drugs, although the details of FDA's approval depended on the unique risks and benefits of each drug. Common elements of the approval processes included that FDA needed to evaluate potential limitations in key clinical data (Mifeprex and six of the other drugs), did not approve the drugs in the first review cycle (Mifeprex and five others), and imposed similar types of distribution restrictions on Mifeprex and the other drugs, though the specific details of the restrictions varied across the drugs. FDA's postmarket oversight of Mifeprex has been consistent with its oversight of other Subpart H restricted drugs. To oversee compliance with distribution restrictions, FDA has reviewed data from all sponsors and conducted inspections for Mifeprex and two other drugs. To oversee compliance with postmarketing study commitments, FDA has relied on required updates from sponsors and found unfulfilled commitments for most drugs, including Mifeprex. 
To oversee compliance with adverse event reporting requirements, FDA has evaluated data in sponsors' reports and, for Mifeprex and seven other drugs, has conducted inspections that revealed deficiencies for most of these drugs, including Mifeprex. Lastly, FDA has taken similar steps to oversee postmarket safety across the drugs, such as analyzing adverse events. For Mifeprex, FDA investigated the deaths of six U.S. women who developed a severe infection after taking the drug and concluded that the evidence did not establish a causal relationship between Mifeprex and the infections. Finally, FDA has taken similar actions to address emerging safety concerns across the drugs, such as changing labeling. HHS reviewed a draft of this report and informed GAO that it did not have comments.
In 1988, the City of Denver agreed with Adams County to acquire a 53-square-mile site for a new airport, to be built to replace Stapleton International Airport. At that time, a conceptual estimate put the cost of the airport at $1.34 billion. In May 1989, voters in Denver approved a plan to build Denver International Airport. Site preparation and construction began in September 1989. The first formal construction budget, set at $2.08 billion, was produced in May 1990. Financing for DIA has included about $508 million from the Federal Aviation Administration (FAA) in grants and facilities and equipment funds, and about $3.8 billion in bonds sold to the public. Since May 1990, 12 airport revenue bond sales have been completed, with the most recent sale of $329.3 million of bonds in June 1995. Funds from the June 1995 sale are primarily designated for refinancing bonds sold in 1984 and 1985. Following the June 1995 bond sale, the City of Denver reported senior bonds payable totaling $3.481 billion plus subordinate bonds payable totaling $300 million. Each bond sale for DIA has been promoted by an Official Statement issued by the City of Denver containing details on the terms and conditions of the bond sale, a description of the airport project, financial and operational statistics and projections, contractual agreements with airlines, and information on risks and litigation. Appended to each official statement are (1) a report of the airport consultant, presently Leigh Fisher Associates (formerly the airport consulting practice of KPMG Peat Marwick) and (2) audited financial statements for the Denver Airport System, presently audited by Deloitte & Touche LLP. The information in these official statements is presently the subject of an SEC investigation and several lawsuits. The Denver office of the SEC is conducting an investigation to assess whether Denver made adequate disclosures of the problems with the airport baggage system. 
In addition, five lawsuits have been filed on behalf of investors in Denver Airport Bonds, alleging that they were not properly informed of the risks associated with their investments. DIA has attracted enormous local and national media attention, much of it focused on the various investigations that have been conducted on the airport. In addition to the work being done by the SEC, several other reviews and investigations have been undertaken, including a Federal Bureau of Investigation inquiry into contracting practices, the Department of Transportation Inspector General’s review of the possible misapplication of airport revenues, and the Denver District Attorney’s investigations of contracting and construction practices. To determine amounts and causes of cost growth in the DIA project, we reviewed construction budgets and cost reports; interviewed officials in DIA’s construction division to obtain explanations of reasons for certain scope changes in the project; examined change orders to construction contracts; and reviewed official statements issued by the City of Denver on the DIA project to identify disclosures made by Denver on construction cost increases. To reconcile annual debt service liabilities and total bonds payable from audited financial statements as of December 31, 1993, to the Leigh Fisher Associates report issued by Denver for the September 1994 bond sale, we reviewed these two reports in detail; reviewed audit workpapers prepared by Deloitte & Touche to document the methods they used to compute annual debt service and bonds payable; interviewed officials at DIA’s finance office and obtained explanations of methods used in computing debt service amounts in the Leigh Fisher Associates report; held discussions with DIA’s financial consultant, Leigh Fisher Associates, and obtained and reviewed detailed supporting schedules prepared by them; and reviewed DIA’s Plan of Finance prepared by First Albany Corporation, DIA’s bond financing consultant. 
Reconciliations of differences between the reports were prepared for us by DIA finance officials, and we traced the details of these reconciliations to financial records at DIA’s finance office. To address the issue of SEC jurisdiction over municipal bonds and the status and scope of the SEC investigation at DIA, we met with SEC officials at SEC Headquarters in Washington, D.C., and held discussions with SEC investigators at their Denver office. We reviewed testimony given by SEC’s Chairman before Senate and House Committees in January 1995 to obtain SEC’s formal position relative to its jurisdiction over the municipal bond markets. We also reviewed SEC’s legal foundation for jurisdiction over municipal financing and compared federal securities laws to Colorado securities laws. Our reviews of documentation noted above and our discussions with officials cited are the basis for the statements made in this report. We did not complete an investigation or a comprehensive audit of the information we are reporting. Readers of this report should be aware that investigations now under way by the SEC and others could conceivably disclose additional details that could conflict with information presented in this report. We requested comments on a draft of this report from the Director of Aviation, Denver International Airport, of the City of Denver, who provided us with written comments. In his comments, reprinted in appendix I, the Director did not disagree with the facts in this report but provided additional rationale for why the cost of completing the airport increased. The total cost of DIA is about $4.8 billion, about $3 billion of which are construction costs incurred by the City of Denver. Other major cost categories are $915 million in capitalized interest; $599 million in costs of facilities paid for by airlines, FAA, and rental car companies; and $261 million for land acquisition and project planning. 
Construction costs grew from a May 1990 budget of $2.08 billion to a total at airport opening of $3.004 billion, resulting from several substantial scope changes in the project. One major scope change was the decision in 1991 to build an automated baggage system costing about $290 million in direct construction costs, but which ultimately delayed the opening of DIA by about 16 months. This 16-month delay increased capitalized interest for the project by about $300 million. The earliest firm cost estimate for constructing DIA, excluding land acquisition and project planning, was $2.08 billion, and was contained in the City’s Official Statement for the May 1990 bond issue. In June 1991, the City entered into an agreement with United Airlines which included, among other things, the City agreeing to design and construct Concourse B in accordance with United’s facilities requirements. By February 1992, the construction estimate was up to $2.7 billion, driven up largely by the agreement with United Airlines. This $620 million construction cost increase resulted from widening and lengthening concourses ($250 million); the initial costs for the automated baggage system ($200 million); and other changes including completion of the terminal, electronic upgrades, apron improvements, and partial grading of a sixth runway ($170 million). By February 1994, DIA construction cost estimates had risen another $220 million, raising the total to $2.92 billion. The largest single factor in this round of cost increases was a decision to move the cargo area from the north side of DIA to the south side, primarily to satisfy the demand by cargo carriers for better access to Interstate 70. This cargo area move cost about $59 million. 
The balance was primarily for numerous airport improvements made under agreements with United and Continental Airlines, additional airport fire and maintenance equipment, a commuter airline fueling facility, and upgraded lighting to conform to new FAA regulations. At the date of DIA’s opening, February 28, 1995, construction costs totaled $3.004 billion, about $80 million over the February 1994 amount. This $80 million was principally for modifications to the automated baggage system and for a back-up baggage system. In addition to growth in DIA construction costs, delays caused by problems with the automated baggage system cost an additional $300 million in capitalized interest. Capitalized interest is similar to construction interest on a home building project. Before ground is broken, the borrower signs for a construction loan. As months pass during construction of the home, interest is charged on the construction loan. If a project runs over by several months, thousands of dollars of additional interest costs are absorbed into the cost of the home. In the case of DIA, about $300 million was absorbed into the cost of the project due to the 16-month delay in opening the airport because of problems with the baggage system. All told, capitalized interest for the entire construction period was $915 million. Your office compared data in Leigh Fisher Associates’ report supporting the September 1994 bond sale to Denver Airport System’s financial statements as of December 31, 1993, and raised two questions. First, annual amounts payable on bond debt were lower in the Leigh Fisher Associates report compared to the audited financial statements by $69 million to $118 million a year for the years 1995 through 2000. Second, total bond debt was lower in the Leigh Fisher Associates report ($3.464 billion), than in the audited financial statements, adjusted for the September 1994 bond sale ($3.872 billion). 
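The cost figures above can be tallied to trace the construction budget's growth from $2.08 billion to roughly $3 billion, and the 16-month delay cost can be approximated with simple interest. A minimal sketch; the 5.9 percent blended bond rate below is an assumption chosen for illustration and does not appear in the report:

```python
# Reconciling the reported construction-cost growth (the report's figures
# are rounded, so the totals below are approximate).
budget_may_1990 = 2.08e9
increases_feb_1992 = {
    "widening and lengthening concourses": 250e6,
    "automated baggage system (initial)": 200e6,
    "terminal, electronics, apron, sixth-runway grading": 170e6,
}
estimate_feb_1992 = budget_may_1990 + sum(increases_feb_1992.values())  # ~$2.70B
estimate_feb_1994 = estimate_feb_1992 + 220e6                           # ~$2.92B
cost_at_opening = estimate_feb_1994 + 80e6                              # ~$3.00B

# Simple-interest approximation of the delay cost. The 5.9 percent
# blended rate on roughly $3.8B of bonds is an assumed value.
bonds_outstanding = 3.8e9
assumed_rate = 0.059
delay_years = 16 / 12
delay_interest = bonds_outstanding * assumed_rate * delay_years         # ~$300M

print(f"cost at opening ~${cost_at_opening / 1e9:.2f}B, "
      f"delay interest ~${delay_interest / 1e6:.0f}M")
```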
It is important to note that these two financial reports, while closely related, had different purposes and covered different time periods and scopes. The financial statements were audited as of December 31, 1993, and were designed to present the financial position at that date of the Denver Airport System, including both DIA and Stapleton, in accordance with generally accepted accounting principles. The Leigh Fisher Associates report was prepared as of August 18, 1994, and was designed to present financial forecasts for 1995 through 2000 for DIA based on certain assumptions about future events. Annual debt service requirements in the audited financial statements were based on the legal liabilities that existed on each of Denver's bond issues at the financial statement date. Annual debt service amounts, $69 million to $118 million a year lower, were reported in the Leigh Fisher Associates report based on certain assumptions about future events including (1) successful refinancing of the 1984/85 bonds, (2) prepayment of certain bonds with the proceeds of FAA grants, and (3) lower than maximum interest rates on variable rate bonds. Two of these assumptions have been realized: (1) bonds were refinanced in June 1995 at 5.7 percent interest and (2) interest of about 5 percent has been paid on variable rate bonds during 1995. Another primary reason for lower annual debt service amounts in the Leigh Fisher Associates report was the assumption that estimated passenger facility charge (PFC) revenues would be used to reduce debt service amounts. During its first 3 months of operations, DIA collected PFCs at amounts meeting or exceeding projections. Figure 2 and associated notes provide a detailed reconciliation and further explanation of the reasons for differences in annual debt service amounts reported in the audited financial statements dated December 31, 1993, and the annual debt service amounts reported in the Leigh Fisher Associates report dated August 18, 1994. 
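The mechanics behind the lower forecast amounts can be illustrated with hypothetical round numbers. Every input below is invented for illustration; the report does not break the $69 million to $118 million annual difference into these components:

```python
# Hypothetical round numbers -- NOT DIA's actual figures -- showing how
# forecast assumptions reduce projected annual debt service relative to
# the legal maximums reflected in the audited financial statements.
variable_rate_bonds = 500e6   # assumed principal of variable-rate bonds
maximum_legal_rate = 0.12     # rate the legal liability calculation must assume
forecast_rate = 0.05          # lower rate assumed (and later realized)

interest_saving = variable_rate_bonds * (maximum_legal_rate - forecast_rate)

pfc_revenue = 50e6            # assumed passenger facility charges applied to debt
forecast_reduction = interest_saving + pfc_revenue

print(f"annual debt service lower by ~${forecast_reduction / 1e6:.0f}M")
```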
Total Denver Airport System bonds payable on the December 31, 1993, audited financial statements, as adjusted for the September 1994 bond sale, were $3,871,950,000 (see figure 3). Bonds payable reported in Exhibit B of the Leigh Fisher Associates report totaled $3,464,019,000. Information in these financial reports differed because the financial statements included all debt of the Denver Airport System (including DIA and Stapleton debt), whereas the Leigh Fisher Associates report used Exhibit B to present only those bonds that provided funds to cover DIA construction and capitalized interest costs. Figure 3 and its accompanying notes present details on the differences between the two financial reports. While municipal securities are exempt from the registration requirements and civil liability provisions of the Securities Acts of 1933 and 1934, they are not exempt from the antifraud provisions of those acts. When allegations of fraud associated with a municipal bond issue are made, the SEC, at its discretion, may launch an investigation, as it has in the case of DIA. The SEC is currently investigating DIA's disclosures of information related to baggage system issues, including all Official Statements and supporting documentation covering the period from 1990 to the present. The SEC has not released any information on the results of its work because its investigation is ongoing. In response to your request for information on the potential applicability of the SEC's Rule S-X to DIA revenue bonds, we reviewed Rule S-X and met with SEC officials to discuss their application of Rule S-X and its companion, Rule S-K. These are the primary criteria SEC uses in regulating issuers of corporate bonds, but they are not requirements imposed on issuers of municipal bonds. 
Rule S-X covers the form and content of financial statements and requires that a corporate bond prospectus include 2 years of audited balance sheets and 3 years of audited income statements and cash flow statements. Rule S-K covers qualitative issues in a bond prospectus such as adequacy of disclosures, legal matters, and corporate general management issues. SEC officials told us that their review of corporate debt issuances applies a standard of whether disclosures were made in good faith on a reasonable basis when they were made. Further, this standard is applied principally to those disclosures of a material nature that could reasonably be presumed to affect an investor’s decision. Also, omission of material information is an important consideration. SEC officials emphasized that it is not possible to speculate if SEC jurisdiction over approval of DIA Official Statements would have resulted in different disclosures. The market for municipal securities has been largely unregulated at the federal level, basically due to broad exemptions in both the Securities Act of 1933 and the Securities Exchange Act of 1934. However, some changes began to occur in the 1970s in response to abusive practices by dealers in municipal securities and to increasing numbers of retail investors in this market. The Securities Acts Amendments of 1975 established a limited regulatory scheme for the municipal securities market through provisions for the mandatory registration of municipal securities brokers and dealers. Other actions taken by SEC in recent years have strengthened its stance on the quality of disclosures demanded of municipal bond issuers. SEC adopted Exchange Act Rule 15c2-12, requiring underwriters to obtain and review issuers’ Official Statements prior to selling bonds, and to provide copies of Official Statements to customers. 
SEC published a Staff Report on the Municipal Securities Market which underscored the need for improved disclosure practices in the primary and secondary municipal securities markets. SEC published the Statement of the Commission Regarding Disclosure Obligations of Municipal Securities Issuers and Others wherein it formalized its position regarding obligations of municipal securities issuers under the antifraud provisions of federal securities laws. Further, this document emphasized the importance of using audited financial statements and established procedures for disclosing material events subsequent to the initial offering. In response to your request, we compared the 1933 Securities Act’s and the 1934 Securities Exchange Act’s standard of liability for professionals involved with the preparation and issuance of Official Statements with standards imposed on professionals by Colorado law in the same regard. We found that Colorado, like a majority of the states, has substantially adopted section 101 of the Uniform Securities Act as a basic fraud provision. The antifraud provision in Colorado’s statute mirrors the federal antifraud provisions. Both make it unlawful for any person, in connection with the offer, sale, or purchase of any security, directly or indirectly, to defraud or “to make any untrue statement of a material fact or to omit to state a material fact necessary in order to make the statements made, in light of the circumstances under which they are made, not misleading.” In addition, we note that with respect to corporate, as opposed to municipal securities, section 11 of the 1933 act, as well as Colorado law, makes accountants civilly liable for material misstatements or omissions in corporate registration statements. 
Further, the SEC may bar any professional from appearing or practicing before it if the Commission finds that the professional has willfully violated any provision of the securities law, including both the antifraud provisions and the prohibition on material misstatements. We performed our work between January and July 1995 in accordance with generally accepted auditing standards. We have discussed the contents of this report with officials of the City of Denver, and they agree with its contents. Written comments from the Director of Aviation, DIA, of the City of Denver, are included in appendix I. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Transportation; the Director, Office of Management and Budget; the City of Denver; and interested congressional committees. We will also make copies available to others on request. Please contact me at (202) 512-9542 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. Thomas H. Armstrong, Assistant General Counsel Barbara Timmerman, Senior Attorney 
|
Pursuant to a congressional request, GAO reviewed selected financial issues relating to the Denver International Airport (DIA), focusing on: (1) DIA construction cost growth; (2) differences between the DIA financial consultant's report and audited financial statements relating to the Denver Airport System's bond debt; and (3) Securities and Exchange Commission (SEC) jurisdiction over municipal bonds and the status and scope of its DIA investigation. GAO found that: (1) DIA construction costs increased from an estimated $2.08 billion in May 1990 to $3.004 billion in February 1995; (2) the cost increases were due to changes in the scope of DIA and to a $300 million increase in capitalized interest caused by the delay in DIA's opening; (3) the two financial reports on DIA differed mainly due to their different purposes and the different time periods and scopes covered; (4) the financial statements covered both DIA and the Stapleton International Airport, while the consultant's report presented financial forecasts only for DIA based on certain assumptions about future events; (5) the differences in annual bond debt payments reflected the consultant's assumption that certain bonds would be refinanced in 1995, bond principal would be prepaid, lower interest rates would be paid on variable rate bonds, and passenger facility charges would be used to reduce annual debt service amounts; (6) the audited financial statements included all airport system debts while the consultant's report included only DIA construction bond debt; (7) municipal bonds are exempt from securities registration requirements and civil liability provisions, but they are subject to antifraud provisions; and (8) SEC is investigating DIA disclosures of its baggage system issues under its antifraud authority.
|
A paid preparer is any person who prepares for compensation, or who employs one or more persons to prepare for compensation, all or a substantial portion of a tax return or claim for refund of tax. Paid preparers prepared almost 60 percent of all federal tax returns filed in 2008 and 2009. IRS does not know how many paid preparers there are but estimates there are between 900,000 and 1.2 million. Prior to the new requirements for paid preparers, there were no national standards that a paid preparer was required to satisfy before being compensated for preparing a federal tax return. Currently, attorneys, certified public accountants (CPAs), enrolled agents (EAs), enrolled actuaries, enrolled retirement plan agents, and other individuals authorized to practice before IRS are subject to standards of practice under Department of the Treasury Circular No. 230. Most EAs are required to pass an examination and complete annual continuing education, while attorneys and CPAs are licensed by states but are still subject to Circular 230 standards of practice if they practice before IRS. Previously, other paid preparers were not regulated or required to pass a competency examination, complete continuing education, or adhere to the standards of practice in Circular 230. The states of Oregon, California, New York, and Maryland all regulate paid preparers, but oversight in each state varies. IRS has noted that the lack of uniform federal regulation of all paid preparers has resulted in greatly varying oversight of paid preparers, depending on the paid preparer's professional affiliations, or lack thereof, and the geographical area in which they practice. In previous work, we and the Treasury Inspector General for Tax Administration (TIGTA) found that some paid preparers made significant errors preparing tax returns, and we recommended that IRS conduct research to determine the extent to which paid preparers file accurate and complete tax returns. 
We also recommended that IRS develop a plan to require stricter regulation of paid preparers and suggested that Congress adopt a nationwide regulatory regime similar to Oregon's if it judged that Oregon's regime accounted for at least a modest portion of the state's higher federal tax return accuracy. In June 2009, the Commissioner of Internal Revenue initiated a review of paid preparers to help IRS strengthen its partnerships with paid preparers and ensure that paid preparers adhere to applicable professional standards and follow tax laws. IRS recommended changes to the oversight of paid preparers in its December 2009 Return Preparer Review report. These recommended changes included mandatory registration for paid preparers who are required to sign a federal tax return; competency testing and continuing education for paid preparers who are required to register with IRS and who are not attorneys, CPAs, or EAs; and holding all paid preparers to Circular 230 standards of practice, regardless of whether the preparers are required to sign a federal tax return. IRS intends these new requirements to improve service to taxpayers, increase confidence in the tax system, and increase taxpayer compliance. IRS has implemented a requirement that paid preparers obtain a PTIN if they prepare all or substantially all of a tax return filed after December 31, 2010. Figure 1 shows IRS's tentative schedule for implementing the other new requirements. In addition to the requirements shown in figure 1, IRS will require all paid preparers to adhere to Circular 230 standards of practice. Revisions to the Circular 230 regulations are currently being reviewed by the Office of Management and Budget (OMB), according to an official involved in the implementation of the new requirements. When the revisions to the Circular 230 regulations have been finalized, paid preparers will be required to adhere to its standards of practice. 
The dates for implementing the competency testing and continuing education requirements are tentative because OMB is currently reviewing the proposed revisions to the Circular 230 regulations. Because these proposed regulations are not final, IRS has not decided how it will implement some details of the competency testing and continuing education requirements. Nevertheless, the RPO Director discussed with us his thoughts on approaches IRS might take. Paid preparers may register for a PTIN online or on paper via Form W-12, IRS Paid Preparer Tax Identification Number (PTIN) Application. Paid preparers who currently have a PTIN must register in the new PTIN registration system but in most cases can retain their old PTIN as long as IRS can verify identifying information for the existing PTIN. Online registrants are supposed to receive a provisional PTIN immediately, while paper registrants are supposed to receive a provisional PTIN in 4 to 6 weeks. As of March 20, 2011, according to an IRS official involved in implementing the new requirements, IRS had issued 692,297 PTINs, approximately 60 percent of which were issued to paid preparers with existing PTINs and approximately 40 percent of which were issued to paid preparers without existing PTINs. Most, but not all, paid preparers were able to obtain a PTIN online. According to an official involved in the implementation of the new requirements, approximately 92 percent of paid preparers who attempted to obtain PTINs by the start of the filing season got them online. The rest either attempted to obtain a PTIN by paper or were directed to obtain a PTIN by paper, likely as a result of an online authentication issue. Officials and members of multiple paid preparer organizations stated that some preparers have encountered technical problems when using the PTIN registration system but also noted that IRS's administration of the PTIN registration system has improved. 
The RPO Director said that IRS has worked to address problems with the registration system since it was initiated. For example, married paid preparers whose last names differed from their spouses' and who filed tax returns under the married filing jointly status were experiencing difficulty obtaining a new PTIN. The RPO Director said that IRS solved this problem. For the 2011 tax filing season, IRS will allow paid preparers who are able to demonstrate a good faith effort to obtain a PTIN, but were unsuccessful, to use their old PTINs or Social Security numbers on tax returns. When applying for a PTIN, paid preparers are asked to self-disclose, under penalty of perjury, whether they are compliant with their personal and business taxes. The RPO Director said that IRS plans to initiate automated tax compliance checks on all paid preparers. IRS plans to limit the checks to whether the preparers have filed all federal tax returns and paid, or entered into an agreement to pay, federal tax debts. Paid preparers are also asked, under penalty of perjury, if they have been convicted of a felony in the past 10 years. The PTIN application includes space to write an explanation for both tax compliance and felony information. The RPO Director said that IRS plans to check the accuracy of registrants' tax compliance and background information by late 2011 and that registrants who provide false information on their PTIN applications will have severely limited appeal rights if IRS proposes to deny them PTINs. Paid preparers who are attorneys, CPAs, or EAs are asked to self-identify their professional credentials. The RPO Director said that IRS does not have a single-source database through which it can verify these professional credentials (it has information only on EAs). 
IRS plans to randomly sample attorneys and CPAs to verify their self-identification, and the RPO Director said that IRS is working toward developing a database that contains information about attorneys and CPAs that will allow for automated verification. IRS plans to hold paid preparers to Circular 230 standards of practice and will establish a new category of practitioner, the registered tax return preparer. These paid preparers will be limited in their practice before IRS to preparing tax returns, claims for refund, and other documents for submission to IRS but will be required to adhere to professional ethical standards when doing so or face a penalty. Additionally, paid preparers who are supervised by an attorney, CPA, EA, enrolled actuary, or enrolled retirement plan agent at a law firm, CPA firm, or other recognized firm and do not sign tax returns but obtain a PTIN, while not being granted rights to practice before IRS, will be required to meet the same standards. The RPO Director said that if paid preparers are denied a PTIN or have their PTIN revoked, they will have the right to appeal the denial or revocation in the same manner as other Circular 230 sanctions. Applicability of the new paid preparer requirements will vary by type of paid preparer, as shown in table 1. According to the RPO Director, some types of paid preparers will be exempt from the new competency testing and continuing education requirements because they are subject to competency testing and continuing education requirements set by their professional licensing bodies. IRS's proposed regulations amending Circular 230 will require individuals (see table 1 above) to pass a competency test to become an officially registered tax return preparer. Paid preparers who have a valid PTIN before competency testing is available will have until 2013 to pass a competency test. 
Paid preparers who register for a PTIN after testing is available must pass a competency test before obtaining a PTIN. The RPO Director said that IRS is allowing this delay in testing for preparers who register for a PTIN before testing is available to encourage paid preparers to register for a PTIN as soon as possible while giving them time to prepare for the competency test. The RPO Director also said that IRS plans to develop and implement one competency test for individuals who prepare returns from the individual tax return (Form 1040) series and will assess whether IRS needs to add additional tests in the future. The RPO Director also said that IRS plans to make the test available at national and international locations, to allow individuals to consult forms and instructions during the test, and to charge individuals a fee each time they take it. After completing the competency test, registered tax return preparers will be subject to suitability checks, which IRS plans to conduct to determine whether the individual has engaged in disreputable conduct. According to the RPO Director, IRS plans to link suitability checks for registered tax return preparers to the competency test so that when paid preparers take the competency test they will be fingerprinted, thereby submitting to a suitability check. IRS plans to implement a continuing education requirement, whereby registered tax return preparers (see table 1 above) will be required to take 15 hours of training annually: 3 hours of federal tax law updates, 2 hours of ethics, and 10 hours of additional federal tax topics. The RPO Director said that IRS plans to approve continuing education providers and audit a random sample of continuing education courses. In support of the new requirements for paid preparers, IRS established a communications team and vested in it responsibility for educating paid preparers and taxpayers about the new requirements. 
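The continuing education requirement described above is a fixed set of annual hour minimums (3 + 2 + 10 = 15 hours). A minimal sketch of checking a preparer's record against those minimums follows; the record format and function name are hypothetical, used only to illustrate the hour breakdown:

```python
# Hour minimums from the continuing education requirement described above.
REQUIRED_HOURS = {
    "federal_tax_law_updates": 3,
    "ethics": 2,
    "other_federal_tax_topics": 10,
}  # 15 hours total

def meets_ce_requirement(completed: dict) -> bool:
    """Return True if each category meets its annual minimum (hypothetical check)."""
    return all(completed.get(cat, 0) >= hours for cat, hours in REQUIRED_HOURS.items())

record = {"federal_tax_law_updates": 3, "ethics": 2, "other_federal_tax_topics": 11}
print(meets_ce_requirement(record))  # True: all category minimums met
```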
In prior reports, we have discussed the importance of focusing on external communications as a key internal control standard and identified key practices for communicating with the public about a new initiative. In line with these key practices, IRS's communications team, for example, prepared an action plan, identified stakeholders to engage, and developed a standardized message that it distributed in different formats: presentations at IRS Nationwide Tax Forums, executive talks to industry groups, and written correspondence with tax professionals. Consistent with established criteria for improving the usefulness of communication, the RPO Director said that IRS plans to develop secure online mailboxes for paid preparers that will be used for communication between IRS and paid preparers. The official leading IRS's communications team said that IRS has not yet developed a plan for how it will monitor and evaluate the success of its outreach efforts, a key practice for communicating with the public. Because the requirements are just beginning to be implemented, the effectiveness of the outreach campaign will not be known for some time. Officials and members of paid preparer associations we interviewed said that IRS has conducted an effective outreach campaign. However, officials and members of one paid preparer association we interviewed are concerned that some paid preparers remain unclear about the applicability of the new requirements to certain types of paid preparers, and officials and members of two paid preparer organizations we interviewed said that some paid preparers have likely not heard of the new requirements. 
In its strategic plan for 2009-2013, IRS established strategies designed to help it meet its objective of ensuring that paid preparers adhere to Circular 230 standards of practice and follow the law, including penalizing paid preparers who do not follow tax laws and leveraging research to identify fraudulent and noncompliant paid preparers. According to the RPO Director, IRS plans to implement initiatives intended to ensure paid preparers' compliance with the new requirements but has yet to make many decisions because it is waiting for information from the PTIN registration system that will allow it to implement effective initiatives. During the first year after the PTIN requirement has been implemented, IRS plans to focus on bringing paid preparers into compliance and improving its communications and outreach, not on penalizing paid preparers for noncompliance, according to the RPO Director. For example, the RPO Director said that IRS plans to contact paid preparers who file returns signed with an old PTIN, an SSN, or another identification number after the filing season is over. The RPO Director said that IRS will direct them to obtain a PTIN that will retroactively cover their practice during the recently completed filing season. The RPO Director also said that in cases of egregious noncompliance, such as paid preparers ignoring an IRS contact directing them to use a PTIN, IRS plans to contact paid preparers directly. IRS has the authority to penalize paid preparers who are required to but fail to include a PTIN on a tax return. IRS has undertaken one initiative for ensuring paid preparer compliance with the new requirements and is evaluating other future compliance initiatives. IRS sent letters in November 2010 to 10,000 paid preparers to remind them of their responsibility to comply with requirements for paid preparers, including registering for a PTIN. 
According to officials involved in the implementation of the new requirements, IRS is visiting some of the paid preparers who received letters to confirm their compliance based on an analysis of IRS visits to paid preparers in 2010. An official involved with the implementation of the new requirements also said that IRS plans to evaluate the results of the visits. The RPO Director also said that IRS plans to identify individuals who prepare tax returns for others but do not sign the tax return as paid preparers, and is currently evaluating methods by which it might do so. Additionally, the RPO Director said that IRS seeks to develop a risk-based scoring model to maximize the efficacy of its compliance efforts. IRS plans to launch a publicly accessible database of all registered paid preparers by January 31, 2014, so that taxpayers can check whether a paid preparer has registered. The RPO Director said that the database will likely include preparers’ contact information, whether or not preparers have passed the competency test, professional credentials, and tax preparation legal problems, if applicable. The RPO Director also said that IRS will not launch the database until it is sure it has the capability to rapidly respond to any associated problems with the data because paid preparers mistakenly identified as noncompliant could be negatively affected financially. IRS is funding the administration of the paid preparer requirements through user fees for PTIN registration, competency testing, and continuing education. IRS has only determined the user fee for PTIN registration so far, which is $50 per PTIN. IRS contracted with a vendor to establish and maintain the PTIN registration system, and the vendor will charge a $14.25 fee, bringing the total fee for PTIN registration to $64.25. 
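The total registration charge is simply the sum of the two components described above, IRS's user fee plus the vendor's fee:

```python
IRS_FEE = 50.00     # IRS's PTIN registration user fee
VENDOR_FEE = 14.25  # third-party vendor's charge for running the registration system

total_fee = IRS_FEE + VENDOR_FEE
print(f"Total PTIN registration fee: ${total_fee:.2f}")  # $64.25
```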
In determining the level of the PTIN registration user fee, IRS has taken actions or made plans consistent with established criteria for setting user fees and using the resulting revenue. These criteria, which we identified in prior reports, include a set of key questions that should be considered when designing and implementing user fees and best practices for developing cost estimates. Key questions to consider when designing and implementing a user fee are contained within four primary components: setting, collecting, using, and reviewing. Table 2 below shows key questions to consider when setting a user fee, key criteria for establishing a credible estimate of a program’s costs, and IRS’s actions in setting the PTIN registration user fee. IRS identified key costs associated with PTIN registration and grouped them into five categories: (1) foreign paid preparer registration processing, (2) paid preparer program compliance, (3) communications and customer support, (4) IT, and (5) operations support. Approximately 75 percent of the costs IRS plans to cover with the PTIN registration user fee are variable and are contained within the two categories of foreign paid preparer registration processing and paid preparer program compliance, which includes tax compliance and criminal background screenings for paid preparers. To calculate the cost of foreign paid preparer registration processing, IRS estimated the number of PTIN registrants who will be foreign paid preparers and the cost to process each registration. To calculate the cost of screening paid preparers for tax compliance and a criminal background, IRS extrapolated to the paid preparer requirements the costs of screening individuals applying to become IRS e-File providers for tax compliance and a criminal background. The RPO Director acknowledged that these estimates are uncertain and therefore the actual costs could be higher or lower. 
Approximately 25 percent of the costs IRS is planning to cover with the PTIN registration user fee are fixed and are contained within the remaining three categories: communications and customer support, IT, and operations support. For these three categories, IRS estimated various component costs, including staff salary and benefits. IRS developed these cost figures assuming that as many as 1.2 million individuals will register for a PTIN, an estimate based on the number of individuals who signed tax returns as paid preparers in 2006 with a PTIN, SSN, or other identification number. IRS has acknowledged that this estimate is uncertain. Because these costs are fixed, their average will depend on the number of paid preparers who register for a PTIN. In addition to setting the user fee, there are key questions that should be addressed when implementing a new user fee that cover collecting, using, and reviewing the fee. Table 3 shows IRS’s actions to date, as well as planned actions, to address these key questions. An official who helped to estimate the PTIN registration user fee acknowledged that the PTIN registration cost estimates are uncertain and subject to change. The official stated that IRS plans to conduct a first review of the PTIN registration user fee in the summer of 2011. Additionally, the RPO Director said that IRS will be able to change the user fee following the review if actual costs are higher or lower than predicted. IRS has discussed but not documented a framework for how it plans to develop service and enforcement efforts that leverage the new paid preparer requirements to improve taxpayer compliance. Likewise it has not developed a framework for evaluating the effect of any planned service and enforcement efforts or the effect of the requirements themselves on improving taxpayer compliance. One of IRS’s goals for the paid preparer requirements is to better leverage the tax preparer community to improve taxpayer compliance. 
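As noted above, because roughly a quarter of the costs are fixed, the amount recovered per registrant falls as the number of registrations rises. The sketch below illustrates this with IRS's registrant estimates (900,000 and 1.2 million); the fixed-cost dollar total is a hypothetical figure chosen only for illustration, since the report does not state it:

```python
# HYPOTHETICAL fixed-cost total, for illustration only; the report does not
# give the dollar amount of the fixed costs to be recovered by the fee.
hypothetical_fixed_costs = 10_000_000

# IRS's range of estimates for the number of PTIN registrants.
for registrants in (900_000, 1_200_000):
    per_registrant = hypothetical_fixed_costs / registrants
    print(f"{registrants:>9,} registrants -> ${per_registrant:,.2f} per registrant")
```

The same fixed total spread over 1.2 million registrants yields a lower per-registrant cost than over 900,000, which is why the fee level depends on how many preparers actually register.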
The RPO Director shared with us ideas on how to achieve that goal. For example, according to the RPO Director, IRS plans to develop a comprehensive database containing information on paid preparers and the tax returns they prepare. IRS plans to use information from this database to test which strategies are most effective for improving the quality of tax returns prepared by different types of paid preparers. Likewise, IRS has discussed how to measure the effect of the requirements, for example, the effects that requiring continuing education and testing have on tax return accuracy. In planning, the RPO has included other IRS divisions, such as the Small Business/Self- Employed division, which is responsible for examining tax returns, and the Research, Analysis, and Statistics unit, which will help monitor and evaluate whether the new requirements improve taxpayer compliance. Although IRS discussed with us its planned approaches for using the requirements to improve taxpayer compliance, it has not yet produced a document that lays out this approach. Likewise, as discussed previously, IRS has yet to decide how it will enforce paid preparers’ compliance with the requirements. Documenting a framework for using the requirements and measuring their effect is consistent with three steps we found leading public sector organizations take to increase the accountability of their initiatives: (1) define clear missions and desired outcomes; (2) measure performance to gauge progress; and (3) use performance information as a basis for decision-making. IRS has defined an overarching desired outcome of increasing taxpayer compliance and could increase its accountability by including the next two steps in a documented framework. Likewise, we have reported that it is important to develop assessment plans prior to full project implementation in order to ensure that the data necessary for evaluation are collected. 
We also previously reported that we were unable to assess whether California’s and Oregon’s paid preparer requirements led to improved return accuracy because data were not available on return accuracy prior to the enactment of the requirements. Both IRS and we have acknowledged the importance of measuring performance, including using baseline data and having intermediate and end outcomes. Since the PTIN registration requirement has been implemented and IRS plans to implement the other requirements gradually, it is important for IRS to identify and collect baseline data to have a basis by which to measure the effect of the requirements, IRS’s strategies to leverage the requirements to increase taxpayer compliance, and the strategies’ relative costs. In addition, we have also reported that establishing a timeline that includes critical phases and essential activities that need to be completed by particular dates to achieve results is important for accountability. The timeline can help pinpoint performance shortfalls and gaps, suggest midcourse corrections, and demonstrate progress toward goals. The RPO Director stated that IRS decided to begin implementing the requirements before determining how to use them to improve taxpayer compliance and measure their impact because it would take less time than waiting to implement all of the requirements until it documented its plans. As noted above, IRS’s approach to implementing the requirements is sequential, so the details of its compliance strategy will not be known for some time. However, not documenting the basic framework being followed may create problems. The lack of a documented framework may have negative repercussions for several reasons. 
First, without a documented framework, the various IRS divisions and offices involved in implementing the new requirements may have difficulty assessing whether there is a sound analysis plan and whether adequate plans are in place to collect the data needed to carry out the analysis. Without such assessments IRS is at risk of incurring additional evaluation costs by, for example, conducting unplanned data analyses, collecting irrelevant data, or failing to collect needed data in a timely manner. Second, the RPO Director stated that IRS does not know when the comprehensive database on paid preparers will be completed because there are many competing priorities for IRS resources. A documented framework with proposed steps and a timeline could help IRS make more informed resource allocation decisions. Third, members and officials from paid preparer associations whom we interviewed stressed that it is essential for IRS to evaluate whether the requirements are improving taxpayer compliance, and some stated that the requirements will be worthwhile only if they result in an improvement. The impact of these requirements depends on the compliance of paid preparers and paid preparers bear the burden of complying with the requirements. Demonstrating to paid preparers that IRS will evaluate whether the requirements provide the benefit of improved taxpayer compliance could improve preparers’ voluntary compliance with the requirements. The framework will likely evolve over time and become more detailed. Initially, the framework may be a high level road map for achieving taxpayer compliance results sooner and perhaps at a lower cost and could include information on IRS’s strategies and tactics for improving taxpayer compliance and what data need to be collected now. The framework may change as IRS assesses the effectiveness of the paid preparer requirements and future strategies for using the requirements to improve taxpayer compliance. 
IRS has made much progress in starting to implement the new paid preparer requirements, including educating paid preparers about the requirements, implementing the PTIN requirement, and developing a PTIN user fee. To launch this important initiative and realize benefits sooner, IRS began implementing the requirements before laying out strategies for how to leverage them and measure their impact. Implementation is under way, but IRS has not documented a framework for how to achieve the goal of improving taxpayer compliance. Without such a documented framework to guide its overall effort, IRS may not adequately or effectively identify and collect key baseline data now, modify its strategies to improve outcomes, allocate its resources most effectively given competing priorities, or maximize paid preparers’ compliance with the requirements. Initially, the framework may not be detailed. Instead, it may evolve as IRS develops and assesses additional strategies. We recommend that the Commissioner of Internal Revenue document a strategic framework showing how IRS intends to use the paid preparer requirements to improve taxpayer compliance and assess their effectiveness. In a letter commenting on a draft of this report, IRS agreed with the recommendation. IRS stated that it has begun working on a strategic framework and plans for the final product to detail the overall mission, vision, and goals to ensure return preparer oversight will ultimately achieve improved taxpayer compliance and tax administration. IRS also provided technical comments, which we incorporated as appropriate. IRS’s comments are reprinted in appendix II. We are sending copies of this report to the Commissioner of Internal Revenue and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To describe IRS’s plans for implementing and ensuring compliance with the new paid preparer requirements, we reviewed documents, including Treasury’s proposed and final regulations containing the new requirements. Additionally, we interviewed officials from IRS’s Return Preparer Implementation Project Office and Return Preparer Office (RPO) responsible for implementing the paid preparer requirements. To describe IRS’s outreach campaign to inform paid preparers of the new requirements, we reviewed IRS documents on communication with external stakeholders and the public about the new requirements. Additionally, we interviewed the official responsible for leading IRS’s communication with external stakeholders and the public. We analyzed this information against key communications internal control standards we identified in GAO’s Internal Control Management and Evaluation Tool and key practices for communicating with the public about a new initiative that we identified in GAO’s Digital Television Transition: Increased Federal Planning and Risk Management Could Further Facilitate the DTV Transition. To assess IRS’s resource estimates to develop and implement the new requirements, we reviewed IRS documents on the preparer tax identification number (PTIN) user fee that IRS is charging paid preparers for obtaining a PTIN and interviewed officials from the Return Preparer Implementation Project Office, RPO, and IRS’s Chief Financial Officer’s office. We examined this information using key questions that agencies should consider when developing and implementing user fees that we identified in GAO’s Federal User Fees: A Design Guide and best practices that agencies should follow when developing cost estimates that we identified in GAO’s Cost Estimating and Assessment Guide.
In table 2, we examined key questions to consider when setting a user fee, key criteria for establishing a credible estimate of a program’s cost, and IRS’s actions in setting the PTIN registration user fee. In table 3, we examined key questions to consider when collecting, using, and reviewing a user fee, and IRS’s actions and planned actions in collecting, using, and reviewing the PTIN registration user fee. We determined whether IRS had considered the key questions and criteria and did not examine the appropriateness of the specific program costs that IRS plans to fund with the PTIN user fee. To assess IRS’s plans to use the requirements to improve taxpayer compliance and evaluate the effect of the paid preparer requirements, we reviewed IRS documents and interviewed Return Preparer Implementation Project Office and RPO officials. We examined this information using IRS’s plans in the December 2009 Return Preparer Review and its guidance on measuring performance in the Internal Revenue Manual Exhibit 1.5.1-5, Process to Create a Performance Model for New (or Revised) Programs. We also examined this information using our past work on evaluating a program in Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers; Executive Guide: Effectively Implementing the Government Performance and Results Act; Tax Administration: Planning for IRS’s Enforcement Process Changes Included Many Key Steps but Can Be Improved; Designing Evaluations; Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations; and Tax Preparers: Oregon’s Regulatory Regime May Lead to Improved Federal Tax Return Accuracy and Provides a Possible Model for National Regulation. In addition, for all three objectives we interviewed members and officials of paid preparer associations that IRS had convened as industry stakeholders, which included the major types of paid preparers that IRS intended the requirements to cover.
These associations were the American Bar Association, American Institute of Certified Public Accountants, National Association of Enrolled Agents, National Society of Accountants, and National Association of Tax Professionals. We also interviewed a representative from H&R Block, a retail tax return preparation chain that IRS consulted as part of an independent preparer panel, and representatives of two additional return preparer associations, the American Payroll Association and American Society of Pension Professionals and Actuaries, that included types of paid preparers that IRS, at the time, intended the requirements to cover. IRS has since decided that individuals who prepare only employee benefit plan returns are not covered by the requirements. We summarized the members’ and officials’ responses to a variety of questions about the paid preparer requirements. We shared the criteria on which we based our descriptions and assessments in our three objectives with IRS during the course of our audit work. We conducted this performance audit from July 2010 to March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact person named above, Jeff Arkin, Assistant Director; Amy Bowser; Maya Chakko; Ellen Grady; Donna Miller; Cindy Saunders; and Dan Webb made key contributions to this report.
Paid preparers prepare about 60 percent of all tax returns filed, and their actions significantly affect the Internal Revenue Service's (IRS) ability to administer tax laws. Previously, GAO found that some preparers made significant errors in preparing tax returns and proposed stricter regulation of preparers. IRS is implementing new requirements for paid preparers that it believes will increase tax compliance, which will reduce the gross tax gap between taxes owed and taxes paid, last estimated at $345 billion for 2001. GAO was asked to (1) describe IRS's plans for implementing and ensuring paid preparer compliance with the requirements; (2) assess IRS's resource estimates for the requirements; and (3) assess IRS's plans to use the requirements to improve taxpayer compliance and evaluate their effect. To meet these objectives, GAO reviewed IRS planning documents and interviewed IRS officials and representatives and members of paid preparer associations. IRS has implemented a registration requirement for paid preparers that includes obtaining a preparer tax identification number (PTIN) and plans to implement competency testing and continuing education requirements. IRS also plans to require paid preparers to adhere to standards of practice; the revised standards are currently being reviewed by the Office of Management and Budget. In addition, IRS has conducted an outreach campaign consistent with key practices to inform paid preparers of the new requirements. For example, IRS developed a standardized message that it distributed in different formats. IRS is developing strategies for how to ensure that paid preparers comply with the new requirements, according to the director of IRS's Return Preparer Office. IRS is funding the paid preparer requirements through user fees, which it is setting consistent with established criteria for cost estimating.
For example, in setting the PTIN user fee to ensure it covered program costs, IRS identified key costs associated with registration, estimated fixed costs, and based some variable costs on similar registration efforts. IRS has discussed but not documented a framework for how it plans to use the requirements to improve taxpayer compliance. For example, IRS plans to develop a comprehensive database containing information on paid preparers and related tax returns. Also, IRS has yet to document how it will assess the requirements' effect, for example, by identifying what baseline data IRS needs. Without a documented framework, IRS may have difficulty (1) assessing whether it has adequately planned for what data it needs to collect and (2) deciding how to allocate resources given competing priorities. A framework could also help assure paid preparers, who bear the burden of complying with the requirements, that IRS will assess whether the requirements provide their intended benefit. GAO recommends that IRS document a framework for using the paid preparer requirements to improve taxpayer compliance and evaluate their effect. In commenting on a draft of this report, IRS agreed with the recommendation.
The U.S. government participates in a number of international organizations established to serve specialized but limited functions. Membership in these organizations is generally restricted to national governments, and they have comparatively small budgets. Although some of the organizations permit nongovernmental entities to participate in their activities, only member governments have voting rights to set policy agendas and budgets (with one exception, the World Conservation Union). The organizations depend largely on membership dues to finance their operations, but each uses a different basis to assess contributions from member governments. In most instances, the organizations permit memberships to be withdrawn only after 1 year’s prior written notification. In 1995, State received funding for 26 “other” special-purpose international organizations through appropriations made to its Contributions to International Organizations account. Our review included all of these organizations except the World Trade Organization, which was in an early formative stage. As your office requested, we also included in our review two inter-American organizations (the Inter-American Indian Institute and the Pan American Railway Congress Association). Table 1 shows 1995 data on the U.S. government’s assessed dues for the 27 organizations we reviewed, the U.S. assessment rates, and the percentage of professional staff who are U.S. citizens for each organization. In response to congressional directives, State conducted a comprehensive review beginning in May 1995 to decide whether each international organization to which it makes assessed contributions continued to serve important U.S. interests. However, State did not report to the Congress on the results of this review until December 1996 (after it was asked to comment on a draft of our report). State officials told us the review consisted of a series of interagency meetings to discuss raw assessment data provided by the key U.S. 
stakeholders in the organizations, but that they did not prepare a formal record of the review or at that time prioritize funding by organization. State officials said that assigning a priority to each organization would have been very difficult, given the differences in their size, mission, cost, and program effectiveness. Nonetheless, as a result of its review, in December 1995 State informed three entities—the International Cotton Advisory Committee (ICAC), the Pan American Railway Congress Association, and the World Tourism Organization—that the United States intended to withdraw from them. State acknowledged that the withdrawals were budget driven, but it also justified the withdrawals on the basis that the private industries that were the focus of these organizations were already adequately served. A State official said that while there may be other organizations that the United States could withdraw from in the future, decisions on withdrawal would likely continue to be hindered by a lack of quantitative performance indicators for each organization and by objections raised by the organizations’ political supporters and constituency groups. In testifying on the administration’s fiscal year 1997 budget request before a House Subcommittee on May 2, 1996, the U.S. Permanent Representative to the United Nations said the criteria that are applied in determining whether to retain membership in international organizations are (1) the level of direct U.S. benefit in political, strategic, or economic terms determined on the basis of consultations with end users; (2) the percentage of the organization’s budget that is devoted to activities that benefit the United States; (3) the scope and depth of the U.S. 
constituency; (4) the relevancy of the organization’s mandate to contemporary global issues; (5) the organization’s program effectiveness and quality of management; (6) the organization’s budgetary restraint and transparency; and (7) the organization’s responsiveness to the U.S. government’s overall reform efforts. State’s December 1996 report to the Congress assembled the 50 organizations, including the 27 discussed in this report, into 3 broad cluster groups according to a priority ranking based on the importance of their mandates to the U.S. national interest and their cost-effectiveness. These cluster groups, in order of priority, were (1) peace and security; (2) health, safety, and economic well-being; and (3) selective interest. Our analysis indicated that none of the 27 organizations discussed in this report were included in State’s top priority category (peace and security); 4 were in State’s second priority category (health, safety, and economic well-being); 20 were in State’s third priority category (selective interest); and 3 were no longer being funded by State. As a further delineation of priority, State’s report showed that, of the four organizations discussed in our report that fell within the second priority category, contributions to one would be reduced by 18 percent and contributions to three would be reduced by 19 percent from the full requirement for fiscal year 1997. Similarly, of the 20 organizations discussed in this report that were in State’s third priority category, contributions to 11 would be reduced by 19 percent; 1 would be reduced by 21 percent; 7 would be reduced by 23 percent; and 1 would receive no funding for fiscal year 1997. (See tables 2 and 3.) For most of the organizations that we examined, U.S.
government officials we contacted believe either that the benefits derived from them clearly exceeded the cost of membership or that it was very worthwhile for the United States to be represented and have an active voice in their activities, but there were mixed views on the value of continuing membership in some organizations. U.S. government officials also stated that in many cases the organizations serve specific U.S. government or commercial interests that cannot be served as efficiently by other means. Further, they considered most of the organizations’ program focus to be generally clear, valid, and in conformity with U.S. interests, but some primarily benefited their related industries. U.S. officials in many instances were active and influential participants in the organizations—often serving on their governing boards and with some Americans serving in top management posts. In general, U.S. government participation in these organizations is designed to help ensure that U.S. interests are fairly and equitably considered in international commercial activities and disputes, and that the United States has access to vital public health, transportation safety, and other information. U.S. participation also allows active engagement in exchanging and promoting ideas for reducing trade barriers, unifying common standards of business trading practice (such as weights, measurements, and quality control), influencing environmental policy, providing voluntary support for conservation programs and sustaining endangered natural resources, and deliberating other issues of broad public interest. These are matters that officials from the relevant agencies told us the U.S. government either cannot do alone or cannot address as effectively through other bilateral or multilateral means. Nonetheless, there may be opportunities for cost savings in some of the organizations. For example, the assessed U.S.
rates for two organizations (the Customs Cooperation Council and the International Center for the Study, Preservation, and Restoration of Cultural Property), both based on U.N. formulas at 25 percent, are significantly higher than those for most of the other special-purpose international organizations. Although U.S. officials see no viable alternative at this time to membership in the customs organization to support the broad trade interests it serves, its work is closely tied to that of the World Trade Organization, to which the United States pays a much lower (15 percent) assessment rate. The cultural property organization, by contrast, has a narrow and important national historic (but not foreign policy-related) constituency and, though U.S. officials generally consider it to be well managed, the benefits are difficult to quantify and some officials believe that they do not appear to be proportionate to the cost. Our review also found that State has addressed to some extent the issue of whether functions or organizations could be combined or whether similar services were available from other sources that could eliminate possible areas of overlap and duplication. For example, a possible merger of some functions between organizations (including the IBWM/IOLM and IARC/World Health Organization) had been identified and was being examined by the respective organizations as a way to achieve cost savings. Also, rapid technological change may soon permit private sector sources to translate customs tariff schedules at less cost than IBPCT, and we found some areas of possible overlap between certain organizations, such as those involving the tropical timber (ITTO) and vine and wine (IOVW) groups and the Food and Agriculture Organization, that U.S. officials had not fully addressed or resolved.
We noted that five commodity organizations—four that produce and disseminate market data and one that helps stabilize raw material supplies and prices through a stock fund—were all designed to primarily benefit their related industries; and officials we interviewed indicated that three others in which the U.S. government participated had minimal benefits. However, as discussed below, there are also reasons for retaining membership in them. The primary functions of ICAC, ICSG, ILZSG, and IRSG are to produce information on worldwide production and consumption of individual commodities, information that primarily benefits the related industries but provides less direct or essential benefit to the U.S. government. Nonetheless, there are benefits to U.S. membership. According to government officials we interviewed, the information the organizations develop on worldwide production and consumption of the respective commodity is objective and current, and generally not available elsewhere. In addition, the organizations provide a useful forum for encouraging or promoting intergovernmental and business cooperation and exchanging views on matters of joint interest without violating antitrust laws. However, based on the criteria adopted by State, the question appears to be whether government or public interests are sufficiently served by membership in these organizations to justify continued financing of activities that primarily benefit specific U.S. industries. U.S. membership in the organizations seems to be especially important to specific industry groups, which participate actively in them at their own expense. They send representatives to parliamentary meetings and working group sessions (their experts have been selected to serve on technical study groups), finance cooperative projects, and generate subscriptions and other fees that reduce the cost to member countries.
The International Natural Rubber Organization administers an international natural rubber agreement, which the United States has participated in since it took effect in 1979. The agreement was designed to reduce price volatility and ensure an adequate supply of natural rubber by managing a buffer stock. In September 1996, the U.S. Senate ratified the agreement to participate for an additional 4 years. As the world’s largest consumer of natural rubber and with just three countries—Thailand, Indonesia, and Malaysia—producing 75 percent of the world’s rubber supply, the United States has a significant interest in assuring an adequate long-term supply of this commodity at reasonable and stable prices. The executive branch supported the agreement’s extension, but expressed a preference for free market forces to operate in the belief that they better serve the interest of consumers and producers. However, it believed that the rubber industry needed more time to develop alternative institutions to manage market risk. Nevertheless, several unresolved issues emerged during the debate, including whether the agreement resulted in lower prices for U.S. consumers and whether the level of cash reserves used to support it—the current U.S. share of which is about $80 million—is needed and adequately safeguarded. The executive branch has made clear its intention that this will be the last agreement extension the United States will join. U.S. participation in the Interparliamentary Union (IPU) is within the province of the Congress and not a matter for the executive branch to decide. IPU was the first worldwide political organization to promote the concept of international peace and cooperation. While its goals are similar to those of the United Nations, IPU differs from it in that it seeks to improve personal contact between delegates of member nations’ parliamentary groups by restricting membership to elected participants of these legislative bodies.
The United States participated in its first meeting and has been a member since its establishment in 1889. Membership gives congressional delegates the opportunity to discuss with foreign colleagues—especially those from emerging democracies—U.S. principles of multiparty democracy and rule of law. IPU also enables them to share their experiences relating to the legislative process and executive-legislative-judiciary relations. However, Members of Congress have not been active IPU participants in recent years. We found that no Senator has attended any IPU meeting since 1989, and no Representative has attended any IPU meeting since March 1994. State officials and congressional staff attributed the inactive U.S. participation in the organization in recent years to changes in the Congress and inconvenient scheduling of IPU meetings (its meetings are normally held in April and September, when the Congress is in session). IPU also sought to raise the U.S. assessment rate from 12.58 percent ($1.1 million in 1995) to 15 percent, which is above the statutory limitation of 13.61 percent. The administrative responsibility for IPU shifts with each Congress and, for the 104th Congress, it rested with the House of Representatives (administered by the Clerk’s Office). Fiscal year 1996 appropriations legislation initially held up IPU funding until IPU agreed to reverse the proposed assessment increase and adjust its schedule to better accommodate U.S. participation. The House leadership subsequently agreed to continue U.S. participation in IPU and maintain the assessment at the prevailing rate. The Bureau of International Expositions provides for orderly scheduling and planning of international expositions. As such, it primarily serves those member governments whose cities are vying to hold such events.
The United States joined BIE in 1968, 40 years after its creation, with the aim of ending a then-existing proliferation of officially sanctioned expositions and assisting U.S. cities that were bidding to host them. Since then, the frequency of expositions has been drastically reduced and no U.S. city is currently seeking to host any scheduled international exposition. Moreover, recent funding for U.S. pavilions at expositions has been provided entirely from the private sector. The U.S.-assessed contribution for BIE is modest ($33,000 in 1995), but it pays the highest assessment rate (8.9 percent) of any member nation. The assessment rate is based in part on the U.N. scale of assessments and on the member states’ size and economic production. State and other agency officials said that there was strong sentiment both in favor of and in opposition to U.S. membership in BIE. Proponents argue that the membership could be justified if the federal government seeks to continue to officially support and maintain an active role in determining where and how future world’s fairs are to be held. They further contend that it might be in the public interest to assist potential sponsors in attaining the rights to hold future events since memberships are limited to national governments and BIE members are in more advantageous policy decision-making positions. However, those who oppose continued U.S. membership in BIE say that such official sponsorship is unnecessary and that the chief U.S. goal of more orderly scheduling of world’s fairs has been met. IAII, a specialized organization of the Organization of American States (OAS), serves as a research center and forum for member states to plan for the economic, social, and cultural advancement of Native Americans. Although U.S. budget support has demonstrated solidarity with Central and South American countries that have large Indian populations, U.S.
officials have been dissatisfied with IAII management and its activities in recent years and have shown little interest or involvement in the organization. In response to reform efforts encouraged and led by Mexico and the United States, IAII installed a new director in 1996 who is reported to be making positive structural changes in the organization. In the meantime, State has adopted a “wait-and-see” approach regarding future U.S. funding and participation. The United States does not recognize IAII’s assessment rates, which are based on outdated Indian population figures. Instead, it has capped its assessment contribution at $120,000 annually, which in 1995 represented 44 percent of IAII’s budget. Although the rate is high relative to other participants (Mexico paid 30 percent in 1995, with no other country paying more than 4 percent; Canada is not a member), it is less than what the United States would have to pay if the assessment rate were based on the current OAS scale (59 percent) or on gross national product data (estimated at 80 percent). No funding was provided to IAII in fiscal year 1996 and congressional conferees have agreed that none should be given in fiscal year 1997. State officials said they recognize that stringent government budgets make it imperative that costs be kept low in all areas, including the cost of membership in international organizations. Thus, they have attempted to link funding decisions for the small special-purpose international organizations to performance indicators, established a more systematic budget review and coordination process, and tried to secure increased private sector funding for the organizations in an effort to keep assessed contributions low. State’s Bureau for International Organization Affairs is responsible for these efforts and is assisted by the designated State contact point and interagency group that have the lead or significant program responsibility for U.S.
interests in the international organization’s work. Travel and accreditation to conferences are handled by State’s Office of International Conferences. In June 1995, State’s Bureau for International Organization Affairs revised its budget policy from one of having zero real growth for U.S. participation in international organizations (which had been in effect since 1986) to one of seeking actual reductions in the organizations’ budgets through a combination of improved program management, structural reform, and indicators that can be used to measure management performance. Exceptions to this policy were to be dealt with on a case-by-case basis. According to Bureau officials, the budget review process has been facilitated by requiring the organizations to submit audited financial statements and closely coordinating U.S. budget positions with officials from the U.S. agencies having lead programming responsibility. Bureau officials make the final determination concerning the U.S. position on an organization’s budget and provide instructions to U.S. delegates in advance of the organizations’ budget conferences. U.S. delegates to these budget conferences are encouraged to seek out and build coalitions for consensus on cost-cutting and reinvention measures with other like-minded member nations for improved leverage. They are instructed to vote against or abstain from voting on program budgets if the U.S. budget targets are not met—and they have done so. Over the past year, in consonance with State’s new and more restrictive budget policy, U.S. delegates were obliged to cast negative votes on several organizational budgets—including the International Agency for Research on Cancer, the International Copper Study Group, the International Seed Testing Association, and the International Bureau of the Permanent Court of Arbitration—although other than signaling a U.S. determination to oppose unwarranted budget increases, it is not clear what impact these votes may have had. 
Nonetheless, U.S. delegates succeeded in rolling back some other proposed budget increases through consensus actions with other member states. Although not specifically related to assessments, State officials said they are also employing a more restrictive policy on sending delegates to the organizations’ meetings. This should enable them to reduce travel costs for U.S. government delegates attending the organizations’ meetings. Usually, State seeks to cover such meetings with staff who are assigned to local embassy posts or funds a single designated representative from the department or lead agency (which may fund travel for additional representatives out of its own budget). According to data provided by State’s Office of International Conferences, as of March 1996, it had spent about $166,000 for staff travel to 15 of the 27 small organizations’ functions during the preceding 18 months; it did not fund any travel to 10 of the organizations’ conferences during this period. State also accredits but does not provide any funding for private sector participants. While State has authority to accept gifts under certain circumstances, it does not accept contributions from private sources to pay for assessed dues to international organizations. The Foreign Affairs Manual prohibits it from accepting gifts from any outside source that could create an appearance of conflict of interest between the donor and the performance of State’s responsibilities or might otherwise cause people to believe that accepting officials would lose objectivity or be influenced in their decision-making because of the donation. State has interpreted this guidance as precluding it from accepting contributions for assessed dues to international organizations from private sources. A State official said that while State serves industry interests to some extent, especially in its efforts to increase U.S.
exports, it must do so in an objective manner, regardless of whether the donor has a stake in the outcome of any State action. Another State official told us that using gift contributions to fund such ongoing operational activities puts at risk State's long-range ability to plan and carry out promised actions. Officials from other U.S. government agencies dealing with these international organizations agreed with State's position. Nevertheless, State and other lead U.S. agencies have made some efforts, with mixed results, to get private and nongovernmental organizations to contribute directly to these organizations. For example, they have attempted to open or expand membership, on a nonvoting basis, to private sector participants. Some organizations (notably those engaged in conservation efforts, such as the World Conservation Union and the International Center for the Study, Preservation, and Restoration of Cultural Property) currently receive a significant portion of their budgetary funds from associate memberships, revenue-producing activities, donations, and various sources other than assessed member state contributions. IGC, ICAC, and ISTA are all considering allowing industry organizations to be nonvoting members in an effort to raise additional revenue. However, there is opposition to these proposals in all three organizations. Most government members of ICAC oppose this idea, according to Department of Agriculture officials, because they fear industry representatives will then want a say in how the organization is run. Also, some IGC members have expressed concern over how the integrity of the organization would be maintained. They fear that IGC's work would no longer be unbiased if industry representatives were included in all meetings.
Another way in which State and the other agencies have sought to increase the organizations' budgetary resources through private participation is to encourage interested private groups to contribute to voluntary programs or subscribe to publications or events that some of the organizations run. For example, U.S. industry and environmental groups have occasionally made small donations toward ITTO voluntary projects in which they were interested. However, there does not seem to be much interest among these groups in expanding their contributions. Nonetheless, agency officials said that some organizations have had good success in raising revenue from projects or services, securing free office space and logistic support, and generating other extra-budgetary resources that have had the effect of reducing dues assessments to member countries. Other organizations, including ICSG, also do studies with private sector participation. Private participation also comes in nonfinancial forms. Representatives from industry and academia belong to the working groups and technical committees that do much of the work of ISTA and WRA, and they provide advice and assistance in a number of other organizations. Private sector participation in these international organizations is usually uncompensated and conducted through the national delegation; the industry or trade associations bear their representatives' salaries and travel costs. For some organizations, including IOE and WRA, non-U.S. government officials serve on the U.S. delegation as official members. For other organizations, industry officials attend as delegation observers, as in IOE, ICSG, and ILZSG, or can present their own positions as industry representatives, as in IOVW and ISTA. Private sector representatives also help formulate U.S. delegation positions for international organizations. Industry representatives belong to interagency coordinating groups for many organizations.
Industry and nongovernmental organizations also provide experts who serve other organizations where no formal coordinating group exists, such as CCC, ISTA, and ITTO. Department of Agriculture officials stated that there is room for more industry participation in IOE at the national level, but not in the IOE itself. State generally agreed with our report and our observations about the value of continued membership in certain organizations, but said that it had evaluated the need for continued U.S. participation in all of the international organizations as part of a continuous review process that began in May 1995. State added that its prioritization was based on this review process, which identified some organizations for withdrawal and others for continued membership. Our draft report acknowledged the review process that State initiated in May 1995 and fully recognized State's efforts in setting and refining its priorities for these international organizations. However, at the time of our review, State had not formally documented the results of its review process, and its first report was not submitted to the Congress until after our draft report had been provided to the Department. Moreover, neither the documentation that State provided to us during the course of our review nor its December 1996 report to the Congress fully explained the rationale for the judgments that were made. Our draft report took no position on either the level of resources that State needs to make contributions to the organizations discussed in this report or which organizations the United States may wish to withdraw from.
However, given the likely decline in discretionary spending in the federal budget and the various proposals for reductions in State's budget, our draft report contained proposed recommendations that the Secretary of State (1) specifically and systematically apply the criteria announced in May 1996 for retaining membership in international organizations to the organizations discussed in this report; (2) from this process, establish priority groupings or a priority ranking for retaining membership; and (3) report this information to the Congress along with State's annual budget justifications. While we believe that our proposed recommendations continue to have merit, we also believe that State's December 1996 report to the Congress began to respond to our concerns about the need to prioritize the funding of international organizations. Because State's report indicated that "a rigorous assessment of U.S. participation in international organizations must be an ongoing process," we are not making any recommendations at this time. Nonetheless, we believe that the process State began in May 1995 that culminated in the December 1996 report should continue. State's comments are reprinted in appendix II. State also suggested some technical corrections, which we have incorporated into the report as appropriate. We conducted our review in Washington, D.C., primarily at the Department of State, in the Bureaus of International Organization Affairs and Economic and Business Affairs, and in other State bureaus and offices. We interviewed State officials responsible for budget and program administration and reviewed policy documents, manuals, budget and financial documents, correspondence, assessment data, and background data on the organizations.
We also held discussions with and obtained pertinent information from officials of other affected government agencies, including the Departments of Agriculture, Commerce, Health and Human Services, the Interior, Transportation, and the Treasury (including the U.S. Customs Service); the Office of Management and Budget; the National Institutes of Health (including the Cancer and Environmental Health Sciences Institutes); the Smithsonian Institution; the Office of the U.S. Trade Representative; the President's Council of Economic Advisers and Advisory Council on Historic Preservation; the Congressional Budget Office; the Congressional Research Service; the Secretary of the Senate; the Clerk of the House; and U.S. embassies in London, England; Brussels, Belgium; and Kuala Lumpur, Malaysia, to discuss organizations headquartered in those capitals. To determine how State assesses whether government membership in the 27 organizations continues to serve U.S. interests, we requested documentation that would identify and compare the specific objectives that the government sought to achieve in each of the organizations with the results or benefits derived. State provided us with copies of its budget justifications and supporting data, but these documents did not provide clear statements of U.S. goals or program strategies for each of the individual organizations. State officials said that although State had coordinated an interagency review of all international organizations in 1995, it did not formally document the results of this effort. Therefore, they said they could not show us how they made the determinations that continued government membership in the small international organizations served U.S. interests. Nonetheless, when State provided us a copy of its December 1996 report to the Congress along with its comments on a draft of this report, we evaluated that report to determine whether it clearly stated the U.S.
goals and program strategies for each of the organizations. We took State's prioritization into account in finalizing this report. To examine State's efforts to keep the government's assessed contribution costs low, we studied the roles and responsibilities of key officials at State and other affected federal agencies, State's budget policies and instructions to delegates, and reports of meetings, and we interviewed cognizant agency officials. In seeking to determine which organizations executive branch officials believe are more justified than others for continued government membership and participation, we relied primarily on the views of those government officials who had principal program responsibility for or contact with the organizations. While these officials were generally supportive of the organizations, we also solicited the views of independent experts and of some who may have opposed continued participation. We discussed these issues with policy-oriented institutions in Washington, D.C., including the Cato Institute, the Heritage Foundation, and the National Policy Forum, and with the U.N. Association of the United States of America in New York and Washington. We also reviewed congressional documents and spoke with staff members of House and Senate committees and offices to determine congressional interest in, concerns about, and statutory provisions that apply to U.S. participation in these organizations. Since the review was aimed at executive branch management of U.S. membership interests, we generally did not contact the organizations directly, except in a few instances to obtain clarifying information. They included the Bureau of International Expositions, the International Natural Rubber Organization, the International Rubber Study Group, the World Conservation Union, the International Cotton Advisory Committee, the International Copper Study Group, and the International Lead and Zinc Study Group.
For the same reason, we did not interview officials of other participating member states or interested private sector groups. We performed our review between January and December 1996 in accordance with generally accepted government auditing standards. We did not independently verify or review the organizations' budgets. Appendix I provides supplemental background and assessment data on each of the individual organizations. The State Department's comments on this report are shown in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations, the Senate and House Budget Committees, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight; the Secretaries of Agriculture, Commerce, Health and Human Services, the Interior, State, and the Treasury; the Permanent Representative of the United States of America to the United Nations; the Administrator of the U.S. Agency for International Development; the Directors of the U.S. Information Agency and the Office of Management and Budget; the U.S. Trade Representative; the Chairmen of the Council of Economic Advisers and the Advisory Council on Historic Preservation; and the Secretary of the Smithsonian Institution. Copies will be made available to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report were LeRoy W. Richardson, Rolf A. Nilsson, and Edward D. Kennedy. This appendix provides supplemental data on the 25 international organizations covered in this study that received funds from the Department of State in 1995.
The data were compiled from various sources, including State budget documents, reports submitted by the individual organizations, and interviews with cognizant agency officials, and include a brief discussion of significant issues that we observed in the course of our study. We did not prepare summary data sheets on the Pan American Railway Congress Association (PARCA) and the World Tourism Organization (WTO) because State had notified them as of December 1995 that the United States would not continue its membership in them.

Bureau of International Expositions (BIE)

Purpose: To provide for orderly planning of international expositions by establishing intervals between different types of expositions, reviewing themes, and setting rules and requirements; and to give U.S. cities priority consideration when bidding for BIE-sanctioned events.

U.S. share (percent)

Authorization: Convention of International Expositions, ratified by the Senate on April 30, 1968. The United States began participation in 1968.

U.S. agencies and stakeholders: Bureau of International Organization Affairs, Department of State; U.S. Information Agency; Department of Commerce; host U.S. cities and chambers of commerce/major industrial exhibitors.

Benefits: Helps to ensure that there will be no conflicts with, and promotes increased foreign participation in, U.S.-held expositions. BIE membership also provides access to decisions on where events will be held, as well as reductions in tariffs and various price concessions that defray the cost of membership.

Withdrawal provisions: One year after date of receipt of withdrawal notification.

Comments: While the U.S. contribution is modest, the United States pays the highest rate (8.9 percent) of any member nation (followed by Japan and Germany at 8.1 percent and four others at 4.5 percent). The assessment rate is based in part on the U.N. scale of assessments and economic production. U.S. membership in BIE lacks strong support in some quarters but can be justified if the United States officially supports and participates in world fairs. A joint resolution passed by the Congress in December 1995 urged the United States to fully participate in Expo '98 in Lisbon, Portugal, and encouraged private sector support for this undertaking.

Customs Cooperation Council (CCC)

Purpose: To obtain the highest possible degree of uniformity and harmony in and between the customs systems of its members; to prepare draft conventions and amendments; and to ensure uniform interpretation and application of the CCC convention, settle disputes, circulate information, and provide advice to governments.

U.S. share (percent)

Authorization: The United States acceded to the convention creating CCC on November 5, 1970, which was also the initial date of U.S. participation (treaty).

U.S. agencies and stakeholders: Bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; Customs Bureau, Department of the Treasury; Department of Commerce; and the U.S. Trade Representative.

Benefits: Harmonization and simplification of customs procedures serve U.S. business interests by contributing to the creation of a stable and predictable international trading environment for U.S. exporters and importers. This facilitates commerce while enhancing customs enforcement, particularly in intellectual property rights, textile transshipment, and drug smuggling.

Withdrawal provisions: Withdrawal takes effect 1 year after the receipt by the Belgian Ministry of Foreign Affairs of the notification of withdrawal. The member shall pay its full annual contribution for the financial year during which its notice of withdrawal becomes effective.

Comments: CCC is responsible for technical work related to several World Trade Organization agreements. It harmonizes member states' customs systems and provides training and assistance on a variety of customs enforcement issues. If the United States did not participate, it would lose these benefits, adversely affecting U.S. importers and exporters. Customs sees no viable alternative to membership in CCC.
Hague Conference on Private International Law (HCOPIL)

Purpose: To facilitate private international legal transactions and relationships, especially in the areas of family law, trusts and estates, and sales, through law unification by multilateral treaties.

U.S. share (percent)

Authorization: Statute of the Hague Conference on Private International Law (1951), which entered into force for the United States in 1964, when U.S. participation began.

U.S. agencies and stakeholders: Bureau of International Organization Affairs and Office of the Legal Adviser, Department of State; Office of Foreign Litigation, Department of Justice; the American Bar Association; the National Conference of Commissioners on Uniform State Laws; the American Law Institute; and other national legal organizations.

Benefits: More predictable application of law to legal transactions and relationships that span international borders, resulting in fewer disputes and easier resolution of those that arise, and an improved business climate. HCOPIL facilitates service of process abroad, eases intercountry adoption procedures, and lowers insurance rates, among other things.

Withdrawal provisions: At the expiration of the budget year ending June 30, provided that notification of intent to withdraw has been received at least 6 months before the end of that budget year.

Comments: None.

Inter-American Indian Institute

Purpose: To serve as a forum for developing information for member states to use in planning for the economic, social, and cultural advancement of Indians.

U.S. share (percent)

Authorization: November 1940 convention providing for creation of the Inter-American Institute. The United States has been a member since 1941.

U.S. agencies and stakeholders: Bureau of Inter-American Affairs, Department of State; Bureau of Indian Affairs, Department of the Interior; and tribal councils.

Benefits: Provides a policy forum and access to informational resources to address priority issues of concern for Native Americans and their governments. The Institute has a substantial research library dedicated to indigenous issues.

Withdrawal provisions: One year notification required for withdrawal.

Comments: The Institute has experienced management problems in the past, prompting the State Department to acknowledge that it was poorly managed. However, it is currently undergoing a major reform effort that the United States has sought and encouraged. Consequently, the State Department is taking a "watch-and-wait" approach toward continued U.S. funding and participation. The U.S. assessment share (44.1 percent) exceeds that of Mexico (30.3 percent) and is at least 10 times that of every other participant. Canada is not a member of the Institute.

International Agency for Research on Cancer (IARC)

Purpose: To provide a scientific basis for adoption of effective measures to prevent human cancer by identifying cancer-causing agents; assembling data on cancer cases and environmental factors from around the world; and analyzing and disseminating the data.

U.S. share (percent)

Authorization: Public Law 92-484, approved October 14, 1972. The United States was one of the five original participating members and has remained a member since 1965.

U.S. agencies and stakeholders: Bureau of International Organization Affairs, Department of State; the National Cancer Institute and the National Institute of Environmental Health Sciences of the National Institutes of Health, Department of Health and Human Services; the American Cancer Society; numerous cancer research agencies; and the general public.

Benefits: Provides the ability to draw upon cancer research materials and resources from all over the globe, including areas usually inaccessible to U.S. officials, and brings together global experience on specific cancers and their relation to causes. The United States has separately provided long-standing support for IARC research evaluating potentially carcinogenic substances in society and the workplace.

Withdrawal provisions: Withdrawal effective 6 months after receipt of notification by the Director-General of the World Health Organization (WHO).

Comments: Enjoys strong U.S. agency and congressional support.
A narrow functional area (public affairs/literature dissemination) of possible overlap with WHO is currently being addressed for possible consolidation. IARC has a relatively small membership (16), which puts budget pressure on the organization, but it seeks to encourage increased membership through lower introductory charges. The United States, along with the United Kingdom, opposed the 6.7 percent biennial budget increase adopted in April 1995.

International Bureau for the Publication of Customs Tariffs

Purpose: To translate and publish the customs tariffs of member governments and to disseminate this information to the members.

U.S. share (percent)

Authorization: Convention dated July 5, 1890 (26 Stat. 1518, TS 384). The U.S.-assessed share shall not exceed 6 percent per Public Law 90-569.

U.S. agencies and stakeholders: Tariff translations are provided to the Department of Commerce; the Customs Bureau, Department of the Treasury; and the U.S. Trade Representative, as well as to private importers and exporters (administered by the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State).

Benefits: The U.S. government and U.S. businesses benefit from having full information on foreign customs rates, regulations, and concessions obtained in negotiations available in English. The International Bureau's translations provide a ready source of basic information needed for responding to questions from businessmen, particularly in connection with U.S. export promotion programs, and for verifying foreign concessions obtained in negotiations.

Withdrawal provisions: Per the convention, article 15, notice shall be given to the Belgian government.

Comments: This is the only international organization that translates the individual country tariff schedules into English. It is therefore important, primarily to U.S. importers and exporters, that the U.S. government remain in this international organization (membership is available only to governments). WTO may at some time in the future provide this information, but the International Bureau is the only organization that does so at present.

Permanent Court of Arbitration

Purpose: To provide the administrative framework to facilitate the arbitration of international disputes and maintain a worldwide registry of jurists and lawyers for selection to serve as needed on arbitration tribunals.

U.S. share (percent)

Authorization: Convention for the Pacific Settlement of International Disputes, ratified by the Senate, April 2, 1908. The United States has been a member of the Permanent Court since 1899.

U.S. agencies and stakeholders: Bureau of International Organization Affairs and the Office of the Legal Adviser, Department of State.

Benefits: Provides an expert and cost-effective means to settle international disputes. The United States uses its facilities, as it did to organize the Iran-U.S. Claims Tribunal and in recent years to arbitrate a Heathrow Airport user fee dispute with Great Britain.

Withdrawal provisions: One year following receipt of notification to withdraw.

Comments: None.

International Bureau of Weights and Measures

Purpose: To cooperate with national scientific laboratories to ensure the international standardization of basic metric and nonmetric units of measure throughout the world. These standards have an important bearing on the exchange of goods and knowledge between countries.

U.S. share (percent)

Authorization: The United States has been a participant since a convention creating an International Office of Weights and Measures was signed in May 1875.

U.S. agencies and stakeholders: Bureau of International Organization Affairs, Department of State; National Institute of Standards and Technology (NIST), Department of Commerce; and physics and engineering academicians.

Benefits: Provides access to a stable, accurate, and universally accepted system of measurement; promotes free trade; maintains and coordinates the world's time scale; and plays an influential role in the development of industrial technology and international comparisons.

Withdrawal provisions: One year after receipt of notification of intent to withdraw. A withdrawing member forfeits the right of any joint ownership in international prototypes.
The Bureau has a strong scientific orientation. It has tried unsuccessfully over the years to branch into commercial applications, which is what gave rise to the establishment of the International Organization for Legal Metrology. Efforts to merge areas of common activity are being explored at the instigation of the French government. NIST, the designated U.S. national laboratory and a prime user of the Bureau's services, provides calibration services for industry users on a cost-recoverable basis.

International Center for the Study, Preservation, and Restoration of Cultural Property

Purpose: To serve as a research and training center and as a clearinghouse for the exchange of information among specialists from around the world to initiate, develop, promote, and facilitate conditions for the conservation and restoration of cultural property.

U.S. share (percent)

Authorization: Various public laws, January 1971.

U.S. agencies and stakeholders: Bureau of International Organization Affairs, Department of State; the Smithsonian Institution; the President's Advisory Council on Historic Preservation; the National Trust for Historic Preservation; and similar organizations, museums, and universities.

Benefits: Assists in important restorations and preservations, including the U.S. Capitol building and the Spanish missions of the American Southwest. Provides mid-career professionals and students access to highly specialized instructional facilities and services not available elsewhere. Also, the major stakeholders value what they consider to be unparalleled connections made through the organization.

Withdrawal provisions: One year following notification, provided the member's contribution payments are current.

Comments: The U.S. contribution rate (25 percent) under the International Center's scale of assessments is based on 1 percent of the United Nations Educational, Scientific and Cultural Organization (UNESCO) appropriation, not to exceed 25 percent (which the United States pays). This rate is more than double that of the next highest participating country, Japan (12.38 percent). The United States successfully rolled back proposed budget increases for the 1996-97 biennium when the U.S. delegation joined other member states in approving the budget by consensus.

International Copper Study Group (ICSG)

Purpose: To foster market transparency by collecting and publishing reliable data on copper production, consumption, and trade without intervening in markets. ICSG also provides a forum for governmental consultations and supports special studies of market trends, new technologies, and government policies affecting the copper industry.

U.S. share (percent)

Authorization: Public Law 103-236. The United States accepted the terms of reference of ICSG on March 15, 1990. ICSG was established on January 23, 1992.

U.S. agencies and stakeholders: The International Trade Administration, Department of Commerce; the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; and the U.S. mining industry.

Benefits: Increased market transparency enables a competitive market to avoid large fluctuations in price and promotes a better balance between supply and demand (large price fluctuations have traditionally plagued the copper market). ICSG has helped "lift the veil" of the copper industry in the former Soviet Union, which was of significant interest to U.S. industry. It aids members with effective forecasting and long-term planning.

Withdrawal provisions: A member may withdraw 60 days after written notice is given to the United Nations and the ICSG's Secretary-General.

Comments: ICSG was negotiated at U.S. urging to provide better information to prevent market instability, as happened in the 1980s. It primarily benefits the copper industry, but the data provided and the intergovernmental consultation are useful to U.S. agencies, including the Commerce and Defense Departments. ICSG has financed research on potential health problems associated with copper in drinking water. ICSG publications are available for sale to anyone, not just to member countries.
To compile and publish statistics on cotton production, trade, consumption, and prices; and to facilitate the exchange of information and the development of more open lines of communication among scientific workers to better understand research problems. U.S. share (percent) Authority is 70 Stat. 890, 1956, 5 U.S.C. 170j. (P.L. 94-350, July 12, 1976.) Initial date of participation was 1939. The Foreign Agricultural Service, Department of Agriculture; the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; and the U.S. textile industry. The International Cotton Advisory Committee (ICAC) provides cotton price analyses and projections to the international cotton community, something the Department of Agriculture is prohibited from doing. The U.S. cotton industry supports continued membership in ICAC and regularly attends ICAC plenary meetings. Member may withdraw by providing written notification before a new fiscal year (July 1). Although State announced the U.S. intention in December 1995 to withdraw from ICAC effective on June 30, 1996, the Federal Agriculture Improvement and Reform Act of 1996 (P.L. 104-127) required the President to ensure that the U.S. government participate in ICAC and State to continue to pay the assessed contribution. As a result, State rescinded its letter of intent to withdraw from the organization, and the United States will remain in ICAC. ICAC publications are available for sale to anyone, not just to member countries. To promote expansion of international trade in grains and secure the freest possible flow of this trade, and to provide a forum for the exchange of information and discussion of members’ concerns regarding trade in grains. Through the Food Aid Committee, donors pledge food aid in the form of grain, which some members buy from the United States. U.S. share (percent) The current authority for U.S. 
participation is Senate advice and consent to the International Wheat Agreement of 1986, on November 17, 1987. Initial U.S. participation was in 1942. The Foreign Agricultural Service, Department of Agriculture; and U.S. grain growers; the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; and the U.S. Agency for International Development. The United States, as the world’s largest exporter of grains, benefits from the expansion of international trade and from securing the freest possible flow of this trade. The United States also benefits from having the most reliable international data on the grains trade, including data provided by other countries, which would not otherwise be available. There are no specific provisions for withdrawal. Not acceding to the next convention, which will take effect in 1998, would be a way of withdrawing. The International Grains Council (IGC) is considering soliciting more private sector participation to relieve budget problems, but some countries fear for the integrity of the organization if industry interests are included. The U.S. government’s assessed share increased to 23.6 percent in 1996 because a new convention was negotiated that includes all grains and uses more recent data for calculating assessments. IGC publications are sold to anyone. To establish a close and permanent association with the hydrographic offices of member states, with a view to rendering navigation easier and safer throughout the world. U.S. share (percent) International hydrographic convention, approved by the Senate, May 13, 1968 (treaty). The United States has been a participant since 1922. Bureau of International Organization Affairs, Department of State; National Oceanographic and Atmospheric Administration, Department of Commerce; the U.S. Coast Guard; U.S. Geological Survey; National Imagery and Mapping Agency; (formerly Defense Mapping Agency); the U.S. 
petroleum industry; and oceanographic and academic institutions. Through the International Hydrographic Organization (IHO), the United States obtains high-quality hydrographic survey and chart data that is essential for safe navigation at sea, promotes trade, and reduces the threat of environmental damage from ship groundings. IHO’s President is a retired U.S. admiral. One year following the date of notification. None. To unify or harmonize private law in different countries, thereby facilitating international commerce and removing obstacles created by unnecessary conflicts in law and legal systems; and providing training in the adoption and use of approved international conventions by less developed countries. Initially established in 1926 under the League of Nations; present charter in effect since 1940. The United States has been a member and active participant since 1964. Bureau of International Organization Affairs and Office of the Legal Adviser, Department of State; Office of Foreign Litigation, Department of Justice; the American Bar Association; National Conference on Commissioners on Uniform State Laws; National Law Center for Inter-American Free Trade; American Law Institute; and other national legal organizations. Provides an important forum to ensure that U.S. commercial law and other legal interests are a key source for international work on law unification, and that U.S. commercial practices are reflected in and protected under treaties and other documents produced in this process. Participation is for a period of 6 years. Intent to withdraw must be submitted in writing at least 1 year preceding the end of the current 6-year period (which expires in 1999). Two conventions prepared by the Institute on international commercial law reflecting modern U.S. practice are expected to be submitted to the Senate in 1997. The Institute is also drafting a multilateral convention expected to benefit the U.S. aircraft and other industries. 
To improve transparency in the lead and zinc world markets by producing and disseminating a wide variety of current statistics; to provide for an intergovernmental forum for consultations on international trade in lead and zinc; and to hold discussions of market trends, new technologies, government policies, and environmental issues. U.S. share (percent) Authority is 22 U.S.C. 2672, sec. 5 of Public Law 885, 84th Congress. U.S. participation started in 1960. International Trade Administration, Department of Commerce; the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; and the U.S. mining industry. It produces a wide variety of statistics, assisting in effective forecasting and long-range planning. These statistics are important to the operation of a competitive market, which should ensure the lowest possible prices to the U.S. consumer. Annual meetings provide a forum for industry/government contacts and discussion of concerns without political agendas or market intervention measures. A member may withdraw at any time by written notification to the Secretary-General. The withdrawal takes effect on the date specified in the notification. Membership in this organization appears to be more important to industry than to the U.S. government. The State Department considers this organization to be a model for similar organizations for other commodities. Publications are available to anyone. It also reports on environmental rules concerning lead and other environmental issues. To manage an international natural rubber agreement designed to stabilize price fluctuations and rubber supplies through maintenance of a buffer stock in a historically volatile market. Successive international rubber agreements, first entered into force in 1980. Bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; Department of the Treasury; and Department of Commerce; the Office of the U.S. 
Trade Representative; and domestic tire, rubber, steel, and labor industries. With the United States as the world’s largest consumer of rubber products and rubber production being concentrated in three Southeast Asian countries (Indonesia, Malaysia, and Thailand), certain assurances are sought through an international agreement that attempts to stabilize natural rubber prices without disturbing long-term market trends and ensure expanded future supplies of natural rubber at reasonable prices. Agreement of 1987 has expired and a new 4-year extension has been negotiated. Under terms of old (and new) agreements, withdrawal permitted upon 1 year’s written notice. This is the only commodity agreement with economic provisions in which the United States currently participates. Extension of the agreement enjoys strong industry and congressional support, but it has not been shown to reduce long-term price variability or benefit U.S. consumers. Keeping a substantial sum (currently valued at $80 million) of U.S. funds with the International Natural Rubber Organization (INRO) or under foreign bank management (i.e., no direct U.S. control of the funds) to support a buffer stock operation under the agreement continues to be an unresolved issue. To recommend adoption of uniform international legal standards and requirements and provide an information exchange for scientific and measurement instruments that are used in commerce and industry. U.S. share (percent) Convention on legal metrology, as amended. The United States first participated in 1972. Bureau of International Organization Affairs, Department of State; NIST, Department of Commerce; and U.S. measuring instrument manufacturers. Uniform standards for measuring products in trade, public health, safety, and many other industries are considered essential for their public acceptance and confidence. Also vital for the protection of the import/export industries. 
International conferences/conventions are required once every 6 years but recently have been held every 4 years. Intention to withdraw must be made known at least 6 months in advance of expiration of the current convention/budget adoption (November 2000). A merger of operations with the International Bureau of Weights and Measures has been proposed by the French government. A working group is currently studying areas of common effort/interest with the objective of reducing costs and sharpening global focus. To collect and disseminate to government veterinary services facts and documents concerning the course and cure of animal diseases; to examine international disease control agreements and assist in their enforcement; and to promote disease research. U.S. share (percent) Senate approval, and presidential signature on June 9, 1975, of the original international agreement. Initial U.S. participation was in May 1976. Animal and Plant Health Inspection Service, Department of Agriculture; the bureau of International Organization Affairs, Department of State; U.S. Centers for Disease Control and Prevention; veterinary medicine; and the meat and poultry industries. The International Office of Epizootics (IOE) is a valuable channel for dissemination of U.S. research findings and helps apprise the United States of overseas research and animal infection developments. U.S. involvement allows the United States to have a prominent voice in developing international trade standards and regulations and conform them to U.S. standards. These standards help make trade without fear possible in this area. As the only international animal health forum in the world, IOE will set animal trade standards for the WTO. It also serves as an early warning system for animal disease outbreaks. Written notice given 1 year in advance of intention to withdraw. If the United States were to withdraw, standards would be set without U.S. participation and in the future might not conform to U.S. 
standards. This could greatly affect public health and industries that import and export U.S. animal livestock and animal products. To study wine and its production methods, packaging and labeling standards, and associated marketing practices with the object of ensuring product integrity and harmonizing regulatory requirements in the international wine trade. U.S. share (percent) Public Law 98-545 of October 25, 1984 (98 Stat., 2752). The United States began its participation in 1980. Its request for full membership was accepted on July 24, 1984. The Bureau of Alcohol, Tobacco and Firearms, Department of the Treasury; the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; U.S. vintners; and the California Winegrowers Association. The International Office of the Vine and Wine (IOVW) facilitates the global dissemination of information on the U.S. wine industry, thereby helping promote U.S. wine, brandy, and viticultural exports. It also aids in promoting product integrity, thereby helping to protect public health worldwide. Finally, intergovernmental channels of communication have helped to expedite resolution of international incidents involving trade impediments, contamination, and marketing fraud. Any member may withdraw after giving 6 months’ notice. If the United States were to withdraw, both U.S. industry and consumer protection interests would be left unrepresented. IOVW’s deliberations have significant trade consequences. Differences in acceptable production techniques (which can hinder or promote market access), primarily between European and U.S. wine makers; sanitary practices; labeling; and the presence of chemical products are the subject of IOVW standards. IOVW is petitioning for WTO recognition, which could make the IOVW’s resolutions binding (they are now optional) and backed by the WTO’s enforcement powers. 
To promote the understanding of long-term trends in future rubber (natural and synthetic) production and consumption, provide accurate statistics, and promote research. It also serves as a forum for consultation among principal producing and consuming countries. U.S. share (percent) Authority is 22 U.S.C. 2672, sec. 5 of Public Law 885, 84th Congress. Initial date of U.S. participation was 1944. Bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; International Trade Administration, Department of Commerce; and the U.S. rubber industry. Quick dissemination of technical information on supply and demand promotes U.S. competitiveness. Information on market trends is important to the United States as the world’s largest rubber consumer. Also, the U.S. contribution leverages contributions from other members. The result is greater market transparency and efficiency, directly benefiting U.S. industry and consumers. The International Rubber Study Group (IRSG) also provides information on worldwide investment opportunities/new technologies. Withdrawal within the first 6 months of the financial year, which starts on July 1, becomes effective at year’s end (effectively 6 months’ notice). If withdrawal occurs within the second half, dues for the following year must still be paid (18 months’ notice). Membership in this organization appears more important to industry than to the U.S. government since it is industry that primarily uses the statistics provided by IRSG for long-range planning and projections. However, the U.S. government does use the information provided for planning and intergovernmental consultation purposes. Publications are available for sale to anyone, not just member countries. To develop official rules for testing seed sold in international trade, to accredit laboratories that issue international seed lot quality certificates, and to promote seed research and technology. U.S. 
share (percent) Basis for participation is 70 Stat. 890, 1956, 5 U.S.C.170j. Initial date of U.S. participation was 1924. Agricultural Marketing Service, Department of Agriculture; the bureaus of International Organization Affairs and Economic and Business Affairs, Department of State; U.S. agrobusiness; and U.S. land grant colleges. Membership in the International Seed Testing Association (ISTA) ensures that U.S. seed exporters have access to, and are competitive in, world markets through the use of approved uniform testing methods. Membership allows the United States to maintain its influence over the establishment of standards. It also ensures that high-quality imported seed is available to U.S. consumers and that U.S. testing facilities are accepted worldwide as meeting international standards. A government may withdraw by sending written notice to ISTA, but it will be responsible for its dues for that entire calendar year unless withdrawing because of a change in the ISTA constitution. Then the withdrawing government is responsible for its dues up to the change. Membership allows the United States to take part in the process of developing official procedures used to test seed sold in international trade. Withdrawal would deny the U.S. government the opportunity to block proposed international testing rules that could function as trade barriers to U.S. seed. ISTA generates about 40 percent of its operating funds from the sale of goods and services it produces. If ISTA allows additional labs to join as nonvoting members, as was proposed, it could result in lower U.S.-assessed dues. To increase transparency of the tropical timber market, promote sustainable management of tropical production forests, and promote research and development aimed at improving the sustainable management of tropical forests. U.S. share (percent) International Tropical Timber Agreement of 1983, signed by the United States on April 26, 1985. Office of the U.S. 
Trade Representative; bureaus of Economic and Business Affairs, International Organization Affairs, and Oceans and International Environmental and Scientific Affairs, Department of State; Forest Service and Foreign Agricultural Service, Department of Agriculture; and International Trade Administration, Department of Commerce. Improves the availability of market information for U.S. importers of tropical timber for furniture, paneling, and other wood products. Also, International Tropical Timber Organization (ITTO) identification of markets for lesser-known species promotes better utilization of resources and provides consumers with greater variety, which helps keep consumer costs down. The United States participates in ITTO’s voluntary program, but its contribution is expected to decrease to $200,000 from about $1 million annually. A member may withdraw 90 days after written notice is received by the United Nations (notice must also be given simultaneously to the ITTO council). U.S. officials believe issues discussed in ITTO, such as certification and labeling of wood products, apply to wood products from all types of forests including temperate forests. They believe that decisions on these issues could have a significant impact upon the global competitiveness of the U.S. timber industry. The United States also has a strong interest in promoting the sustainable management of tropical forests through ITTO because of the relationship of tropical forests to global environmental problems. The leader in influencing, encouraging, and assisting governments and nongovernmental organizations throughout the world to conserve the integrity and diversity of nature and to ensure that any use of natural resources is equitable and ecologically sustainable. (est.) U.S. share (percent) State Department Authorization Act for Fiscal Years 1990 and 1991 (P.L. 101-246). 
Bureaus of International Organization Affairs and Oceans and International Environmental and Scientific Affairs, Department of State; Fish and Wildlife and National Park Service, Department of the Interior; U.S. Forest Service, Department of Agriculture; National Oceanic and Atmospheric Administration, Department of Commerce; Environmental Protection Agency; the U.S. Agency for International Development; and various environmental organizations. Supports U.S. goals for the maintenance of a healthy, natural global environment and conservation of biological diversity. Supports international conservation conventions of importance to the United States. Provides a unique forum for the coordination of governmental and nongovernmental conservation efforts regarding the use of natural resources and leveraged assistance to international networks of volunteer scientists and specialists. Any time, upon receipt of written notification. Modest assessed contribution is highly leveraged since the International Union for the Conservation of Nature (IUCN) receives about 90 percent of its funding from various contribution sources other than assessed membership dues. Comparatively small (nine) “state” membership provides about 40 percent of IUCN’s assessed budget. In addition to its assessed contribution, the United States provided a voluntary contribution in the amount of $1 million in fiscal year 1995 to support programs of particular interest. To be the focal point for worldwide parliamentary dialogue and to work closely with the United Nations for peace and cooperation among peoples and the firm establishment of representative institutions. The Interparliamentary Union (IPU) is composed of the world’s parliamentary bodies. U.S. share (percent) Various public laws. United States has been a member since the first meeting in 1889. 
Bureau of International Organization Affairs, Department of State; the Clerk of the House of Representatives; and the Secretary of the Senate, Parliamentary Services. Promotes personal contact and dialogue between members of the world’s parliamentary bodies—especially emerging democracies—in a formal, secure, but neutral structure to discuss legislative functions and relations and universal values, peace, and cooperation. No withdrawal provision cited. U.S. participation in IPU is within the purview of the Congress and not a matter for executive branch decision-making. Responsibility shifts each Congress and now rests with the House of Representatives (administered by the Clerk). IPU has sought to raise the U.S. assessment from 12.58 percent to 15 percent, or above the statutory limitation. No Senator has attended any IPU meeting since 1989. No Member of the House has attended any IPU meeting since March 1994. IPU funding was temporarily suspended in December 1995—but subsequently approved—pending IPU reversal of the assessment increase and adjustment of its meeting schedule to better accommodate U.S. participation (meetings are normally scheduled at times when the Congress is in session). To analyze road and road transport policy issues as an aid to national decisionmakers, to encourage research and exchange of information on research results and best practices, to disseminate findings, and to address the concerns of all members. U.S. share (percent) Authority is sec. 164, Public Law 102-138, approved October 28, 1991. The United States regained membership in the World Road Association (WRA) (it lapsed during World War II) in November 1989. Original justification of 22 U.S.C. sec. 269 (44 stat. 754, June 18, 1926) is still valid. Federal Highway Administration, Department of Transportation, and U.S. construction companies. 
WRA, as the only intergovernmental forum for road issues, has provided ready access to innovations developed abroad that can be applied in the United States. Significant savings accrue to the United States because other countries share their research with the U.S. government through WRA. Also, the U.S. government and industry can increase international awareness of U.S. technical expertise for the purpose of encouraging the export of U.S. goods and services, making U.S. businesses more competitive overseas. WRA’s governing commission accepts resignations based on convention provisions. The American Association of State Highway and Transportation Officials pays about one-third of the U.S.-assessed contribution, which the federal government would otherwise have to pay. The Federal Highway Administration pays for extrabudgetary projects and, beginning in fiscal year 1997, it will pay the U.S. government-assessed contribution. The following are GAO’s comments on the Department of State’s letter dated December 20, 1996. 1. We acknowledged in our draft report that State had conducted a comprehensive review in 1995 to determine whether international organizations served important U.S. interests and whether continued U.S. membership in them was warranted. However, because the results of this effort were not (1) formally documented in State’s records; (2) made available to us; or (3) reported to the Congress at the time of our review, we could not assess the completeness of State’s evaluation. 2. We agree that the basis for the decision to withdraw from certain organizations was within the range of criteria that State announced in May 1996 and, therefore, have modified our report. 3. Because we believe that State’s December 1996 report to the Congress is a step in the right direction, we are not making any recommendations at this time. (We have not reprinted attachment A, State’s December 1996 report.) 4. 
In finalizing this report, we categorized the organizations in accordance with the broad priority categories used in State’s December 1996 report to the Congress, rather than by whether the organizations served a “broad” or “narrow” interest. 5. We have clarified our report language. 6. While we do not doubt that ICCROM is a unique organization that provides valuable benefits to some U.S. agencies, some U.S. government officials have questioned whether the cost of belonging to this organization may be disproportionately high when weighed against the national interest. 7. We revised the report to reflect this information; however, we believe that there are some areas of overlap between these organizations.
|
Pursuant to a congressional request, GAO obtained information on U.S. government membership in 25 special-purpose international organizations and 2 inter-American organizations that received funding support of $10.8 million in 1995 through assessed contributions provided by the Department of State, focusing on: (1) the Department of State's efforts to assess whether U.S. government membership in these organizations continues to serve U.S. interests, including a summary description of the organizations' missions and issues that have been raised about the benefits of U.S. membership; and (2) steps that have been taken to keep the government's contribution costs low. GAO noted that: (1) in May 1995, State began a comprehensive interagency assessment of U.S. membership in all of the international organizations to which it makes assessed contributions; (2) in May 1996, after being urged by Congress to prioritize its funding requirements for international organizations, State announced the criteria that it had used in 1995 in reviewing and evaluating U.S. membership in international organizations; (3) these criteria included the extent to which the United States directly benefits from the organizations' activities, how much of the organizations' budgets are devoted to activities benefitting the United States, the scope and depth of the organizations' constituencies, and their responsiveness to management improvement efforts; (4) in December 1996, State reported to Congress its decisions concerning the allocation of funds from the Contributions to International Organizations account for fiscal years 1996 and 1997 based on an assessment and prioritization of U.S. interests in these organizations; (5) State categorized the organizations according to a priority ranking based on the importance of their mandates to the U.S. 
national interest and their cost-effectiveness; (6) none of the 27 organizations discussed in this report were in State's top priority category, 4 were in State's second priority category, and 20 were in the third priority category; (7) GAO's interviews with U.S. agency officials indicate that all of the 27 organizations appear to have missions that are broadly consistent with a U.S. interest, but there were mixed views as to the value of the benefits the United States receives from membership; (8) the key concerns raised included the cost of membership in some organizations relative to the benefit received and that some organizations primarily benefit their related industries; (9) State has attempted to keep the U.S. government's assessed contributions to the special-purpose international organizations low; (10) it has sought actual reductions in their budgets, established a systematic coordination process with U.S. agencies having lead programming responsibility, and tried to secure more private sector contributions to these organizations; and (11) however, according to State officials, private financing of membership dues for these international organizations is generally not a viable option under their existing charters or State's funding policy.
|
For the purposes of this review, we use the term “contingency construction” to describe any construction, alteration, development, conversion, or extension of any kind carried out with respect to a military installation in support of contingency operations. Different organizations and personnel within DOD would consider different categories of projects to be contingency construction, reflecting the project type or categorization that is most relevant to their function. Although DOD does not have a consistent definition for what constitutes a “contingency construction” project, officials from various DOD entities identify and describe contingency construction projects based on criteria including location, funding source, statutory authority, construction standards, and a facility’s intended use. Specifically:

Location. Contingency construction projects may be identified by their geographic location (such as a country or region) or as those occurring at contingency locations, which DOD defines as non-enduring locations outside of the United States that support and sustain operations during named and unnamed contingencies or other operations as directed by the appropriate authority and are categorized by mission life-cycle requirements as initial, temporary, or semi-permanent.

Funding Source. Contingency construction projects may generally be identified by the source of funding, such as the “overseas contingency operations” portion of the budget, which may include MILCON and O&M appropriations.

Statutory Authority. Contingency construction projects may be identified by the statutory authority used to undertake the construction project. For example, Contingency Construction Authority is a statutory authority specifically associated with contingency construction operations.

Construction Standards. 
Contingency construction projects may be identified by the construction standard used, such as those construction standards specified for contingency locations in CENTCOM guidance.

Facility’s Intended Use. The purpose of the construction—whether specifically for contingency operations or for some degree of use for contingency operations—might be considered when identifying contingency construction projects.

DOD uses various statutory authorities to carry out military construction projects, including contingency construction projects, and uses MILCON and O&M appropriations to fund the construction. The statutory authorities for military construction projects, several of which DOD has used for contingency construction in the CENTCOM area of responsibility to support contingency operations in Iraq and Afghanistan, are outlined in table 1. Appendix II provides further details on these authorities. Table 1 shows that DOD may use MILCON appropriations under five of the six statutory authorities and may use O&M appropriations under two of the six authorities. Depending on a project’s cost, DOD may use either MILCON or O&M appropriations for unspecified minor military construction. In addition to using O&M appropriations for unspecified minor military construction, DOD may also use O&M appropriations for projects under the Contingency Construction Authority. To distinguish between the two statutory authorities that may use O&M funding, for the purposes of this review we refer to O&M-funded projects undertaken using section 2805 of Title 10, U.S. Code (Unspecified Minor Military Construction authority) in support of contingency operations as “O&M-funded unspecified minor military construction projects” and to O&M-funded projects using Contingency Construction Authority as “Contingency Construction Authority projects.”

CENTCOM and its component commands have key roles and responsibilities for contingency construction within CENTCOM’s geographic area of responsibility. 
CENTCOM is one of six combatant commands that have a defined geographic area of responsibility, which is a specific region of the world where the combatant commanders plan and conduct operations. Figure 1 shows CENTCOM’s area of responsibility, which includes Iraq and Afghanistan. CENTCOM is responsible for assessing the operational environment at critical milestones to determine contingency basing requirements and designating or recommending to the Chairman of the Joint Chiefs of Staff the lead service component for managing a contingency location. CENTCOM, through the command engineer, is also responsible for coordinating with service components to develop construction project priorities and for establishing theater contingency construction standards. CENTCOM provides its plans for activities and operations in theater (e.g., an engineer support plan, a theater campaign plan, etc.) to its service component commands, such as the Army Central Command and the Air Force Central Command. Under the Joint Lessons Learned Program, CENTCOM is also responsible for providing and maintaining support for theater-specific joint and interoperability lessons learned activities.

Military Departments. The military departments develop, review, approve, and submit proposed construction projects identified by the combatant commands and service component commands in their annual budget justification materials. The lead service component command for a contingency location is to ensure that the location’s construction projects support the mission and tenants, which are driven by the plan CENTCOM provides. According to Army Central Command and Air Force Central Command officials, in developing the needed footprint for a contingency location, the service component commands identify construction projects and define level-of-construction requirements to provide the shelter and space needed to conduct planned operations. 
Once developed, according to Army Corps of Engineers officials, the lead service submits those projects to CENTCOM for review and validation. After CENTCOM and its component commands have validated a construction project, the service component command conveys project details, including the level of construction needed, to the Army Corps of Engineers for projects exceeding $1 million. The Army and Air Force have delegated approval for unspecified minor military construction projects below that level to the service component commanders and subordinate commands, including the installation commander in the case of the Air Force. Once appropriations are received, the military departments provide funds to DOD construction agents to be used for approved construction projects.

Army Corps of Engineers. The Army Corps of Engineers is the designated DOD construction agent for CENTCOM’s area of responsibility. As such, it is responsible for performing design and construction services for MILCON-funded projects and service component-requested O&M-funded projects. Additionally, it is responsible for obligating, expending, and accounting for MILCON and O&M funds for assigned projects. According to Army Corps of Engineers officials, when performing design and construction services, the functions of the construction agent include estimating the cost of construction projects in the CENTCOM area of responsibility to meet level-of-construction requirements determined by the service component commands. When the volume of construction projects exceeds the Army Corps of Engineers’ personnel capacity for managing projects, it may call upon the Air Force Civil Engineer Center to manage the design and construction for some projects in the CENTCOM area of responsibility.

Various Office of the Secretary of Defense organizations and the Joint Staff also have roles and responsibilities related to contingency construction. 
The Under Secretary of Defense for Acquisition, Technology, and Logistics exercises general oversight of the military construction program and has been delegated certain statutory authorities of the Secretary of Defense. The Office of the Assistant Secretary of Defense for Energy, Installations, and Environment is, among other things, responsible for administering the provisions of DOD Directive 4270.5, regarding military construction, including issuing implementing guidance. Additionally, it is to monitor the execution of the military construction program to ensure the most efficient, expeditious, and cost-effective accomplishment of the program by DOD construction agents. Furthermore, it is responsible for developing DOD-wide master planning policy; facilities and construction standards; and real property accountability policy for contingency basing. The Under Secretary of Defense (Comptroller) submits budget justification materials annually to Congress, identifying construction projects to be funded and their cost. For major military construction projects specified in the National Defense Authorization Act, the Comptroller also reports on the status of funds appropriated for each project, including obligations and disbursements. Additionally, the Secretary of Defense has delegated approval authority for the use of Contingency Construction Authority to the Under Secretary of Defense (Comptroller). The Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders, is responsible for assigning priority among competing requests from the combatant commands for military construction projects using certain authorities. The Chairman of the Joint Chiefs of Staff also reviews combatant command recommendations for the designation of a lead service for each semi-permanent contingency location and provides a recommendation to the Under Secretary of Defense for Acquisition, Technology, and Logistics. 
Since contingency operations began in Iraq and Afghanistan, DOD has not tracked the universe and cost of all CENTCOM contingency construction projects supporting operations there. Although DOD does not track all contingency construction projects separately from all other DOD projects in the CENTCOM area of responsibility, DOD maintains consolidated financial records of all MILCON projects and has been able to generate more specific data on contingency construction projects when requested. DOD was until recently required to track the universe and cost of O&M-funded projects supporting operations in Iraq and Afghanistan using the Contingency Construction Authority—one of two statutory authorities using O&M funding. However, senior DOD officials stated that they do not track and so were unaware of the magnitude of their use of O&M funds for projects under the other statutory authority—section 2805 of Title 10, U.S. Code—projects that we found constituted a substantial segment of overall contingency construction. According to senior DOD officials, DOD is not required to track the universe and cost of those projects. DOD has routinely used O&M funding for these projects to more quickly meet requirements because the MILCON review process can take up to 2 years. However, in some instances, DOD's use of O&M funding has posed financial, operational, and duplication risks. The department does not track MILCON-funded contingency construction projects separately from other MILCON-funded construction projects. According to senior department officials, DOD is not required to track contingency construction projects separately from all other DOD projects and any MILCON projects supporting contingency operations are managed sufficiently within the standard DOD processes used for all military construction. 
For the CENTCOM area of responsibility, the department maintains consolidated financial records on MILCON projects, whether or not those projects support contingency operations, and has been able to generate more specific data on contingency construction projects when requested. Comptroller officials also stated that the department accounts for construction costs at the level authorized and appropriated by law. Specifically, the department captures obligation and disbursement data for MILCON projects in a monthly report of budget execution data for the period that funds are available for obligation plus 5 additional years. For example, DOD’s December 2015 monthly report reflected obligations of $1.4 billion for projects funded with overseas contingency operations MILCON funds in the CENTCOM area of responsibility from fiscal year 2010 through fiscal year 2016. According to Comptroller officials, obligations and disbursements for projects prior to this period—for which accounts have been closed—are not retained in an automated system; therefore, reconstructing these data would be an intensive manual effort. A senior official in the Office of the Assistant Secretary of Defense (Energy, Installations, and Environment) stated that DOD does not expend resources to track contingency construction project expenditures at a level of detail beyond what is required by Congress and instead relies on data queries should this level of detail be required. For example, according to a Comptroller official, in 2012 DOD responded to a request from the House Appropriations Committee, Security and Investigations Subcommittee, to provide data on obligations and disbursements for military construction in Iraq and Afghanistan. DOD was able to collect the requested data through data queries of the Defense Finance and Accounting Services and DOD Comptroller databases. 
The data indicated that as of September 30, 2012, the department had obligated $4.2 billion in MILCON funding (both base and overseas contingency operations funding) for specified military construction projects, as well as $1.3 billion in O&M funding (under Contingency Construction Authority), from fiscal years 2004 through 2012. DOD was until fiscal year 2016 required to track the universe and cost of projects supporting contingency operations in Iraq and Afghanistan using the Contingency Construction Authority, one of two statutory authorities using O&M funding. According to Comptroller officials, DOD has been able to generate obligation and disbursement data for Contingency Construction Authority projects funded with O&M under the Contingency Construction Authority established by section 2808 of the National Defense Authorization Act for Fiscal Year 2004 (as amended). Specifically, DOD has maintained records for all 112 projects constructed under this authority since fiscal year 2004, when it was established. For these projects, the department has maintained a cumulative record of obligations and expenditures to fulfill the statutory requirement for reporting this information to congressional committees on a quarterly basis. As of September 2015, DOD had obligated and expended $1.4 billion in O&M funds using the Contingency Construction Authority from fiscal years 2004 through 2015. However, according to senior DOD officials, DOD is not required to track contingency construction projects funded with O&M appropriations under the other statutory authority, section 2805 of Title 10, U.S. Code, separately from all other DOD projects. Senior DOD officials stated that they were unaware of the magnitude of their use of O&M funds for unspecified minor military construction projects in the CENTCOM area of responsibility because DOD did not track the O&M-funded contingency construction projects using that authority. 
During the course of our review, we found that the Army, which programs the majority of these O&M-funded unspecified minor military construction projects in the CENTCOM area of responsibility, had not tracked or documented these projects and was unable, therefore, to provide us with a comprehensive list accounting for them. DOD officials from other organizations, including the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Office of the Under Secretary of Defense (Comptroller); CENTCOM; the Army Central Command; the Air Force Central Command; the Army Corps of Engineers; and the Air Force Civil Engineer Center also could not provide us with a comprehensive list of O&M-funded unspecified minor military construction projects in the CENTCOM area of responsibility. Office of the Assistant Secretary of Defense for Energy, Installations, and Environment; Joint Staff; and CENTCOM officials told us that accounting for these projects was a service responsibility or was otherwise left to the services. According to Army Central Command officials, a list could be developed using information from operating bases where the construction occurred; however, most of the bases in Afghanistan and Iraq have been closed and locating such information would be problematic. For example, though O&M-funded contingency construction project files for fiscal years 2009 through 2010 for construction projects in Afghanistan are located in hard copy in filing cabinets at Army Central Command headquarters at Shaw Air Force Base, South Carolina, neither U.S. Forces-Afghanistan nor Army Central Command could provide records prior to 2009. Further, according to a U.S. Forces-Afghanistan official, an effort to review, collect, and analyze historic construction project data after the fact would be too resource-intensive given the drawdown of operations in Afghanistan and the other higher priorities occupying the limited U.S. 
Forces-Afghanistan personnel available to undertake such an effort. Absent a comprehensive list of DOD’s O&M-funded unspecified minor military construction projects, we used the limited information available to identify O&M-funded unspecified minor military construction projects supporting operations in Iraq and Afghanistan, and found that these projects constituted a substantial segment of overall contingency construction. Specifically, using available U.S. Forces-Afghanistan information for fiscal years 2009 through 2012, we identified records indicating that the command had approved at least $944 million in O&M funding for 2,202 of these projects in Afghanistan alone. This use of O&M funding appears significant when compared with the $3.9 billion DOD reported as enacted for other construction projects in Afghanistan over the same period using MILCON funding. Further, the 2,202 contingency construction projects we identified in the U.S. Forces-Afghanistan data may not include all construction projects funded under section 2805 of Title 10, U.S. Code, in Afghanistan during fiscal years 2009 through 2012 because, according to Army Central Command officials, U.S. Forces-Afghanistan delegated authority to its four regional commands to approve and fund projects independently. Therefore, the $944 million in O&M funding we identified may not include construction projects independently approved at the regional command level during this period. Additionally, Army Central Command officials were not able to provide information on O&M-funded unspecified minor military construction projects in Afghanistan prior to 2009, as discussed earlier. Nor were they able to provide this information for projects in Iraq and other countries in their area of responsibility for all fiscal years where, according to Army Central Command officials, O&M-funded construction activities took place. 
During the course of our review, we shared the results of our analysis with DOD officials, who agreed that the amount of O&M funding we identified constituted a significant segment of contingency construction expenditures. Army Central Command officials further noted that on the basis of their experience the costs that we had identified were likely conservative relative to the universe of O&M-funded unspecified minor military construction projects in the CENTCOM area of responsibility. These officials told us that it is likely that the majority of contingency construction projects are funded as unspecified minor military construction projects using O&M appropriations. Further, Army Central Command officials acknowledged that while individual projects may not warrant tracking on the basis of their specific construction cost, collectively across all projects the amounts are likely to be more significant, as was the case with the $944 million we identified. According to GAO’s Standards for Internal Control in the Federal Government, management should design control activities to achieve objectives and respond to risks by, for example, clearly documenting all transactions and other significant events in a manner that allows the documentation to be readily available for examination. DOD’s O&M-funded unspecified minor military construction projects collectively constitute significant events and, therefore, DOD’s control activities should include a means for documenting and tracking these projects. According to a senior official from the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment and senior DOD Comptroller officials, DOD does not plan to collect and analyze data on these O&M-funded projects, either in the CENTCOM region or in any other location. 
The officials noted that, while DOD could invest resources to track and document how much O&M funding they have used and are using for construction projects to support contingency operations, current DOD systems and processes are not set up to automatically provide this level of detail for these projects. Further, they noted that without changing DOD’s current systems and processes, identifying this information would be resource- and labor-intensive. However, Army Central Command officials noted that each project undertaken using O&M funding for construction under the authority of section 2805 of Title 10, U.S. Code, requires a documented identification and classification of a project’s estimated construction costs and a legal determination to validate the base commander’s construction cost estimates for each project, to ensure that the $1 million maximum is not exceeded. While we recognize that locating all records of construction costs for completed construction projects at this point would be problematic, data on the construction costs for ongoing and future projects should continue to be readily available at the time of a project’s approval decision. Base commanders could therefore compile these readily available cost data and report them through the chain of command, for example, to the Under Secretary of Defense (Comptroller) and other decision makers. Given the magnitude of the O&M funds that we identified DOD used for contingency construction projects in Afghanistan in fiscal years 2009-12, establishing a means to track and document information on the universe and cost of all ongoing and future unspecified minor military construction projects funded with O&M would improve DOD’s ability to manage and oversee funds made available for such projects using O&M funding. 
Further, GAO’s Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity’s objectives—such as executing construction responsibilities and administering funds—by, for example, designing a process that identifies the information requirements needed. In the context of O&M funds, which are available for a variety of functions including construction, quality information on the use of O&M for construction activities in the contingency environment would be helpful for understanding the overall cost of contingency operations and the availability of funds for other operational purposes. Clearly tracking O&M-funded unspecified minor military construction projects is important for administering O&M funds and determining the funding needed to support operations in Iraq and Afghanistan, as well as for projecting funding needed for future contingency operations. DOD officials agreed that without comprehensively tracking and documenting unspecified minor military construction projects funded with O&M appropriations, the military services and other stakeholders are limited in their ability to manage and oversee funds made available for military construction, including contingency construction projects. Without information on the universe and cost of these projects funded with O&M, the military services cannot maintain awareness of how much O&M funding they are using for construction projects to support contingency operations versus other O&M-funded operational requirements. CENTCOM commanders have frequently relied on O&M funding to support contingency construction projects because, according to officials, O&M-funded projects take less time from development through construction than do MILCON-funded projects. However, this reliance on O&M funding has the potential to create financial, operational, and duplication risks. 
Due to the urgency of contingency operations, CENTCOM personnel must often construct facilities as rapidly as possible in their area of responsibility. For example, CENTCOM Regulation 415-1 notes that contingency basing locations support immediate but temporary contingency operations. It also states that O&M funds will be used to the maximum extent possible. However, for projects exceeding a cost of $1 million—the maximum amount currently available for O&M-funded projects under section 2805 of Title 10, U.S. Code—base officials in the CENTCOM area of responsibility stated they do not have a funding process that adequately supports contingency construction projects needed within a short time frame, since MILCON-funded projects can take up to 2 years for review and approval in addition to the time needed to complete construction. CENTCOM officials noted that a construction project can use either MILCON or O&M funding, and should be designed to address a single construction requirement. Under general construction authorities (i.e., major military construction specified in the National Defense Authorization Act and unspecified minor military construction under section 2805 of Title 10, U.S. Code), commanders must use MILCON funding for projects costing more than $1 million ($750,000 prior to fiscal year 2015). Army Central Command officials, however, stated that MILCON-funded projects can take 12 to 18 months to develop and submit, 12 to 24 or more months to review and approve, and 18 to 24 months to construct, equating to about 3 to 5 years in total before a project is completed and in use. By comparison, commanders can use O&M funding to meet construction requirements for projects at or below that maximum, and such projects can usually be reviewed and approved at the component or subordinate command level in 2 to 3 months and constructed in less than 1 year. 
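The funding rules and timelines described above can be summarized in a simple decision sketch. This is an illustrative model only, not a statement of DOD policy logic; the threshold values and timeline ranges are the figures cited in this report, and the function names are ours.

```python
# Illustrative sketch of the funding-authority decision described in
# this report. Thresholds and timelines are the report's figures, not
# an authoritative statement of DOD policy.

def om_construction_maximum(fiscal_year):
    """Maximum for O&M-funded unspecified minor military construction
    under 10 U.S.C. 2805: $1 million, or $750,000 prior to FY 2015."""
    return 1_000_000 if fiscal_year >= 2015 else 750_000

def funding_path(project_cost, fiscal_year):
    """Return the funding source and approximate time to completion."""
    if project_cost <= om_construction_maximum(fiscal_year):
        # O&M: component/subordinate-level review in 2-3 months,
        # construction usually in less than 1 year.
        return ("O&M", "under about 1 year")
    # MILCON: 12-18 months to develop and submit, 12-24+ months to
    # review and approve, 18-24 months to construct -- about 3 to 5
    # years in total, per Army Central Command officials.
    return ("MILCON", "about 3 to 5 years")

print(funding_path(650_000, 2015))    # -> ('O&M', 'under about 1 year')
print(funding_path(1_800_000, 2014))  # -> ('MILCON', 'about 3 to 5 years')
```

The sketch also shows why the threshold's effective date matters: a hypothetical $800,000 project would have required MILCON funding in fiscal year 2014 but could be O&M-funded in fiscal year 2015.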
Officials noted that even unspecified minor military construction projects using MILCON funds involve a lengthy review process, and commented that commanders seeking to use these funds must compete with projects from around the world within their respective service for a relatively limited amount of funding. In addition to the general construction authorities, DOD may use other authorities for construction projects in emergency and contingency circumstances. According to senior officials in the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment, these authorities can provide a means for funding contingency construction projects that exceed the O&M-funding maximum. For example, according to these officials, in an extraordinary instance the department could review, approve, and fund a contingency construction project in as few as 60 days using the Contingency Construction Authority. Nonetheless, these officials acknowledged that this process is still time-consuming in the eyes of commanders. Further, while service component and base officials in the CENTCOM area of responsibility acknowledged that these authorities are available and can be used in certain instances, they view them as inadequate because of the time required to get projects approved. Specifically, according to Office of the Assistant Secretary of Defense for Energy, Installations, and Environment and Army Central Command officials, these authorities involve an approval process from higher military department headquarters and DOD similar to that required under general construction authorities that can be lengthy (6 months or longer) and involve considerable DOD and congressional scrutiny. According to Army Central Command officials, in some instances, use of these other authorities also involves a request to reprogram funds, thereby adding another 3 to 8 months to the process. 
Officials noted that units on relatively short rotations (about 9 to 10 months) may no longer need the project by the time construction begins. Further, officials noted that commanders may perceive these authorities as requiring competition among various projects for funding, sometimes on a worldwide basis, and as a result believe that they will be unable to obtain approval. However, when using O&M funds for construction, base commanders must be careful as they consider the scope of a project, particularly when developing multiple projects to address similar requirements or an overarching or single requirement. Specifically, section 2801 of Title 10, U.S. Code provides that a military construction project includes all military construction work necessary to produce a complete and usable facility or a complete and usable improvement to an existing facility. GAO and the military departments have noted that the construction of a single “complete and usable” facility or project may involve the construction of several related buildings, structures, or other improvements to real property. As GAO has previously noted, the key factor is that a single building, structure, or other improvement could not satisfy the need that justified carrying out the construction project. Military department guidance provides that a single project or requirement may not be split into smaller projects solely in order to stay below the funding “threshold” (i.e., maximum). Whether multiple buildings should be programmed and funded as one project is a case-by-case determination that depends on various factors. However, multiple construction projects in support of a similar requirement may raise funding concerns or, in extreme cases, result in a violation of the Antideficiency Act. During our site visits to CENTCOM bases, officials told us that using O&M funding for projects is the quickest option available to address immediate contingency construction requirements. 
However, during the course of our review, we found instances of contingency construction requirements that might have entailed projects with construction costs above the $1 million maximum ($750,000 prior to fiscal year 2015) for O&M-funded projects but that, according to officials, needed to be completed more quickly than would have been possible under the existing MILCON review and approval process, which can take 2 years. While the extent of DOD’s use of the practice is unknown because DOD has not tracked the universe and cost of O&M-funded unspecified minor military construction projects, we identified examples where commanders had modified a project’s specifications or where commands had developed multiple projects below the O&M maximum to address a single requirement, which could then be completed more quickly. DOD’s reliance on O&M funding in these instances increased the risks of (1) potential concerns regarding the appropriate use of funding, (2) negative operational impacts, and (3) unnecessary duplication of effort. Following are the examples that we identified where commanders had modified a project’s specifications or commands had developed multiple projects to address similar requirements or an overarching or single requirement, potentially raising concerns regarding the appropriate use of funding: In August 2010, base officials at Bagram Airfield, Afghanistan, identified the need for additional housing at the base and designed 28 projects for the construction of concrete shelters—referred to as B-huts—classifying the project costs as construction costs. As the projects progressed, contingency-related changes resulted in base officials combining the 28 projects into 6 larger projects. Moreover, concurrent with the combination of the projects, base officials also modified the project specifications by re-designating the B-huts as “relocatable buildings,” the costs for which were then classified as other-than-construction. 
These actions significantly reduced costs designated as construction for each of the 6 larger projects, putting them below the general $750,000 maximum for O&M-funded projects in effect in 2010, after which base officials used O&M funds to finance their construction. Nonetheless, subsequent to the completion of the concrete shelters, the department reported in September 2015 that it should have used MILCON funds to construct the shelters and determined that the obligations incurred for the projects had exceeded the statutory limit for O&M-funded unspecified minor military construction projects, thus resulting in a violation of the Antideficiency Act. In October 2009, Forward Operating Base Leatherneck officials identified a requirement for a headquarters building for a Marine Wing Support Squadron, which they estimated would have a total project cost of $847,491. Officials classified $740,193 of this amount as construction and the remainder as non-construction costs. However, the items classified as non-construction included a $44,600 generator used to power the building. According to Army Regulation 420-1, generators affixed as a permanent part of a facility that provide power to the facility are classified as real property and should be funded with military construction funds. If the generator for this project had been properly classified as construction, the project’s construction costs would have been $784,793, which exceeded the general $750,000 O&M maximum in effect at that time. In this instance, it is unclear why base officials did not classify the attached generator as part of the construction cost for the project. However, such circumstances have the potential for raising concerns about the appropriate use of funds. In October 2009, anticipating a large surge in personnel beyond Kandahar Airfield’s capacity, Regional Command South, a component of the U.S. 
Forces-Afghanistan, identified an operational requirement to construct additional housing for these personnel. Instead of planning, designing, and constructing the housing as a single, large MILCON project to address the requirement, Regional Command South programmed six separate, smaller, company-size projects with $655,685 each in construction costs. Regional Command South then used O&M funding to finance the construction of each of the smaller projects. If the additional housing were constructed as a single project (i.e., the construction costs from all six projects were combined), the likely total construction cost, $3,900,000, would have exceeded the general $750,000 O&M maximum in place at the time and would have required the use of MILCON funds. Although Army documentation identified each project as a complete and usable facility and noted advantages to dividing the overall housing requirement at the company level, the practice of dividing a requirement into separate, smaller projects could raise concerns about the appropriate use of funding. Following are the examples that we identified where commanders had modified a project’s specifications or commands had developed multiple projects to address a single requirement and in the process had created an operational risk—that is, had risked negatively affecting DOD’s ability to efficiently or effectively achieve operational objectives: In 2015, officials at a base in Southwest Asia divided a single requirement for a critical air control facility into four separate projects for four separate buildings—each of which cost $650,000—instead of one project for a single building that would have exceeded the $1 million O&M funding maximum. According to base officials, the four-building design does not align with the design of similar air control facilities elsewhere. 
Moreover, these officials also stated that housing the facility in four separate buildings is suboptimal because it does not fully enable the integration of operations and maintenance functions and could, therefore, negatively affect the operational capability of the facility. Nevertheless, given the urgency and importance of the capability the facility provides, base officials stated that they could not wait for MILCON funding for a single project and building. In addition to the operational risk, this practice also has the potential for raising concerns about the appropriate use of funds. In June 2015, officials at an air base in Southwest Asia identified a requirement for and designed an unmanned aerial vehicle shelter at an estimated cost of $377,000. This amount did not exceed the O&M maximum but did exceed the air base commander’s approval authority for O&M-funded construction projects, which was $100,000. Consequently, in order to complete the project quickly, according to base officials, they changed the scope of the project to keep the construction costs within the base commander’s $100,000 approval authority. Specifically, they reduced construction costs by removing the concrete floor and asphalt taxiway from the project’s scope, replacing them with temporary flooring. Base officials estimated that the re-scoped project would cost $97,000. According to base officials, while reducing the project’s scope in this manner is a common practice, in this instance the removal of the asphalt taxiway increases the risk of damage to the unmanned aerial vehicle’s landing gear and electronic sensors when it is moved in and out of the shelter. Had base officials been able to design and construct the project as originally intended, this risk to the unmanned aerial vehicle’s operational capability would have been mitigated. 
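The arithmetic behind the splitting concern in the examples above can be checked directly. The sketch below uses only the cost figures cited in this report; the helper function is hypothetical and serves only to illustrate the comparison against the applicable O&M maximum.

```python
# Illustrative check of the project-splitting arithmetic described in
# this report. Dollar figures are the report's; the helper is ours.

def combined_exceeds_om_max(component_costs, om_maximum):
    """True if the components of a single requirement, taken together,
    would exceed the O&M-funded construction maximum."""
    return sum(component_costs) > om_maximum

# Kandahar Airfield housing (2009): six projects at $655,685 each,
# measured against the $750,000 maximum then in effect.
kandahar = [655_685] * 6
assert combined_exceeds_om_max(kandahar, 750_000)
print(f"Kandahar combined cost: ${sum(kandahar):,}")  # about $3.9 million

# Southwest Asia air control facility (2015): four buildings at
# $650,000 each, measured against the $1 million maximum.
air_control = [650_000] * 4
assert combined_exceeds_om_max(air_control, 1_000_000)
print(f"Air control combined cost: ${sum(air_control):,}")
```

Each component project individually falls below the maximum, which is precisely why the practice can obscure a combined requirement that would otherwise call for MILCON funding.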
In May 2014, the Air Force identified a requirement for a new air passenger terminal at Ali Al Salem Air Base, Kuwait, because the harsh environment and heavy passenger traffic had caused its existing facilities to deteriorate, and they were no longer adequate to sustain the mission. The requirement included space for receiving and processing 6,500 personnel a month, with baggage; briefing and holding areas; and a U.S. Customs processing area. According to base officials, a single building housing these three functions would be preferable because the activities are sequential and are best performed indoors without having to travel between buildings. However, to do so would have required MILCON funding because the total would have exceeded the $750,000 O&M maximum in effect at that time. According to base officials, they divided the requirements into three projects for (1) an air passenger terminal for $660,000; (2) a baggage control center for $527,000; and (3) a customs processing facility for $660,000—totaling about $1.8 million. As a result, according to base officials, terminal operations will be negatively affected by the unnecessary movement between three buildings, which will likely increase processing time for passengers and baggage. Further, this practice also has the potential for raising concerns about the appropriate use of funds. Following are the examples that we identified where commanders had modified a project’s specifications or commands had developed multiple projects to address a single requirement, or relied on O&M funding in other ways, and in the process had created the duplication risk of unnecessarily providing the same service to the same beneficiaries: According to Al Udeid Air Base officials, in 2015 base officials decided to move the base’s North Squadron operational and administrative facilities to another location on the base because the host nation (Qatar) wanted to reclaim the space then occupied by the squadron. 
Base officials decided to construct eight O&M-funded, semi-permanent facilities (that have a useful life of up to 25 years with maintenance and upkeep) to temporarily house the squadron at various locations on the base at a cost of about $650,000 each. During the same year, base officials also initiated a request for $24 million in MILCON funding through the Air Force to construct a permanent facility at the new location that would house both North Squadron personnel and personnel from other Air Force entities. The use of these two funding sources creates the potential for unnecessarily duplicative expenditures of up to $5.2 million, which is the total amount in O&M funding for the eight semi-permanent facilities that will no longer be needed to house the North Squadron once the permanent facility for the squadron and other Air Force entities is complete. According to Army Central Command officials, in 2009, bases in Kuwait needed additional dining facilities to support a surge in personnel. To satisfy this requirement, DOD entered into an O&M-funded food service contract, under which the contractor provided four dining facilities (with an estimated useful life of up to 25 years with maintenance and upkeep) in Kuwait for government lease. The contract included provisions stating that (1) the U.S. government cannot purchase or take ownership or title of the dining facilities, (2) the U.S. government cannot pay all of the direct costs of building them, and (3) the dining facilities remain the property of the contractor and are to be removed at the end of the period of performance. According to DOD figures, the department spent $43.8 million for leasing and operating these four dining facilities in Kuwait over the 5-year period of the contract.
In 2015, upon the expiration of the old food service contract, Area Support Group Kuwait officials requested $64 million in O&M funding to solicit a new food services contract, which, according to officials, would have included $27 million to construct five dining facilities to replace the four scheduled to be removed as a result of the expiring contract. When the request came to the Army Central Command Engineer in Kuwait for review, officials expressed concern that the requested contract would be an inappropriate expenditure of O&M funds because MILCON appropriations must be used for construction when project costs exceed the $1 million O&M maximum. According to a senior Army Central Command official, if the Area Support Group Kuwait dining facilities in the requested 2015 food service contract were completed as a construction project, it would require the use of MILCON funds. As of January 2016, it was still unclear how the four existing dining facilities would be replaced and the new ones financed, but according to an Army Central Command official, the Army will have to expend additional funds in some form to duplicate the dining facilities, thereby providing the same service (dining facilities) a second time to the same beneficiaries (bases). If this were to be the case, the construction of the replacement dining facilities would create duplicative expenditures of up to $7.1 million, the appraised as-new cost of the four contractor-owned dining facilities that will be removed after the current food service contract has expired. While senior officials in the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment stated that the existing additional construction authorities should provide an adequately expedited process to fund contingency construction projects, none of the base officials in the CENTCOM area of responsibility we interviewed agreed.
Instead, the base officials we interviewed stated that it is the absence of an expedited process to fund contingency construction projects that is the reason they use the approaches we identified (i.e., modifying a project’s specifications and using multiple, smaller projects). Further, according to Army Central Command officials, the length of commanders’ deployments—typically lasting 1 year or less—adds urgency to complete projects quickly. As a result, commanders in the CENTCOM area of responsibility may have routinely opted to use O&M funds for contingency construction projects to the maximum extent possible in order to avoid the more lengthy review and approval processes that may be involved when using MILCON funding, a process that can take 2 or more years before construction begins. While the practice of maximizing the use of O&M funds for contingency construction may help base commanders in the CENTCOM area of responsibility meet urgent requirements, they acknowledge, as do officials at the Army Central Command, that the routine use of O&M funds in lieu of DOD’s other authorities has the potential to create risks regarding the appropriate use of funding and could lead to negative operational impacts and unnecessarily duplicative construction expenditures. GAO’s Standards for Internal Control in the Federal Government states that management should design and implement control activities—policies, procedures, techniques, and mechanisms that enforce management’s directives—to achieve objectives and respond to risks. In the case of contingency construction projects, these control activities could include policies and procedures that would allow base commanders to better support immediate contingency basing and operational needs—including for projects with construction costs greater than the $1 million O&M funding maximum that are not suited to the existing lengthy MILCON review and approval process. 
These control activities, for example, could include processes that improve the use of existing authorities while finding ways to shorten review and approval time frames or seeking additional authorities as appropriate. As noted earlier, DOD Directive 3000.10 assigns responsibility to the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics to, among other things, designate a senior official to be responsible for the oversight of all aspects of contingency basing policy. The guidance also assigns the Under Secretary responsibility to develop criteria for facilities, equipment, and services for contingency locations. According to a senior official in the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment, this office is working with the Joint Staff to develop new contingency basing construction policy and Unified Facilities Criteria for construction projects that support urgent operational requirements. However, according to an official from this office, these efforts will not include provisions to collect and analyze data on the extent to which O&M funding is used for construction projects, limiting DOD’s ability to address the financial, operational, and duplication risks we have identified. Analyzing the extent to which O&M funding is being used for construction projects in the contingency environment may better enable DOD to determine the magnitude of DOD’s risk from using O&M funding for construction and identify opportunities to encourage the use of other authorities—including the use of O&M funds under the Contingency Construction Authority. The information may also enable DOD to determine whether existing departmental processes implementing those authorities sufficiently support urgent construction needs or could be expedited. Finally, it may enable DOD to determine whether additional authorities are needed.
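The dollar figures in the earlier examples follow from simple arithmetic on the amounts officials cited. The following is an illustrative check only (not part of GAO's methodology), using the reported per-project costs from the Ali Al Salem and Al Udeid examples:

```python
# Ali Al Salem: one terminal requirement split into three O&M projects,
# each kept under the $750,000 O&M maximum then in effect.
om_maximum = 750_000
split_projects = [660_000, 527_000, 660_000]  # terminal, baggage center, customs facility
assert all(cost < om_maximum for cost in split_projects)
print(f"Combined Ali Al Salem cost: ${sum(split_projects):,}")  # $1,847,000, about $1.8 million

# Al Udeid: eight O&M-funded semi-permanent facilities at about $650,000 each,
# alongside a $24 million MILCON request for a permanent replacement.
duplication_risk = 8 * 650_000
print(f"Potential duplicative O&M expenditure: ${duplication_risk:,}")  # $5,200,000
```

In both cases the individual projects stayed below the applicable O&M approval threshold even though the combined requirement did not, which is the pattern the report identifies.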
DOD has guidance that is used for determining the appropriate level of construction for MILCON-funded projects. The guidance includes DOD’s Unified Facilities Criteria, which states, among other things, that cost engineers must thoroughly understand a project’s scope of work before rendering a cost estimate. In addition, the guidance indicates that cost engineers should always remain mindful of the documentation necessary to support cost estimate submissions, such as project narratives that highlight any assumptions made during the preparation of the cost estimate and that describe the project requirements in sufficient detail to give a clear understanding of the scope of work. According to Army Corps of Engineers officials, the level of construction needed to meet a project requestor’s requirements is one of the underlying assumptions that should be documented. With respect to construction in the CENTCOM area of responsibility, CENTCOM Regulation 415-1 notes that service components plan and program for military construction. According to CENTCOM officials, this includes developing construction requirements, determining the appropriate level of construction to meet those requirements, and communicating that determination to the Army Corps of Engineers, which is DOD’s lead construction agent in the CENTCOM area of responsibility. Based on that information, the Army Corps of Engineers will then develop cost estimates for the construction project. CENTCOM’s regulation also indicates that at contingency locations, construction projects will be of austere design, constructed to the minimum military requirement to limit the demand on available infrastructure and resources. In this vein, CENTCOM’s regulation provides three levels of construction for contingency locations, which are generally keyed to a facility’s intended period of use. 
These three levels are: “initial,” for facilities intended for use for up to 6 months; “temporary,” for facilities intended for use for up to 5 years; and “semi-permanent,” for facilities intended for use for up to 10 years. Although DOD and CENTCOM have guidance used for determining the appropriate level of construction for MILCON-funded projects, Army Corps of Engineers officials were not always able to provide documentation that substantiated how the determination was made. Specifically, as of July 2015, the Army Corps of Engineers was unable to provide us with documentation regarding the service components’ rationale for the respective level-of-construction determinations for 11 of 39 MILCON-funded construction projects in its database that cost over $40 million each during fiscal years 2011 through 2015. All told, the 11 projects totaled about $669 million, or approximately 27 percent of the $2.4 billion programmed for all 39 projects. Furthermore, for 8 of the 11 projects for which no record of level-of-construction determinations exists, Army Corps of Engineers officials could not tell us what level of construction the completed projects represented, including a $55 million theater vehicle maintenance compound at Kandahar Airfield, Afghanistan, constructed in 2009, and a $47 million special operations forces complex constructed at Mazar E Sharif, Afghanistan, in 2014. As the Army Corps of Engineers develops project designs and cost estimates, the level-of-construction determination constitutes a fundamental assumption because, according to the Army Corps of Engineers, it affects the resulting design and cost of a project. As discussed earlier, DOD guidance notes that cost engineers—including those from the Army Corps of Engineers—must thoroughly understand a project’s scope of work and other aspects of a project being estimated.
It further indicates that the cost engineer should always remain mindful of the documentation necessary to support cost estimate submission requirements for each phase. For certain estimates, the guidance describes use of a project narrative, which includes assumptions made during the preparation of the estimate and describes project requirements that must be performed in sufficient detail to give a clear understanding of the scope of work. According to Corps officials, the level of construction needed to meet the service components’ respective requirements is among these underlying assumptions that the Army Corps of Engineers should sufficiently detail. Although Corps officials stated that the level-of-construction determination from the service components should be included in the documentation supporting the cost estimate prepared by the Army Corps of Engineers, they noted that there are other means to communicate level-of-construction determinations, to include design directives, general construction guidance, or verbal communications from project stakeholders. Furthermore, Corps officials noted that, for some projects, the Army Corps of Engineers sends multi-discipline teams of engineering and construction experts to work with customers to review and refine facility construction proposals, plans, and cost estimates before the final approval and submission of budget requests. In none of the 11 projects outlined above, however, were Corps officials able to provide evidence that these other means were used because the available documentation is silent on level-of-construction determinations and Corps officials were unable to provide evidence that they and the project requestors had communicated about levels of construction before the Army Corps of Engineers began designing the 11 projects. Due to the absence of documentation, it is unclear whether level-of-construction determinations occurred and were communicated prior to the projects’ design and cost estimation.
According to GAO’s Standards for Internal Control in the Federal Government, management should use quality information to achieve the entity’s objective. In the case of contingency construction in the CENTCOM area of responsibility, DOD’s objective could mean building to meet the minimum military requirement. GAO’s standards also state that management should design appropriate control activities, which may include clearly documenting all transactions and other significant events in a manner that allows the documentation to be readily available for examination and ensuring a clear segregation of incompatible duties. Because DOD does not have a control mechanism to ensure that the Army Corps of Engineers maintains a documented record of level-of-construction determinations and communicates with the service component commands about those determinations before designing and estimating the cost of contingency construction projects, DOD risks constructing facilities that exceed minimum military requirements and expending more resources than required in a resource-constrained environment. DOD has not developed a formal process for reevaluating ongoing contingency construction projects when missions change, but has undertaken ad hoc reviews of planned and ongoing projects. Under DOD guidance, combatant commanders are responsible for assessing the operational environment at critical milestones in order to determine contingency basing requirements within their respective areas of responsibility. According to CENTCOM and Joint Staff officials, however, DOD has not established a recurring formal process at their respective levels for reevaluating planned or ongoing construction projects based on mission changes.
In a 2014 committee report, the Senate Committee on Appropriations expressed concern over the status of unfinished military construction projects in Afghanistan and DOD’s plans for the divestment of these and other military construction facilities that will no longer be required to support U.S. military operations there. According to CENTCOM, Joint Staff, and Army Corps of Engineers officials, in general, DOD is aware of the need to be a careful steward of resources, including those devoted to construction projects in contingency environments, especially following major changes in mission requirements. To this end, various DOD entities have reviewed construction projects on an ad hoc basis when such changes have occurred. For example, an examination of the limited documentation available corroborates that beginning in November 2011, U.S. Forces-Afghanistan undertook five separate reviews of planned and ongoing construction projects in Afghanistan to determine whether to de-scope, cancel, or continue the construction projects in anticipation of the transition of operational responsibility to the Government of the Islamic Republic of Afghanistan, coalition force reductions, and other changes to mission requirements. The documentation indicates that on the basis of the first four reviews, U.S. Forces-Afghanistan reduced or cancelled 123 construction projects totaling approximately $1 billion in programmed funding. For example, during a review conducted in March 2013, U.S. Forces-Afghanistan cancelled a $7 million project for an Army aviation headquarters facility at Bagram Airbase. According to an Army Corps of Engineers official who was involved in project management in Iraq from 2007 through 2010, similar reviews, reductions, or cancellations of planned or ongoing projects were also conducted there.
For example, the Army Corps of Engineers official described participation in a September 2008 assistance team that visited Iraq to work out project details for 30 planned projects. Subsequently, however, Army Central Command officials stopped the design process for these projects and withdrew funding because the mission upon which the original projects were based had concluded. Supporting documentation from the services for these reviews was not available, and we could not determine the extent to which construction project reviews have been conducted in Iraq and Afghanistan, the cost savings accrued as a result of these reviews, and the rationale behind the decisions. According to CENTCOM officials, the entities that conducted the ad hoc reviews cited above were not required to systematically report the results of their reviews and hence no such documentation is filed with the Joint Staff, CENTCOM, or the military services. Moreover, CENTCOM officials point out that cost savings realized as a result of construction projects being cancelled or reduced in scope do not capture the full magnitude of their review efforts. For example, these officials pointed out that in some cases construction projects that were no longer needed because of changed mission requirements were not cancelled because doing so would have cost as much as, if not more than, completing the project. In other cases, projects were reviewed and decisions were made to continue construction because, despite changed mission requirements, it was still determined that there was a need for the facility.
Nonetheless, while the ad hoc reviews cited above resulted in positive outcomes in terms of cost savings or cost avoidance, absent a specific policy or guidance requiring a fully documented, formal process for review of construction projects when missions change, DOD risks not consistently and routinely evaluating whether to continue, reduce in scope, or discontinue the construction of facilities in support of future contingencies as missions change. For example, with fully documented reviews, DOD would retain and could benefit from information regarding prior decisions, gain efficiency by using an established review process, and ensure that all construction projects defined by the review process are consistently and routinely evaluated. Further, absent a specific policy or guidance requiring a fully documented, formal process for the review of construction projects when missions change, DOD officials may not have the information they need to manage contingency construction operations by assessing the operational environment at critical milestones in order to determine contingency basing requirements within their respective area of responsibility. DOD has taken steps to rectify some of the concerns highlighted above. According to DOD Comptroller officials, in September 2015, the Under Secretary of Defense (Comptroller) updated the DOD Financial Management Regulation in response to a May 2015 Special Inspector General for Afghanistan Reconstruction report on an unused command and control facility in Afghanistan, to require additional training and establish policy that would improve the stewardship over resources, including those used for contingency construction projects for which the underlying mission changes. As revised, the regulation requires the heads of DOD components to include course materials in Antideficiency Act training that clearly state that taxpayer funds should not be spent when a requirement is no longer needed. 
Additionally, under the updated regulation, DOD commanders, supervisors, and managers must provide fiscal law training to educate DOD personnel with regard to their fiduciary and legal responsibilities to prevent the wasteful spending of appropriated funds. The regulation also provides that key fund-control personnel must review and verify on a continuous basis that goods and services are still needed, and must not spend taxpayer funds when goods and services are no longer needed. While these new requirements could improve contingency basing determinations, they provide broad guidance covering goods and services generally and are not focused on contingency basing and construction. Therefore, implementing guidance specific to contingency basing and construction would help clarify expectations and establish a review process. Based on our analysis of U.S. Forces-Afghanistan’s documentation regarding its reviews of planned and ongoing construction projects, this implementing guidance could include mechanisms for establishing (1) the frequency of construction project reviews or what event or impetus might trigger a review; (2) the criteria that should be used to select construction projects for a review; and (3) the documentation required for the construction projects selected for review, including the process and rationale for each decision to cancel, de-scope, or continue a project. Without such implementing guidance, the department risks continuing or completing military construction projects that are no longer needed to support U.S. military operations. DOD has established an approach for recording and sharing lessons learned through its Joint Lessons Learned Information System, but CENTCOM and its components have not used this system for contingency construction projects in Iraq and Afghanistan.
In 2000, DOD developed and implemented its Joint Lessons Learned Information System, which is its system of record for recording and sharing lessons learned in DOD’s Joint Lessons Learned Program, including those identified during operations. The Joint Lessons Learned Program process consists of five phases—discovery, validation, resolution, evaluation, and dissemination—through which observations are identified, assessed, and as appropriate, shared through lessons learned. New observations can be derived from experiences occurring during contingency operations, including lessons related to construction. However, as of September 2015, the Joint Lessons Learned Information System had no lessons learned recorded for contingency construction. The system did contain 14 contingency construction-related notes or comments, but these were first-hand observations from individuals and had not been validated by the department. While it is unclear whether lessons were identified and learned but not recorded in the system, the absence of validated lessons learned recorded in the Joint Lessons Learned Information System for this area indicates that this could potentially be the case. In March 2015, we reported that the Joint Lessons Learned Information System is also not being fully utilized for another key area—operational contract support. Specifically, we reported that DOD was generally not sharing operational contract support lessons learned in the Joint Lessons Learned Information System because the system is not functional for users searching operational contract support issues due to, among other reasons, not having a label for this area and not having a designated location, or “community of practice,” in the system for sharing relevant lessons learned. We recommended in that report that DOD implement a label and designate a single community of practice for operational contract support in the Joint Lessons Learned Information System.
DOD concurred and established a community of practice for operational contract support in November 2015. Although DOD has developed and made available its Joint Lessons Learned Information System, deployed U.S. forces in the CENTCOM area of responsibility rely on mechanisms outside of the joint system for sharing lessons learned related to contingency construction projects in support of operations in Iraq and Afghanistan. Specifically, according to Army Central Command and Air Force Central Command officials, deployed U.S. forces rely on unit rotation overlap, experienced personnel outside of the contingency area, expert organizations, and contingency-related DOD boards to share up-to-date lessons important to contingency construction in the CENTCOM area of responsibility. Unit rotation overlap. When one military unit arrives at its deployed location to replace another, the outgoing unit remains at the deployed location for a period overlapping the incoming unit’s arrival. During this overlapping period, the outgoing unit shares the latest information and relevant lessons learned with the incoming unit. In the case of construction-related units, they can provide construction-related lessons learned specific to the contingency location or more broadly applicable to contingency construction in general. Experienced personnel outside of the contingency area. Deployed U.S. forces undertaking contingency construction projects interact with DOD personnel outside of contingency areas—for example, at the Army Central Command and the Air Force Central Command Headquarters—with years of construction experience, including with projects undertaken in support of contingency operations. These experienced personnel are available to answer questions, relay experiences, provide perspectives, and share important lessons learned related to contingency construction. Expert organizations. Deployed U.S.
forces also have access to specialized DOD organizations with construction project expertise, including those in support of contingency operations, such as the Army Corps of Engineers. These organizations advise and guide deployed U.S. forces on the design and construction of contingency-related projects, sharing important lessons learned in the process. Contingency-related DOD boards. Proposed contingency-related projects in the CENTCOM area of responsibility may be subject to review and approval by multi-discipline boards in theater, such as the Joint Facilities Utilization Board. In the process of reviewing and approving contingency construction projects, board members raise questions based on their experience and share important lessons learned from reviewing other construction projects in support of contingency operations. Although deployed U.S. forces may rely on these mechanisms to share contingency construction lessons learned, these mechanisms can constitute an ad hoc or incomplete approach. By contrast, as described by Chairman of the Joint Chiefs of Staff Manual 3150.25A, the Joint Lessons Learned Program provides both a vehicle for facilitating awareness of observations, issues, best practices, and lessons learned across DOD and a forum for institutionalizing lessons learned across the joint force. The guidance notes that recording, analyzing, and developing improved processes, procedures, and methods based on lessons learned are primary tools in developing improvements in joint force readiness, capabilities, and overall performance. In addition, Chairman of the Joint Chiefs of Staff Instruction 3150.25F notes that program stakeholders—including the Joint Staff, the services, the combatant commands, and combat support agencies—when appropriate, will contribute information, data, and lessons learned that are germane to improving joint capabilities and readiness.
The guidance further indicates that combatant commands will provide and maintain Joint Lessons Learned Program support for theater- and function-specific joint and interoperability lessons learned activities. It notes that lessons are derived from the full range of joint activities and operations, which could include construction during contingency operations. However, CENTCOM guidance does not reinforce the DOD guidance regarding the Joint Lessons Learned Program. Specifically, the CENTCOM regulation governing construction, including contingency construction, does not discuss lessons learned or establish who within the command and its service component commands should be responsible for recording and sharing construction-related lessons learned in the CENTCOM area of responsibility through the Joint Lessons Learned Program. Further, the regulation does not contain the terms “lesson” or “learned” in combination or separately, suggesting that recording and sharing lessons learned is not a focal point of the guidance and may not receive adequate leadership emphasis. Another factor affecting the recording and sharing of lessons learned is leadership emphasis. According to DOD’s Joint Lessons Learned Program officials, the recording and sharing of lessons learned in the DOD Joint Lessons Learned Information System can be improved with leadership emphasis at a combatant command. For example, according to these officials, in fiscal year 2015, leadership emphasis at another combatant command (Special Operations Command) on collecting lessons learned generally resulted in a more than tenfold increase in the number of recorded lessons compared with those that CENTCOM recorded during the same fiscal year.
According to Joint Lessons Learned Program officials, improved recording of contingency construction lessons learned could result if CENTCOM leadership increased its emphasis on the importance of discovering, validating, and disseminating relevant contingency-construction-related observations. In the absence of specific CENTCOM guidance and leadership emphasis to record and share contingency construction lessons learned in DOD’s Joint Lessons Learned Information System, CENTCOM and its service component commands are likely to continue to rely on mechanisms outside this system to share lessons learned related to construction projects in support of contingency operations in Iraq and Afghanistan. As a result, commanders may repeat errors in the planning and design of contingency construction projects that CENTCOM and service component commands have identified. For example, an important potential lesson relating to the CENTCOM area of responsibility occurred in fiscal year 2011 when concrete housing units were constructed at Bagram Air Base, Afghanistan, that later developed toxic mold due to poor engineering and construction shortcuts. Specifically, the heating, ventilation, and air conditioning system did not provide adequate ventilation and the concrete was not properly sealed, which in combination created an environment where the toxic mold could form and accumulate. As a result, personnel were evacuated until the housing units could be remediated, denying critically needed hardened shelters to help protect service members at Bagram, Afghanistan, from indirect fire attacks. According to an Army Central Command engineering official, this experience may contain a lesson for construction project managers in the CENTCOM area of responsibility regarding the need to involve adequate engineering expertise regarding the health and safety aspects of a project’s design.
Officials at Al Udeid Air Base identified another important potential lesson, which was related to ammunition storage facilities. Specifically, after construction of aboveground munitions storage facilities at the air base, officials determined that the facilities’ lightning protection system was not adequate, putting high-dollar munitions stored in the facilities at risk of damage or destruction and creating a safety risk. For example, the officials stated that during lightning storms all personnel have to evacuate due to the lightning-strike risk and operations halt as a result. According to Al Udeid Air Base officials, they learned from this experience that a more robust lightning mitigation system was needed to provide adequate protection for facilities of this type. While those persons involved in these examples can share their observations as long as they continue working at CENTCOM, because the experiences were not recorded and shared in DOD’s system of record—the Joint Lessons Learned Information System—there is a risk that different people at other locations, or during other contingencies, could repeat these or similar errors.
Additionally, the urgency of contingency construction requirements coupled with the absence of a review and approval process to support quickly funding contingency construction projects needed in fewer than 2 years that are expected to cost more than $1 million may result in DOD’s continued use of questionable approaches when constructing facilities—potentially leading to unintended results. Moreover, until DOD improves control mechanisms for documenting and communicating level-of-construction determinations, DOD risks constructing facilities that exceed minimum military requirements, expending more resources than required in a resource-constrained environment. Additionally, absent a requirement for a formal process to reevaluate contingency construction projects when missions change, DOD risks constructing facilities that may not be essential to support existing missions or may not be sufficient for revised missions in the CENTCOM area of responsibility and in future contingencies worldwide. Lastly, without specific guidance and leadership emphasis to record and share contingency construction lessons learned in DOD’s Joint Lessons Learned Information System, CENTCOM and its service component commands may repeat errors in the planning and design of contingency construction projects in future contingencies. 
We are making the following five recommendations to improve DOD's management and oversight of contingency construction in the CENTCOM area of responsibility and in other geographic combatant commands where applicable:

To improve DOD's awareness of how much O&M funding the department uses for construction projects to support contingency operations, we recommend that the Secretary of Defense direct the Secretaries of the military departments, in coordination with the Under Secretary of Defense (Comptroller), to track the universe and cost of ongoing and future contingency construction projects that are funded from O&M appropriations under section 2805 of Title 10, U.S. Code (unspecified minor military construction authority).

To improve DOD's ability to quickly fund contingency construction projects that are not ideally suited to the current standard MILCON and O&M processes and time frames, and to reduce reliance on funding approaches that pose risks regarding the appropriate use of funding, negative operational impacts, and unnecessary duplication, we recommend that DOD evaluate and improve the use of existing processes and authorities to the extent possible; determine whether additional authorities are needed to support urgent construction needs; and revise existing departmental processes or seek additional authorities, as appropriate.

To help ensure that DOD limits demands on available resources to those necessary to meet contingency construction project requirements and communicates those requirements effectively, we recommend that the Secretary of Defense, in coordination with the Secretary of the Army, direct the Army Corps of Engineers to develop a control activity for documenting level-of-construction determinations before the Army Corps of Engineers designs the projects and estimates their costs.

To ensure that DOD avoids constructing facilities that may be unneeded to support U.S. forces and to comprehensively document the results of its reviews of ongoing construction projects when changes in mission requirements occur, we recommend that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct the Secretaries of the military departments and the Commander of CENTCOM to develop implementing guidance for the review and verification of ongoing contingency construction projects when mission changes occur.

To improve the awareness of the combatant and service component commands' responsibilities to record and share lessons learned and to ensure that important contingency-construction-related lessons are recorded, we recommend that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct the Commander of CENTCOM to revise Central Command Regulation 415-1 or issue other guidance as appropriate to specifically detail the role of the combatant command and service component commands in recording contingency construction lessons learned from the CENTCOM area of responsibility in the Joint Lessons Learned Information System.

Additionally, in light of potential concerns regarding the appropriate use of funding raised by several of the examples identified in this report, we recommend that the Secretary of Defense direct the Secretaries of the Army and the Air Force to review these and, as appropriate, other construction projects in the contingency environment presenting similar circumstances to ensure that funds were properly used.

We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with one of our recommendations, partially concurred with three recommendations, and did not concur with the remaining two recommendations. DOD's comments are summarized below and reprinted in their entirety in appendix III. 
DOD did not concur with our recommendation that the department track the universe and cost of ongoing and future contingency construction projects that are funded from O&M appropriations under section 2805 of Title 10, U.S. Code (unspecified minor military construction authority), stating that it does not have data systems that can track these projects, it would not be cost effective to develop and implement such a system, and tracking the universe and cost of ongoing and future contingency construction projects would not improve its decision making. Further, DOD stated that expanding section 2805 oversight and tracking responsibilities beyond its current practices would limit the benefit of that authority and that it is unaware of any systemic abuses of the section 2805 authority that would warrant collecting these data. With regard to DOD’s statement that the department does not have a data system that can track these projects and that it would not be cost effective to develop and implement such a system, we are not suggesting that DOD develop and implement a new system, but instead that DOD adapt an existing system or mechanism for recording and capturing these data in an automated form. For example, the Army’s existing Element of Resource code is a four-digit code that the Army uses to record and classify funds transactions and the nature of the funds’ use in its accounting and finance system. The Army could also use this mechanism to create a specific code to track contingency construction projects that are funded using O&M appropriations under section 2805. In this way data on the universe and cost of contingency construction projects would be readily available in the Army’s existing accounting and finance system. 
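To illustrate the kind of mechanism described above, the sketch below tags funds transactions with a classification code and aggregates the universe and cost of the tagged projects. This is a minimal, hypothetical Python sketch, not the Army's actual accounting system: the `FundsTransaction` record, the code value "25C1", and the sample ledger are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FundsTransaction:
    """Hypothetical funds transaction tagged with an Element of
    Resource (EOR)-style four-digit classification code."""
    project_id: str
    eor_code: str   # four-digit classification code
    amount: float   # obligation amount in dollars

# Invented placeholder code for O&M-funded contingency construction
# under section 2805; not an actual Army EOR code.
CONTINGENCY_CONSTRUCTION_EOR = "25C1"

def contingency_construction_totals(transactions):
    """Return the universe (project IDs) and total cost of transactions
    carrying the contingency-construction classification code."""
    tagged = [t for t in transactions
              if t.eor_code == CONTINGENCY_CONSTRUCTION_EOR]
    projects = sorted({t.project_id for t in tagged})
    total = sum(t.amount for t in tagged)
    return projects, total

# Example ledger: two tagged contingency construction transactions
# and one unrelated O&M transaction.
ledger = [
    FundsTransaction("P-001", "25C1", 950_000.0),
    FundsTransaction("P-002", "25C1", 600_000.0),
    FundsTransaction("P-003", "1100", 80_000.0),
]
projects, total = contingency_construction_totals(ledger)
# projects -> ["P-001", "P-002"]; total -> 1,550,000.0
```

Because the tag lives on each transaction in the existing ledger, the universe and cost roll up with a simple filter and sum, which is the point of reusing an existing coding mechanism rather than building a new data system.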
In addition, we disagree with DOD's statement that tracking the universe and cost of ongoing and future contingency construction projects would not improve the department's decision making given that DOD was not aware of the magnitude of its use of O&M funds for construction projects under section 2805. As noted in our report, we found that these projects constituted a substantial segment of overall contingency construction, and that, according to Army Central Command officials, it is likely that the majority of contingency construction projects are funded under this authority. Therefore, we continue to believe that knowing the universe and cost of all O&M-funded construction projects supporting contingency operations is important for decision making, particularly as that knowledge would improve decision makers' administration and oversight of O&M funds, as well as aid in determining and projecting the funding needed to support ongoing and future contingency operations. Finally, the primary purpose of our recommendation for tracking construction funded from O&M appropriations under section 2805 is not to identify abuses of that authority, but rather to understand to what extent DOD uses O&M funds for construction during contingency operations. That information could assist the department in planning for current and future contingency operations by determining the portion of O&M spent on construction activities that is therefore unavailable for other purposes. As we reported, that portion may be substantial. This information could also assist the department in evaluating the necessary actions to implement our second recommendation. Moreover, during our review we found several instances where commanders had developed multiple construction projects, each below the O&M maximum for unspecified minor military construction under section 2805, to meet what may have been an overarching construction requirement. 
We noted that these instances have the potential to raise concerns regarding the appropriate use of funding. Although not the primary purpose of our recommendation, to the extent that reliance on O&M funding in the contingency environment increases this risk, tracking the universe and cost of O&M-funded construction projects in the contingency environment may aid the department in identifying circumstances posing an increased risk. DOD partially concurred with our recommendation that the department evaluate and improve the use of existing processes and authorities to the extent possible; determine whether additional authorities are needed to support urgent construction needs; and revise existing departmental processes or seek additional authorities, as appropriate. In its comments, DOD stated that it already conducts periodic reviews of the available military construction authorities to determine if changes are needed to improve or enhance speed and flexibility in providing urgent or emerging facility requirements. However, during our review, several officials we interviewed who were responsible for making construction decisions at contingency bases confirmed that the current process for funding contingency construction projects is not sufficient to provide the needed speed and flexibility. Therefore, we continue to believe that DOD should evaluate its use of existing processes and authorities. To the extent that DOD uses the processes that it described in its response to our recommendation to address the issues we raised, DOD's actions will meet the intent of our recommendation. DOD partially concurred with our recommendation that the Army Corps of Engineers develop a control activity for documenting level-of-construction determinations before designing projects and estimating their costs, stating that the appropriate level of construction is determined by the facility user rather than the construction agent. 
DOD also noted that the department has other construction agents in addition to the Army Corps of Engineers. We are not recommending that the construction agent determine the level of construction for a facility, but rather that the construction agent develop a control activity for documenting the level-of-construction determination obtained from the facility user. During this engagement, we reviewed projects managed by the Army Corps of Engineers in the CENTCOM area of responsibility and therefore made specific reference to the Army Corps of Engineers in our recommendation. However, should the department determine that another construction agent, such as the Naval Facilities Engineering Command or the Air Force Civil Engineer Center, is in need of a similar control activity, the department should apply the recommendation accordingly. DOD partially concurred with our recommendation that the military departments and the Commander of CENTCOM develop implementing guidance for the review and verification of ongoing contingency construction projects when mission changes occur, stating that the department believes all combatant commanders involved in contingency operations should conduct periodic reviews of new or ongoing construction projects to ensure they still meet operational needs. Because our review was focused on CENTCOM, we cited that combatant command in our recommendation. However, we agree that all combatant commanders involved in contingency operations should conduct periodic reviews of new or ongoing construction projects to ensure they still meet operational needs. Therefore, DOD would meet the intent of the recommendation by expanding its planned action to ensure that it applies to all combatant commands, not only to CENTCOM. 
DOD concurred with our recommendation that the Commander of CENTCOM revise Central Command Regulation 415-1 or issue other guidance as appropriate to specifically detail the role of the combatant command and service component commands in recording contingency construction lessons learned from the CENTCOM area of responsibility in the Joint Lessons Learned Information System. Finally, DOD did not concur with our recommendation that the Secretaries of the Army and the Air Force review the examples presented in our report and, as appropriate, other construction projects in the contingency environment presenting similar circumstances, to ensure that funds were properly used, in light of potential concerns raised by these examples regarding the appropriate use of funding. The department stated that the recommendation is redundant of current practice and referenced department processes to conduct periodic reviews to ensure compliance, among other processes, guidance, and training. Our recommendation is not that DOD create new processes but instead that DOD use the periodic review processes it referenced to evaluate the examples in our report and ensure that funds were appropriately used. These examples present instances where the department had developed multiple construction projects, each below the O&M maximum for unspecified minor military construction, to meet what may have been an overarching construction requirement. We noted a similar instance where the department had used its review process and found that an Antideficiency Act violation had occurred. In light of the concerns raised by the examples in our report, we continue to believe that DOD should use its existing processes to review the facts and circumstances presented by these examples and determine whether funds were appropriately used. We are sending copies of this report to the appropriate congressional committees. 
We are also sending copies to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. The report is also available at no charge on GAO's website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has tracked the universe and cost of all contingency construction projects in the U.S. Central Command (CENTCOM) area of responsibility that support operations in Iraq and Afghanistan separately from all other construction projects undertaken by DOD, we reviewed and analyzed available DOD contingency construction project data from fiscal year 2001 through fiscal year 2016 maintained by the Office of the Under Secretary of Defense (Comptroller), the Army, the Air Force, and the Army Corps of Engineers to determine the extent to which DOD identifies and records construction projects undertaken in support of contingency operations in Iraq and Afghanistan. We reviewed these data based on suggestions from DOD officials in responding to our request for sources that would contain the universe and cost of contingency construction projects. 
Specifically, we reviewed project data from the Office of the Under Secretary of Defense (Comptroller)'s Program Resources Collection Process database; the Office of the Under Secretary of Defense (Comptroller)'s military construction C1 budget exhibits; the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics' Real Property Assets Database; the Army's General Fund Enterprise Business System database; the Army Corps of Engineers' Program and Project Management System; and the Air Force's General Accounting and Finance System. We determined that these sources did not contain data for (1) all Military Construction (MILCON)-funded projects undertaken in support of contingency construction for fiscal years 2001-16 or (2) projects funded using Operation and Maintenance (O&M) funds under section 2805 of Title 10, U.S. Code (unspecified minor military construction authority). Therefore, we concluded that they were not sufficiently reliable for the purposes of identifying the universe of contingency construction projects. To determine whether projects funded using O&M appropriations under section 2805 of Title 10, U.S. Code, represented a substantial segment of contingency construction, we reviewed readily available data on construction projects that consisted of those reviewed by U.S. Forces-Afghanistan's Joint Facilities Utilization Board for fiscal years 2009-12. We determined the data from U.S. Forces-Afghanistan to be sufficiently reliable for the purposes of this report by interviewing knowledgeable agency officials, tracing a selection to source documents, and manually testing the data for outliers and obvious errors. We reviewed Office of Management and Budget guidance that the department uses when deciding whether funding—including for construction—properly belongs in either the base or overseas contingency operations portion of the budget. 
We also reviewed DOD Directive 3000.10, Contingency Basing Outside the United States, and CENTCOM Regulation 415-1 to understand contingency basing responsibilities. Further, we compared existing DOD and CENTCOM contingency construction project review and approval processes and the availability of DOD information on contingency construction projects funded with O&M appropriations with GAO’s Standards for Internal Control in the Federal Government, which state among other things that management should use quality information to achieve the entity’s objectives and design control activities to achieve objectives and respond to risks by, for example, clearly documenting all transactions and other significant events in a manner that allows the documentation to be readily available for examination. We also analyzed and discussed the use of available statutory authorities for funding contingency construction projects and the potential risks to individual projects with officials at service component commands and bases in the CENTCOM area of responsibility to understand mechanisms commanders used to manage projects that relied on O&M funding for contingency construction. The projects discussed included (1) those we identified in reviewing U.S. Forces-Afghanistan data on construction projects for fiscal years 2009-12 that contained similar or identical dollar amounts, dates, and project narratives and (2) those identified by base officials, during site visits, that illustrated the potential risks of relying on O&M funding for contingency construction projects. We discussed the advantages and disadvantages associated with available alternatives for funding contingency construction projects. We also reviewed DOD Directive 4270.5 and DOD Directive 3000.10 to understand the roles and responsibilities of various DOD entities involved in the management, execution, and oversight of contingency construction in the CENTCOM area of responsibility. 
We interviewed senior officials from the Under Secretary of Defense (Comptroller), CENTCOM, the Army Central Command, the Air Force Central Command, U.S. Forces-Afghanistan, the Army Corps of Engineers, and the Air Force Civil Engineer Center and conducted site visits at Camp Arifjan, Kuwait; Camp Buehring, Kuwait; Ali Al Salem Air Base, Kuwait; Al Udeid Air Base, Qatar; Camp As Sayliyah, Qatar; and Al Dhafra Air Base, United Arab Emirates, in the CENTCOM area of responsibility. We selected bases for site visits that (1) had the highest number of MILCON projects at the base, (2) had projects in close proximity to bases with the highest number of MILCON projects and were reachable without extensive additional travel, and (3) were identified by DOD officials as containing projects illustrating contingency construction using O&M appropriations. We excluded Iraq and Afghanistan due to the closure of our audit offices there and the difficulties and risks associated with travel in those countries. To determine the extent to which DOD has developed a process for determining the appropriate level of construction for MILCON-funded contingency construction projects, we focused on processes that apply to contingency construction projects in the CENTCOM area of responsibility and compared CENTCOM Regulation 415-1 and DOD's Unified Facilities Criteria 3-740-05 with GAO's Standards for Internal Control in the Federal Government, which state among other things that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity's objectives. In addition, we reviewed data available as of February 2015 from the Army Corps of Engineers Program and Project Management System database for MILCON-funded contingency projects in the CENTCOM area of responsibility in fiscal years 2004-15. 
Out of these data we analyzed all projects with programmed amounts equal to or over $40 million, accounting for the top one third of programmed amounts for projects, to determine the extent to which DOD had documented level-of-construction determinations for the projects with the highest programmed amounts. The results of this analysis are not generalizable to projects with programmed amounts below $40 million. We determined the data to be sufficiently reliable for the purposes of this report by reviewing related documentation, interviewing knowledgeable agency officials, and reviewing related internal controls. To determine the extent to which DOD has developed a process for reevaluating ongoing contingency construction projects when missions change, we collected and reviewed supporting documentation for reviews that the U.S. Forces-Afghanistan conducted beginning in November 2011 of planned or ongoing contingency construction projects in Afghanistan— including CENTCOM data on construction project reevaluation reviews for fiscal years 2011-15. We compared this documentation with DOD Directive 3000.10, which states that the combatant commanders are responsible for assessing the operational environment at critical milestones to determine contingency basing requirements within their respective area of responsibility. We also interviewed officials from the Joint Staff, CENTCOM, the Army Central Command, and the Army Corps of Engineers regarding their roles in construction project reviews when mission changes occur in Iraq and Afghanistan. We discussed the May 2015 Special Inspector General for Afghanistan Reconstruction report on an unused command and control facility in Afghanistan with the staff who had conducted the underlying work. Further, during site visits to the CENTCOM area of responsibility, we interviewed base officials regarding the impact of mission requirement changes on planned or ongoing construction projects. 
To determine the extent to which DOD has established an approach for sharing lessons learned from contingency construction projects in support of contingency operations in Iraq and Afghanistan, we reviewed relevant guidance, including Chairman of the Joint Chiefs of Staff Instruction 3150.25F, which specifies that Joint Lessons Learned Program stakeholders, when appropriate, will contribute information, data, and lessons learned that are germane to improving joint capabilities and readiness, to determine what processes the department has in place to develop contingency construction lessons learned. Additionally, we reviewed all 14 observations recorded in the Joint Lessons Learned Information System for the CENTCOM area of responsibility. We also interviewed DOD officials regarding the mechanisms they used for communicating contingency construction lessons learned. We visited or contacted officials from the following organizations during our review:

Joint Staff J-4 (Logistics) Directorate, Washington, D.C.
Joint Staff J-5 (Strategic Plans and Policy) Directorate, Washington, D.C.
Joint Staff J-7 (Joint Force Development) Directorate, Washington, D.C.
Office of the Under Secretary of Defense (Comptroller)
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics
Office of the Assistant Secretary of Defense for Energy, Installations, and Environment, Washington, D.C.
Office of the Director, Defense Procurement and Acquisition Policy, Washington, D.C.
U.S. Central Command, Tampa, Florida
U.S. Army Central Command, Shaw Air Force Base, South Carolina
U.S. Army Central Command; Engineers, Facilities, and Construction; Camp Arifjan, Kuwait
Area Support Group-Qatar, Camp As Sayliyah, Qatar
Area Support Group-Kuwait, Camp Buehring, Kuwait
U.S. Air Force Central Command, Shaw Air Force Base, South Carolina
380th Air Expeditionary Wing, Al Dhafra Air Base, United Arab Emirates
379th Expeditionary Civil Engineer Squadron, Al Udeid Air Base, Qatar
386th Air Expeditionary Wing, Ali Al Salem Air Base, Kuwait
U.S. Army Corps of Engineers, Transatlantic Division, Winchester, Virginia
U.S. Air Force Civil Engineer Center, Joint Base San Antonio-Lackland, Texas
U.S. Forces-Afghanistan, Kabul, Afghanistan

We conducted this performance audit from November 2014 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional detail on the statutory authorities available to the Department of Defense (DOD) for carrying out military construction projects. DOD operates under these statutory authorities to fund military construction projects through either the Military Construction (MILCON) or Operation and Maintenance (O&M) appropriations. DOD may use general statutory authorities for construction projects. Specifically: The Secretary of Defense and the Secretaries of the military departments may carry out military construction projects that are authorized by law. Specified military construction projects are listed in the annual National Defense Authorization Act and the explanatory statement accompanying the annual appropriations act. These projects are funded through the MILCON appropriation. Section 2805 of Title 10, U.S. Code, authorizes the Secretaries of the military departments to carry out unspecified minor military construction projects not specifically authorized by law, using MILCON or O&M funds. 
As of 2015, unspecified minor military construction projects must have an approved cost equal to or less than $3 million, or $4 million if intended solely to correct a life-, health-, or safety-threatening deficiency. From January 2008 until December 2014, the maximums were $2 million and $3 million, respectively, and $1.5 million and $3 million prior to January 2008. DOD may use O&M funds to carry out projects costing $1 million or less, and must use MILCON funds above that level. The O&M maximum was $750,000 prior to fiscal year 2015. In the case of projects above the O&M maximum, the military department Secretary must approve the project in advance; submit a notification to the appropriate congressional committees; and wait 21 days, or 14 days if the notification is submitted electronically. In addition to the general construction authorities, there are several other statutory authorities that DOD may use for construction projects in emergency and contingency circumstances. Specifically, Section 2803 of Title 10, U.S. Code, authorizes the Secretaries of the military departments to carry out emergency construction projects not otherwise authorized by law. The Secretary must determine that the project is vital to national security or the protection of health, safety, or the quality of the environment, and so urgent that it cannot be delayed until the next authorization act. The Secretary must submit a justification to the appropriate congressional committees and wait 7 days before carrying out the project. Projects using this authority must be carried out using unobligated military construction funds, up to a maximum of $50 million in any fiscal year. Section 2804 of Title 10, U.S. 
Code, authorizes the Secretary of Defense to carry out contingency construction projects not otherwise authorized by law or to authorize a military department Secretary to do so, if the Secretary determines that delay until the next authorization act would be inconsistent with national security or national interest. The Secretary of Defense must submit a justification to the appropriate congressional committees and wait 14 days, or 7 days if notification is provided electronically. DOD guidance notes that the Secretary of Defense has retained this authority and that the Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for coordinating requests for its use. Combatant commanders are to verify the need for project requests and forward them through the Chairman of the Joint Chiefs of Staff, who is responsible for assigning priority among competing requests and forwarding them to the Under Secretary of Defense for Acquisition, Technology, and Logistics. The military departments are also responsible for forwarding requests through the Under Secretary of Defense for Acquisition, Technology, and Logistics along with specified information. Projects must be carried out using amounts specifically appropriated for this authority. However, in recent years, there have been no specific appropriations for contingency construction under section 2804. Section 2808 of Title 10, U.S. Code, authorizes the Secretary of Defense to undertake and to authorize the military department Secretaries to undertake military construction projects not otherwise authorized by law that are necessary to support the armed forces in the event of a declaration of war or national emergency. DOD must notify congressional committees when using this authority. 
Similar to use of the authority under section 2804, DOD guidance provides that combatant commanders and the Chairman of the Joint Chiefs of Staff are to assign priority among competing requests and forward them to the Under Secretary of Defense for Acquisition, Technology, and Logistics. The Secretaries of the military departments also forward requests along with specified information through the Under Secretary of Defense for Acquisition, Technology, and Logistics to the Secretary of Defense, who has retained authority for use of the provision. Finally, since November 2003, legislation has authorized DOD to use O&M funds to carry out construction projects in specified areas outside the United States, including in the U.S. Central Command (CENTCOM) area of responsibility, that meet certain conditions. DOD refers to the authority, which is annually authorized and updated, as the Contingency Construction Authority. The construction must be necessary to meet urgent military operational requirements of a temporary nature in support of a declaration of war, a declaration of a national emergency, or a contingency operation. With the exception of Afghanistan, the construction must not be at a military installation where the United States is reasonably expected to have a long-term presence. Finally, the level of construction must be the minimum necessary to meet temporary operational requirements, and the United States must have no intention of using the construction after operational requirements have been satisfied. DOD must provide a notice with specified information to congressional committees before using funds for a project in excess of the general O&M construction maximum (currently $1 million) and wait for 10 days or 7 days, depending on the form of the notice, before carrying out the project. The legislation also previously required DOD to submit a quarterly report on the use of the authority, although the requirement was eliminated for fiscal year 2016. 
There is an annual limit on the total cost of construction projects carried out using this authority, presently $100 million. The Secretary of Defense has delegated approval authority for the use of the Contingency Construction Authority to the Under Secretary of Defense (Comptroller), who issues updated guidance on requirements and processes for proposed projects. In addition to the contact named above, individuals who made key contributions to this report include Guy LoFaro, Assistant Director; Adam Anguiano; Mae Jones; Michael Shaughnessy; Michael Silver; and John Strong.
For about 15 years, DOD has funded “contingency construction” projects to support operations in Iraq and Afghanistan. The range, complexity, and cost of construction vary (e.g., from concrete pads for tents to brick-and-mortar barracks). DOD funds the projects through MILCON or O&M appropriations. Base commanders can use O&M to fund lower-cost projects. Senate Report 113-174 includes a provision for GAO to review issues related to military construction in the CENTCOM area of responsibility in support of contingency operations in Iraq and Afghanistan. GAO evaluated, among other things, the extent to which DOD has (1) tracked the universe and cost of all contingency construction projects in support of contingency operations there, (2) developed a process to determine the appropriate level of construction for MILCON-funded contingency construction projects, and (3) developed a process for reevaluating contingency construction projects when missions change. GAO reviewed relevant guidance and project data. Since contingency operations began in Iraq and Afghanistan, the Department of Defense (DOD) has not tracked the universe and cost of all U.S. Central Command (CENTCOM) contingency construction projects supporting operations there. According to senior DOD officials, DOD is not required to track all contingency construction projects separately from all other DOD projects, but DOD has been able to generate specific data on MILCON-funded contingency construction projects when requested. Senior DOD officials stated that they were unaware of the magnitude of their use of O&M funds because DOD has not tracked the universe and cost of O&M-funded unspecified minor military construction projects in support of contingency operations.
GAO identified O&M-funded construction costs for fiscal years 2009-12 of at least $944 million for 2,202 of these projects in Afghanistan, costs that are significant compared with the $3.9 billion DOD reported as enacted for MILCON-funded projects there in the same period. DOD has routinely used O&M funding to more quickly meet requirements because the MILCON review process can take up to 2 years. However, DOD's use of O&M funding has posed risks. For example: Financial risk: In 2010, DOD identified needed concrete shelters at Bagram Airfield, Afghanistan, staying below the O&M maximum by dividing a single requirement into separate projects. DOD reported in 2015 that it should have used MILCON funds for the shelters, determining that the obligations incurred had exceeded the statutory maximum for O&M-funded unspecified minor military construction projects, resulting in an Antideficiency Act violation. Duplication risk: In 2015, officials at a base in the CENTCOM area of responsibility decided to use O&M funding for temporary facilities for a squadron while in the same year requesting MILCON funding for a permanent facility for the same squadron, which could result in providing the same service to the same beneficiaries. For MILCON-funded contingency construction projects, DOD has guidance used for determining the appropriate level of construction, or building standard, based on the facility's life expectancy requirements, but as of July 2015 had not documented the rationale for such determinations for 11 of the 39 projects in fiscal years 2011-15 that cost over $40 million each. Further, for 8 of the 11 projects, senior DOD officials could not confirm what level of construction the projects represented based on DOD standards aimed at helping to match investments with requirements.
Senior DOD officials acknowledged that an absence of such documentation could lead to DOD constructing facilities in excess of requirements because of the resulting lack of communication with those who design and construct the facilities. DOD has not developed a formal process for reevaluating ongoing contingency construction projects when missions change. According to CENTCOM documentation, beginning in November 2011 DOD undertook five rounds of reviews of planned and ongoing projects in Afghanistan anticipating a change in the mission. However, without a requirement for such reviews, DOD risks constructing facilities that may be unneeded to support U.S. forces in the CENTCOM area of responsibility and in future contingencies worldwide. GAO made six recommendations, including that DOD track the universe and cost of O&M-funded projects (DOD did not concur), review construction projects to ensure funds were properly used (DOD did not concur), examine approaches to shorten project approval times (DOD partially concurred), document level-of-construction determinations (DOD partially concurred), and require project reviews when missions change (DOD partially concurred). GAO maintains that its recommendations are valid.
DOT is working with the automobile industry, state and local transportation agencies, researchers, private sector stakeholders, and others to lead and fund research on connected vehicle technologies to enable safe wireless communications among vehicles, infrastructure, and travelers’ personal communications devices. Connected vehicle technologies include vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies: V2V technologies transmit data between vehicles to enable applications that can warn drivers about potential collisions. Specifically, V2V-equipped cars would emit data on their speed, position, heading, acceleration, size, brake status, and other data (referred to as the “basic safety message”) 10 times per second to the on-board equipment of surrounding vehicles, which would interpret the data and provide warnings to the driver as needed. For example, drivers may receive a forward collision warning when their vehicle is close to colliding with the vehicle in front of them. V2V technologies have a greater range of detection than existing sensor-based crash avoidance technologies available in some new vehicles. NHTSA is pursuing actions to require that vehicle manufacturers install the underlying V2V technologies that would enable V2V applications in new passenger cars and light truck vehicles, and requested comment on this issue in an August 2014 Advance Notice of Proposed Rulemaking. We reported on V2V technologies in November 2013. Thus, we are not focusing on these technologies in this report. Vehicle-to-infrastructure (V2I) technologies transmit data between vehicles and the road infrastructure to enable a variety of safety, mobility, and environmental applications. V2I applications are designed to avoid or mitigate vehicle crashes, particularly those crash scenarios not addressed by V2V alone, as well as provide mobility and environmental benefits. Unlike V2V, DOT is not considering mandating the deployment of V2I technologies.
V2I applications rely on data sent between vehicles and infrastructure to provide alerts and advice to drivers. For example, the Spot Weather Impact Warning application is designed to detect unsafe weather conditions, such as ice or fog, and notify the driver if reduced speed or an alternative route is recommended (see left side of fig. 1). DOT is also investigating the development of V2I mobility and environmental applications. For example, the Eco-Approach and Departure at Signalized Intersections application alerts drivers of the most eco-friendly speed for approaching and departing signalized intersections to minimize stop-and-go traffic and idling (see right side of fig. 1), and eco-lanes, combined with eco-speed harmonization, would provide speed limit advice to minimize congestion and maintain consistent speeds among vehicles in dedicated lanes. DOT is also pursuing the development of V2I mobility applications that are designed to provide traffic signal priority to certain types of vehicles, such as emergency responders or transit vehicles. In addition, other types of V2I mobility applications could capture data from vehicles and infrastructure (for example, data on current traffic volumes and speed) and relay real-time traffic data to transportation system managers and drivers. For example, after receiving data indicating vehicles on a particular roadway were not moving, transportation system managers could adjust traffic signals in response to the conditions, or alert drivers of alternative routes via dynamic message signs located along the roadway. In addition to receiving alerts via message signs, these applications could also allow drivers to receive warnings through on-board systems or personal devices.
Japan has pursued this approach through its ITS Spot V2I initiative, which uses roadside devices located along expressways to simultaneously collect data from vehicles to allow traffic managers to identify congestion, while also providing information to drivers regarding upcoming congestion and alternative routes. To communicate in a connected vehicle environment, vehicles and infrastructure must be equipped with dedicated short-range communications (DSRC), a wireless technology that enables vehicles and infrastructure to transmit and receive messages over a range of about 300 meters (nearly 1,000 feet). As previously noted, V2V-equipped cars emit data on their speed, position, heading, acceleration, size, brake status, and other data (referred to as the “basic safety message”) 10 times per second to the surrounding vehicles and infrastructure. V2I-equipped infrastructure can also transmit data to vehicles, which can be used by on-board applications to issue appropriate warnings to the driver when needed. According to DOT, DSRC is considered critical for safety applications due to its low latency, high reliability, and consistent availability. In addition, DSRC transmits in a broadcast mode, providing data to all potential users at the same time. Stakeholders and federal agencies have noted that DSRC’s ability to reliably transfer messages between infrastructure and rapidly moving vehicles is essential to detecting and preventing potential collisions. DSRC technology uses radiofrequency spectrum to wirelessly send and receive data. The Federal Communications Commission (FCC), which manages spectrum for nonfederal users, including commercial, private, and state and local government users, allocated 75 megahertz (MHz) of spectrum—the 5.850 to 5.925 gigahertz (GHz) band (5.9 GHz band)—for the primary purpose of improving transportation safety and adopted basic technical rules for DSRC operations.
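The broadcast described above—a small set of vehicle-state fields emitted 10 times per second—can be sketched in code. This is a minimal illustration only; the field names below are hypothetical and do not reflect the actual standardized basic safety message format:

```python
from dataclasses import dataclass, asdict

# Illustrative sketch of the fields the report lists for the "basic safety
# message" (speed, position, heading, acceleration, size, brake status).
# Names are hypothetical, not the standardized message schema.
@dataclass
class BasicSafetyMessage:
    speed_mps: float      # vehicle speed, meters per second
    latitude: float       # position
    longitude: float
    heading_deg: float    # direction of travel, degrees
    accel_mps2: float     # longitudinal acceleration
    length_m: float       # vehicle size
    width_m: float
    brake_applied: bool   # brake status

# The report notes messages are emitted 10 times per second.
BROADCAST_HZ = 10
BROADCAST_INTERVAL_S = 1.0 / BROADCAST_HZ  # 0.1 seconds between messages

def encode(msg: BasicSafetyMessage) -> dict:
    """Serialize a message for broadcast to surrounding vehicles and RSUs."""
    return asdict(msg)

msg = BasicSafetyMessage(speed_mps=26.8, latitude=42.28, longitude=-83.74,
                         heading_deg=90.0, accel_mps2=-0.5,
                         length_m=4.5, width_m=1.8, brake_applied=True)
payload = encode(msg)
```

An on-board application on a receiving vehicle would evaluate a stream of such payloads—arriving every 0.1 seconds—to decide whether a warning (for example, a forward collision warning) should be issued to the driver.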
However, in response to increased demands for spectrum, FCC has requested comment on allowing other devices to “share” the 5.9 GHz band with DSRC technologies. V2I equipment may vary depending on the location and the type of application being used, although in general, V2I components in the connected vehicle environment include an array of roadside equipment (RSE) that transmits and receives messages with vehicles for the purpose of supporting V2I applications (see figure 2). For example, a V2I-equipped intersection would include: Roadside units (RSU)—a device that operates from a fixed position and transmits data to vehicles. This typically refers to a DSRC radio, which is used for safety-critical applications that cannot tolerate interruption, although DOT has noted that other technologies may be used for non-safety-critical applications. A traffic signal controller that generates the Signal Phase and Timing (SPaT) message, which includes the signal phase (green, yellow, and red) and the minimum and maximum allowable time remaining for the phase for each approach lane to an intersection. The controller transfers that information to the RSU, which broadcasts the message to vehicles. A local or state back office, private operator, or traffic management center that collects and processes aggregated data from the roads and vehicles. As previously noted, these traffic management centers may use aggregated data that is collected from vehicles (speed, location, and trajectory) and stripped of identifying information to gain insights into congestion and road conditions as well. Communications links (such as fiber optic cables or wireless technologies) between roadside equipment and the local or state back office, private operator, or traffic management center. This is typically referred to as the “backhaul network.” Support functions, such as underlying technologies and processes to ensure that the data being transmitted are secure.
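The SPaT flow described above—the traffic signal controller produces the current phase plus the minimum and maximum time remaining for each approach lane, and the RSU broadcasts it to vehicles—can be sketched as follows. All names here are hypothetical and illustrative, not the standardized SPaT message format:

```python
# Hypothetical sketch of the controller-to-RSU-to-vehicle SPaT flow.
# Field names and classes are illustrative, not a standardized schema.

def make_spat_message(approaches):
    """Build a SPaT-like message.

    approaches: {lane_id: (phase, min_remaining_s, max_remaining_s)}
    """
    for lane, (phase, t_min, t_max) in approaches.items():
        # The report describes three phases and min/max allowable time
        # remaining for each approach lane.
        assert phase in ("green", "yellow", "red")
        assert 0 <= t_min <= t_max
    return {"msg_type": "SPaT", "approaches": approaches}

class RoadsideUnit:
    """Fixed-position device that relays controller output to vehicles."""
    def __init__(self):
        self.last_broadcast = None

    def broadcast(self, spat):
        self.last_broadcast = spat  # stand-in for the DSRC transmission
        return spat

# The controller generates per-lane phase and timing data...
controller_output = {"northbound-1": ("green", 4.0, 12.0),
                     "eastbound-1": ("red", 9.0, 17.0)}
# ...transfers it to the RSU, which broadcasts it to approaching vehicles.
rsu = RoadsideUnit()
rsu.broadcast(make_spat_message(controller_output))
```

A vehicle approaching the intersection would combine this broadcast with its own speed and position to support applications such as the eco-approach advice described earlier.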
DOT, state and local transportation agencies, academic researchers, and private sector stakeholders are engaged in a number of efforts to develop and test V2I technologies and applications, as well as to develop the technology and systems that enable V2I applications. DOT’s V2I work is funded through its connected vehicle research program. DOT’s initial connected vehicle research focused on V2I technologies; however, it shifted its focus to V2V technologies because they are projected to produce the majority of connected vehicle safety benefits and they do not require the same level of infrastructure investment as V2I technologies. After conducting much of the research needed to inform its advance notice of proposed rulemaking to require that vehicle manufacturers install V2V technologies in new passenger cars and light truck vehicles, DOT is now shifting its focus back to V2I technologies, and some of the technical work needed to develop V2V applications has also informed the development of V2I. A number of DOT agencies are involved with the development and deployment of V2I technologies. In addition, private companies have received contracts from DOT to develop the underlying concept of operations and technologies to support V2I applications, and auto manufacturers are collaborating with DOT in its efforts to develop and pilot certain V2I applications and the underlying technologies to support them. State and local transportation agencies, which will ultimately be deploying V2I technologies on their roads, have also pursued efforts to test V2I technologies in real-world settings. However, to date, only small research deployments (such as those described below) have occurred to test V2I technologies: The Safety Pilot Model Deployment: DOT partnered with the University of Michigan Transportation Research Institute to collect data to help estimate the effectiveness of connected vehicle technologies and their benefits in real-world situations.
The pilot was conducted in Ann Arbor, Michigan, from August 2012 to February 2014, and included roughly 2,800 V2V-equipped cars, trucks, and buses, as well as roadside V2I equipment placed at 21 intersections, three curve-warning areas, and five freeway sites. While the primary focus was on V2V technologies, the pilot also evaluated V2I technology, such as Signal Phase and Timing (SPaT) technologies. DOT officials stated that the department would be releasing six reports with findings from the Safety Pilot in mid to late 2015, although these reports will primarily focus on V2V applications. As of July 2015, DOT has released one report that included an evaluation of how transit bus drivers responded to V2V and V2I warnings, and of how well the test applications performed in providing accurate warnings. The two V2I applications included were a curve speed warning and a warning that alerts the bus driver if pedestrians are in the intended path of the bus when it is turning at an intersection. Connected Vehicle Pooled Fund Study: A group of state transportation agencies, with support from the FHWA, established the Connected Vehicle Pooled Fund Study. The study aims to aid transportation agencies in justifying and promoting the large scale deployment of a connected vehicle environment and applications through modeling, development, engineering, and planning activities. To achieve this goal, the study funds projects that facilitate the field demonstration, deployment, and evaluation of connected vehicle infrastructure and applications. For example, the University of Arizona and the University of California at Berkeley are collaborating on a project to develop and test an intelligent traffic-signal system that could, among other things, provide traffic signal priority for emergency and transit vehicles, and allow pedestrians to request more time to cross the street.
Crash Avoidance Metrics Partners, LLC (CAMP): CAMP—a partnership of auto manufacturers that works to accelerate the development and implementation of crash avoidance countermeasures—established a V2I Consortium that focuses on addressing the technical issues related to V2I. In 2013, DOT awarded a cooperative agreement to CAMP, with a total potential federal share of $45 million, to develop and test V2I safety, mobility, and environmental applications, as well as the underlying technology needed to support the applications, such as security and GPS-positioning technologies. According to an FHWA official, CAMP’s current efforts include developing, testing, and validating up to five V2I safety applications, as well as a prototype for Cooperative Adaptive Cruise Control, an application that uses V2V and V2I technology to automatically maintain the speed of and space between vehicles. In addition to CAMP, automakers have established the Vehicle Infrastructure Integration Consortium, which coordinates with DOT on connected vehicle policy issues, such as interoperability of V2I technologies. Test Beds: DOT, state and local agencies, and universities have established connected vehicle test beds. Test beds provide environments (with equipped vehicles and V2I roadside equipment) that allow stakeholders to create, test, and refine connected vehicle technologies and applications. This includes DOT’s Southeast Michigan Test Bed, which has been in operation since 2007 to provide a real-world setting for developers to test V2I and V2V concepts, applications, technology, and security systems. In addition, state agencies and universities have established their own test beds. For example, the University Transportation Center in Virginia, in collaboration with the Virginia Department of Transportation, established the Northern Virginia Test Bed to develop and test V2I applications, some of which target specific problems—like congestion—along the I-66 corridor.
DOT offers guidance on how research efforts can become DOT-affiliated test beds, with the goal of enabling test beds to share design information and lessons learned, as well as to create a common technical platform. According to DOT, there are over 70 affiliated test bed members. The deployment of connected vehicle infrastructure to date has been conducted in test beds in locations such as Arizona, California, Florida, Michigan, New York, and Virginia. Additionally, officials from some of these test beds told us they may apply to the Connected Vehicle Pilot Deployment Program later this year (see below). The Connected Vehicle Pilot Deployment Program: Over the next 5 years, DOT plans to provide up to $100 million in funding for a number of pilot projects that are to design and deploy connected vehicle environments (comprised of various V2I and V2V technologies and applications) to address specific local needs related to safety, mobility, and the environment. As envisioned, there are to be multiple pilot sites with each site having different needs, purposes, and applications. The program solicitation notes that successful elements of the pilot deployments are expected to become permanent operational fixtures in the real-world setting (rather than limited to particular testing facilities), with the goal of creating a foundation for expanded and enhanced connected vehicle deployments. FHWA solicited applications for the pilot program from January through March 2015. According to DOT, the initial set of pilot deployments (Wave 1 award) is expected to begin in Fall 2015, with a second set (Wave 2 award) scheduled to begin in 2017. Pilot deployments are expected to conclude in September 2020. DOT and other stakeholders have worked to provide guidance to help state and local agencies pursue V2I deployments, since it will be up to state and local transportation agencies to voluntarily deploy V2I technologies. 
In September 2014, FHWA issued and requested comment on draft V2I deployment guidance intended to help transportation agencies make appropriate V2I investment and implementation decisions. For example, the guidance includes information on planning deployments, federal funding that can be used for V2I equipment and operations, technical requirements for equipment and systems, and applicable regulations, among other things. FHWA is updating the guidance and creating complementary guides, best practices, and toolkits, and officials told us they expect the revised guidance to be released by September 2015. In addition, the American Association of State Highway and Transportation Officials (AASHTO), in collaboration with a number of other groups, developed the National Connected Vehicle Field Infrastructure Footprint Analysis. This report provides a variety of information and guidance for state and local agencies interested in V2I implementation, including a description of benefits; various state/local based scenarios for V2I deployments; underlying infrastructure and communications needs; timelines and activities for deployment; estimated costs and workforce requirements; and an identification of challenges that need to be addressed. AASHTO, with support from the Institute of Transportation Engineers and the Intelligent Transportation Society of America, is also leading a V2I Deployment Coalition. The Coalition has several proposed objectives: support implementation of FHWA V2I deployment guidance; establish connected vehicle deployment strategies; and support standards development. According to information from the coalition and DOT, the V2I Deployment Coalition will be supported by technical teams drawn from DOT, trade associations, transportation system owners/operators, and auto manufacturers. While early pilot-project deployment of V2I technologies is occurring, V2I technologies are not likely to be extensively deployed in the United States for the next few decades.
According to DOT, V2I technologies will likely be slowly deployed in the United States over a 20-year period as existing infrastructure systems are replaced or upgraded. DOT has developed a connected vehicle path to deployment that includes steps such as releasing the final version of FHWA’s V2I deployment guidance for state and local transportation agencies (September 2015), and awarding and evaluating the Connected Vehicle Pilot Deployment Program projects in two phases, with the first phase of awards occurring in September 2015 and evaluation occurring in 2019, and the second phase of awards occurring in September 2017 and evaluation occurring in 2021. In addition, DOT officials noted that V2I will capitalize on V2V, and its deployment will lag behind the V2V rulemaking. NHTSA will issue a final rule specifying whether and when manufacturers will be required to install V2V technologies in new passenger cars and light trucks. In addition, FCC has not made a decision about whether spectrum used by DSRC can be shared with unlicensed devices, which could affect the time frames for V2I deployment. Even after V2I technologies and applications have been developed and evaluated through activities such as the pilot program, it will take time for state and local transportation agencies to deploy the infrastructure needed to provide V2I messages, and for drivers to purchase vehicles or equipment that can receive V2I messages. AASHTO estimated that 20 percent of signalized intersections will be V2I-capable by 2025, and 80 percent of signalized intersections would be V2I-capable by 2040. Similarly, AASHTO estimated that 90 percent of light vehicles would be V2V-equipped by 2040. However, DOT officials noted that environmental and mobility benefits can occur even without widespread market penetration and that other research has indicated certain intersections may be targeted for deployment.
Similarly, in its National Connected Vehicle Field Infrastructure Footprint Analysis, AASHTO noted that early deployment of V2I technologies will likely occur at the highest-volume signalized intersections, which could potentially address 50 percent of intersection crashes. See figure 3 for a list of planned events and milestones related to DOT’s path to deployment of connected vehicle technologies. According to experts and industry stakeholders we interviewed, there are a variety of challenges that may affect the deployment of V2I technologies, including: (1) ensuring that possible sharing with other wireless users of the radiofrequency spectrum used by V2I communications will not adversely affect V2I technologies’ performance; (2) addressing states’ lack of resources to deploy and maintain V2I technologies; (3) developing technical standards to ensure interoperability between devices and infrastructure; (4) developing and managing a data security system and addressing public perceptions related to privacy; (5) ensuring that drivers respond appropriately to V2I warnings; and (6) addressing the uncertainties related to potential liability issues posed by V2I. DOT is collaborating with the automotive industry and state transportation officials, among others, to identify potential solutions to these challenges. As previously noted, V2I technologies depend on radiofrequency spectrum, which is a limited resource in high demand due in part to the increase in mobile broadband use. To address this issue, the current and past administrations, Congress, FCC, and others have proposed a variety of policy, economic, and technological solutions to support the growing needs of businesses and consumers for fixed and mobile broadband communications by providing access to additional spectrum.
One proposed solution, introduced in response to requirements in the Middle Class Tax Relief and Job Creation Act of 2012, would allow unlicensed devices to share the 5.9 GHz band radiofrequency spectrum that had been previously set aside for the use of DSRC-based ITS applications such as V2I and V2V technologies. FCC issued a Notice of Proposed Rulemaking in February 2013 that requested comments on this proposed solution. DOT officials and 17 out of 21 experts we interviewed considered the proposed spectrum sharing a significant challenge to deploying V2I technologies. DSRC systems support safety applications that require the immediate transfer of data between entities (vehicle, infrastructure, or other platforms). According to DOT officials, delays in the transfer of such data due to harmful interference from unlicensed devices may jeopardize crash avoidance capabilities. Experts cited similar concerns, with one state official saying that if they deploy applications and they do not work due to harmful interference, potential users may not accept V2I. Seven experts we interviewed agreed that further testing was needed to determine if sharing would result in harmful interference to DSRC. In addition, DOT officials noted that changing to a shared 5.9 GHz band could impact current V2I research, which is based on the assumption that DSRC systems will have reliable access to the 5.9 GHz wireless spectrum. According to Japanese government officials we interviewed, Japan also considered whether to share its dedicated spectrum with unlicensed devices and decided not to allow sharing of the spectrum used for V2I in the 700 MHz band. According to officials we interviewed, Japan’s Ministry of Internal Affairs and Communications conducted a study to test interference with V2I technologies and mobile phones to determine the impact on reliability and latency in delivering safety messages. 
Based on these tests, the Japanese government decided not to allow sharing of the spectrum band used for V2I, because sharing could lead to delays or harmful interference with V2I messages. Japanese auto manufacturers we interviewed in Japan supported the decision of the Japanese government to keep the 700 MHz band dedicated to transportation safety uses. According to officials, if latency problems affect the receipt of safety messages, this could degrade the public’s trust, consequently slowing down acceptance of the V2I system in Japan. Since the Notice of Proposed Rulemaking was announced, various organizations have begun efforts to evaluate potential spectrum sharing in the 5.9 GHz band and some have expressed concerns. For example, harmful interference from unlicensed devices sharing the same band could affect the speed at which a V2I message is delivered to a driver. NTIA, which has conducted a study on the subject, identified risks associated with allowing unlicensed devices to operate in the 5.9 GHz band, and concluded that further work was needed to determine whether and how the risks identified can be mitigated. DOT also plans to evaluate the potential for unlicensed device interference with DSRC as discussed below. Given the pending FCC rulemaking decision, DOT, technology firms, and car manufacturers have taken an active role in pursuing solutions to spectrum sharing. Specifically, DOT’s fiscal year 2016 budget request included funds for technical analysis to determine whether DSRC can coexist with the operation of unlicensed wireless services in the same radiofrequency band without undermining safety applications. According to DOT officials, since industry has not yet developed an unlicensed device capable of sharing the spectrum, the agency does not have a specific date for completion of this testing at this time. DOT officials noted, however, that they would work with NTIA in any spectrum-related matter to inform FCC of its testing results.
According to FCC officials we spoke with, FCC is currently collecting comments and data from government agencies, industry, and other interested parties and will use this information to inform their decision. For example, since 2013, representatives from Toyota, Denso, CSR Technology, and other firms worked together as part of the Institute of Electrical and Electronics Engineers (IEEE) DSRC Tiger Team to evaluate potential options and technologies that would allow unlicensed devices to use the 5.9 GHz band without causing harmful interference to licensed devices. However, the representatives did not reach an agreement on a unified spectrum-sharing approach. Another ongoing effort from Cisco Systems, the Alliance of Automobile Manufacturers, and the Association of Global Automakers is preparing to test whether unlicensed devices using the “listen, detect and avoid” protocol would be able to share spectrum without causing harmful interference to incumbent DSRC operations. As of September 2015, FCC has not announced a date by which it will make a decision. Because the deployment of V2I technologies will not be mandatory, the decision to invest in these technologies will be up to the states and localities that choose to use them as part of their broader traffic-management efforts. However, many states and localities may lack resources for funding both V2I equipment and the personnel to install, operate, and maintain the technologies. In its report on the costs, benefits, and challenges of V2I deployment by local transportation agencies, the National Cooperative Highway Research Program (NCHRP) noted that many of the states it interviewed said that their current state budgets are the leanest they have been in years. Furthermore, states are affected because traditional funding sources, such as the Highway Trust Fund, are eroding, and funding is further complicated by the federal government’s current financial condition and fiscal outlook.
Consequently, there can be less money for state highway programs that support construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes, as well as for other authorized purposes. According to one stakeholder we interviewed, there have been widespread funding cuts for state DOTs, and many state DOTs must first focus on maintaining the infrastructure and equipment they already have before investing in advanced technologies. Ten experts we interviewed, including six experts from state and local transportation agencies, agreed that the lack of state and local resources will be a significant challenge to deploying V2I technologies. According to one report, without additional federal funding, deploying V2I systems would be difficult. Even if states decide to invest in V2I deployment, states and localities may face difficulties finding the resources necessary to operate and maintain V2I technologies. We have previously found that effectively using intelligent transportation systems, like V2I, depends on agencies’ having the staff and funding resources needed to maintain and operate the technologies. However, a recently released DOT report noted that staffing and information technology resources for maintaining V2I technologies were lacking in most agencies due to low and uncompetitive wage rates and funding constraints at the state and local government levels. Similarly, 12 experts we interviewed stated that states and localities generally lack the resources to hire and train personnel with the technical skills needed to operate and maintain V2I systems. According to FHWA’s draft guidance on V2I deployment, funds are available for the purchase and installation of V2I technologies under various Federal-aid highway programs. 
In addition, costs that support V2I systems, including maintenance of roadside equipment and related hardware, are eligible in the same way that other Intelligent Transportation System (ITS) equipment and programs are eligible. According to DOT, states have the authority and responsibility to determine the priority for funding V2I systems along with other competing transportation programs. Japan's V2I systems, which were also voluntarily deployed, were funded in large part by the national government. According to Japan's National Police Agency, half of the costs for traffic signals were provided by the national government. In addition, according to the National Police Agency, the Japanese government has invested an estimated $97 million (2014 dollars) in research and development for these systems. Two of the Japanese automakers we interviewed attributed the success of the Japanese V2I system in part to the significant government involvement and financial investment. Furthermore, according to a study on international connected vehicle technologies, Japan's nationally deployed and funded infrastructure devices allowed industry partners to test and release connected vehicle technologies. Nineteen of the 21 experts we spoke with reported that establishing technical standards is essential for all connected vehicle programs, including V2I, and will be challenging for a number of reasons. According to DOT, such standards define how systems, products, and components perform, how they can connect, and how they can exchange data to interoperate. DOT further noted that these standards are necessary for connected vehicle technologies to work on different types of vehicles and devices and to ensure the integrity and security of their data transmission. In addition, current standardization efforts have focused on standardizing the data elements and message sets that are transmitted between vehicles and the infrastructure.
Currently, according to DOT officials, DOT and various organizations have worked with the Society of Automotive Engineers (SAE) International to standardize the message sets and associated performance requirements for DSRC (SAE J2735 and J2945), which support a wide variety of V2V and V2I applications. DOT, SAE International, and engineers from auto manufacturers, V2I suppliers, technology firms, and other firms meet to develop high-quality, safe, and cost-effective standards for connected vehicle devices and technologies, according to an expert from a leading industry organization specializing in setting connected vehicle technical standards. This expert also noted that developing consensus around which standards should be instituted could be difficult given the different interests (political, economic, or industry-related) of the many stakeholders involved in developing and deploying V2I technologies. For example, the expert said that developing effective security standards required for these technologies that are also cost-effective for auto manufacturers and government organizations to implement may be difficult. Without common standards, V2I technologies may not be interoperable. DOT has noted that consistent, widely applicable standards and protocols are needed to ensure V2I interoperability across devices and applications. However, ensuring interoperability with a standard set of V2I applications in each state may be particularly challenging because, unlike V2V, deployment of V2I technologies will remain voluntary. Consequently, states and localities may choose to deploy a variety of different V2I technologies—or no technologies at all—based on what they deem appropriate for their transportation needs.
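As an illustration of why a shared message-set standard matters for interoperability, the sketch below defines a simplified safety-message structure and shows a round trip through a common encoding. The field names and the JSON encoding are assumptions made purely for illustration; the actual SAE J2735 message set defines many more data elements and uses a binary (ASN.1) encoding.

```python
from dataclasses import dataclass
import json

@dataclass
class BasicSafetyMessage:
    """Simplified stand-in for a standardized V2X message.

    Illustrative only: the real SAE J2735 BasicSafetyMessage carries far
    more data elements and is not JSON-encoded.
    """
    msg_count: int       # rolling sequence number
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mps: float     # meters per second
    heading_deg: float   # degrees clockwise from north

    def encode(self) -> str:
        # An agreed, shared field layout is what lets any vendor's receiver
        # parse any vendor's transmission -- the point of a common standard.
        return json.dumps(self.__dict__, sort_keys=True)

    @staticmethod
    def decode(raw: str) -> "BasicSafetyMessage":
        return BasicSafetyMessage(**json.loads(raw))

# Round trip: a receiver reconstructs exactly what the sender transmitted.
sent = BasicSafetyMessage(7, 42.3314, -83.0458, 13.4, 90.0)
received = BasicSafetyMessage.decode(sent.encode())
```

Devices that disagree on the message layout would fail at the decode step, which is why the report's experts stress consensus on data elements and message sets.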
DOT officials we interviewed recognized that a complete national deployment of V2I technologies may never occur, resulting in a patchwork deployment of different applications in localities and states, although these applications will be required to be interoperable with one another. As a result, V2I deployment may be challenged by the following limitations: Benefits may not be optimized: Four experts we interviewed said that having a standard set of V2I applications in each state would be beneficial for drivers because a consistent deployment of applications could potentially increase benefits. Development of applications may be more limited: AASHTO’s National Connected Vehicle Footprint Analysis argues that the more connected vehicle infrastructure is deployed nationwide using common standards, the more likely applications will be developed to take advantage of new safety, mobility, and environmental opportunities. Drivers may not find the system valuable: One expert from a state agency said without a standard set of V2I applications that allows drivers to use V2I applications seamlessly as they travel from state to state, travelers may lose confidence in the usefulness of the system and choose not to use it. DOT and standardization organizations, such as the Society of Automotive Engineers (SAE) International, are working to develop standards to support DSRC and other V2I communications technologies. The data elements and message sets specified in the SAE standards are suitable not only for use with DSRC but also with other communications technologies such as cellular. According to DOT officials, the department is providing funding support, expert participation, and leadership in multiple standards development organizations to promote consensus on the key standards required to support nationally interoperable V2I and V2V technology deployments. 
Furthermore, the V2I Deployment Coalition—which includes AASHTO, the Institute of Electrical and Electronics Engineers, and the Institute of Transportation Engineers—intends to lead the effort to develop and support publishing of V2I standards, guidelines, and test specifications to support interoperability. To facilitate standardization among potential state users of V2I technologies, FHWA is currently developing deployment guidance as discussed previously. According to DOT, that guidance will include specifications to ensure interoperability and to assist state and local agencies in making appropriate investment and implementation decisions for those agencies that will deploy, operate, and maintain V2I systems. In addition to developing V2I standards across the United States, five experts we interviewed mentioned the importance of international harmonization for V2I technologies. Auto manufacturer experts recognized the importance of developing standards at both a domestic and international level because cars are manufactured globally. However, this is a challenge because international standardization organizations, including those in Europe and Japan, have different verification and validation processes than the United States, according to an auto manufacturer expert. Furthermore, another expert noted that harmonization of standards depends on each country's or regional government's regulations, and since there are different views on the role of these regulations in Europe, Japan, and the United States, achieving global standards will be complex. According to DOT, the joint standardization of connected vehicle systems (V2V and V2I) is a core objective of European Union-U.S. cooperation on ITS, and U.S.-Japan staff exchanges have been invaluable in building relationships and facilitating technical exchange, thus creating a strong foundation for ongoing collaboration and research.
According to DOT officials, even when identical standards are not viable across multiple countries or regions due to technical or legal differences, maximizing similarities can increase the likelihood that common hardware and software can be used in multiple markets, reducing costs and accelerating deployment. According to officials from one Japanese auto manufacturer we interviewed, developing a standard message set for V2I communications in Japan was a long and challenging process that took over 5 years of discussion among auto manufacturers. According to DOT, for connected vehicle technologies to function safely, security and communications infrastructure need to enable and ensure the trustworthiness of messages between vehicles and infrastructure. The source of each message needs to be trusted, and message content needs to be protected from outside interference or attacks on the system's integrity. A DOT study we reviewed and the majority of the experts we interviewed cited data security challenges ranging from securing messages delivered to and from vehicle devices and infrastructure to managing security credentials and associated policies for accessing data and the system. Fourteen of 21 experts we interviewed cited securing data as a significant challenge to the deployment of V2I technologies. For example, experts from 5 states and one local agency that operated V2I test beds told us they were uncertain how vehicle and infrastructure data would be stored and secured for a larger deployment of V2I technologies because they have only tested V2I applications in limited, small-scale deployments. Most of these experts were also unsure whether current data security efforts could be scaled to a larger deployment. According to DOT officials, they are currently researching this area. DOT and industry have taken steps to develop a security framework for all connected vehicle technologies, including V2I.
DOT, along with automakers from CAMP, is testing and developing the Security Credential Management System (SCMS) to ensure that basic safety messages are secure and come from an authorized device. More than half of the experts we interviewed expressed a variety of concerns about (1) the SCMS system, including whether SCMS can ensure a trusted and secure data exchange, and (2) who will ultimately manage the system. To solicit input on these issues, DOT issued a Request for Information in October 2014 to obtain feedback on developing the organizational and operating structure for SCMS. In our previous work on V2V, we found that as part of its research on the security system, DOT had identified three potential models—federal, public-private, and private. We previously found that if a federal model were pursued, according to DOT, the federal government would likely pursue a service contract that would include specific provisions to ensure adequate market access, privacy and security controls, and reporting and continuity of services. We also reported that under a public-private partnership, the security system would be jointly owned and managed by the federal government and private entities. At the time of our prior report, DOT officials stated that its legal authority and resources had led NHTSA to focus primarily on working with stakeholders to develop a viable private model, involving a privately owned and operated security-management provider. According to DOT officials, the agency is expanding the scope of its planned policy research to enable the Department to play a more active leadership role in working with V2V and V2I stakeholders to develop and prototype a private, multi-stakeholder organizational model for a V2V SCMS. Officials said that such a model would ensure organizational transparency and fair representation of stakeholders and would permit the federal government to play an ongoing advisory role.
A central component of the Department’s planned policy research is the development of policies and procedures that could govern an operational SCMS, including minimum standards to ensure security and appropriately protect consumer privacy. Currently, NHTSA is reviewing comments on the management and organization for SCMS to inform its V2V Notice of Proposed Rulemaking, expected to be submitted for Office of Management and Budget review by the end of 2015. In addition, according to DOT’s Connected Vehicle Pilot Deployment Program request for proposals, participating state and local agencies will utilize SCMS as a tool to support deployment security, which will allow states, local agencies, and private sector firms an opportunity to test capabilities in a real-world setting. Ultimately, when asked about the sufficiency of SCMS, almost half of the experts we interviewed (10 of 21) indicated they were confident that a secure system for V2I could be developed. According to FHWA, a secure system is essential to appropriately protect the privacy of V2I users. Nine of the experts identified privacy as a significant challenge for the deployment of V2I technologies. For example, the public may perceive that their personal information could be exposed or their vehicle could be tracked using connected vehicle technologies. In a connected vehicle environment, various organizations—federal, state, and local agencies; academic organizations; and private sector firms—potentially may have access to data generated by V2I technologies in order to, for example, manage traffic and conduct research. DOT has taken some steps to mitigate security and privacy concerns related to V2V and V2I technologies. 
According to DOT officials, the safety message will be broadcast in a very limited range (approximately 300 meters) and will not contain any information that identifies a specific driver, owner, or vehicle (through vehicle identification numbers or license plate or registration information). The messages transmitted by DSRC devices (such as roadside units) in support of V2V and V2I technologies also will be signed by security credentials that change on a periodic basis (currently expected to be every 5 minutes) to minimize the risk that a third party could use the messages as a basis for tracking the location or path of a specific individual or vehicle. Additionally, with respect to V2I technologies, DOT, car manufacturers, and V2I suppliers plan to incorporate privacy by design into V2I technologies. Under this approach, according to DOT, V2I data will be aggregated and anonymized. Also, NHTSA is currently conducting a V2V privacy risk assessment and intends to publish a Privacy Impact Assessment in connection with its V2V Notice of Proposed Rulemaking, which is expected to include an analysis of data collected, transmitted, stored, and disclosed by the V2V system components and other entities in relation to privacy concerns. The Department expects the V2V privacy risk research and the Privacy Impact Assessment to influence the development of policies, including security and privacy policies with regard to V2I. Furthermore, according to DOT, its V2I Deployment Coalition also plans to identify privacy and data issues at the state and county level. According to Japanese officials we interviewed from the Ministry of Land, Infrastructure, Transport, and Tourism (MLIT), Japan took a number of steps to address the security and privacy of its V2I system.
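The rotating-credential idea described above can be sketched in a few lines. This is an illustration only: the roughly 5-minute rotation interval comes from the report, but the hash-based derivation, the `PseudonymRotator` class, and every other detail below are invented for the sketch and do not reflect the actual SCMS certificate design.

```python
import hashlib
import os

ROTATION_SECONDS = 300  # the report cites an expected ~5-minute rotation

class PseudonymRotator:
    """Illustrative time-windowed pseudonym scheme (NOT the actual SCMS).

    The vehicle holds a random secret that is never transmitted; the
    identifier it attaches to messages is derived from that secret plus
    the current 5-minute window, so identifiers from different windows
    cannot be linked by an eavesdropper who lacks the secret.
    """

    def __init__(self) -> None:
        self._secret = os.urandom(32)  # per-vehicle secret, kept private

    def pseudonym(self, unix_time: int) -> str:
        window = unix_time // ROTATION_SECONDS
        digest = hashlib.sha256(self._secret + window.to_bytes(8, "big"))
        return digest.hexdigest()[:16]

vehicle = PseudonymRotator()
same_window = vehicle.pseudonym(1000) == vehicle.pseudonym(1100)  # one window
rotated = vehicle.pseudonym(1000) != vehicle.pseudonym(400_000)   # new window
```

Within one window the identifier is stable (messages can be attributed to one credential), but across windows it changes, which is the tracking-resistance property the rotating DSRC credentials aim for.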
First, Japan’s Intelligent Transportation Systems Technology Enhancement Association is responsible for managing the security of their V2I systems, and developed a system that used encryption to maintain security and ensure privacy. More specifically, each vehicle participating in V2I is assigned a changing, random identification number each time the vehicle started, thus making it difficult to track the vehicle over time. MLIT officials also noted that data generated from each vehicle is not stored permanently, but rather saved for distinct time frames depending on its use. Further, MLIT officials stated that security is ensured because V2I information is protected, anonymous, non-identifiable, and not shared with outside organizations; rather, it is used solely for public safety purposes. According to the National Police Agency officials, no significant security issue has occurred with V2I technologies as of July 2015. Because V2I data will initially provide alerts and warning messages to drivers, the ultimate effectiveness of these technologies, especially as it relates to safety, depends on how well drivers respond to the warning messages. In a November 2013 report on V2V technologies, we found that addressing human factors that affect how drivers will respond included (1) minimizing the risk that drivers could become too familiar with or overly reliant upon warnings over time and fail to exercise due diligence in responding to them, (2) assessing the risk that warnings could distract drivers and present new safety issues, and (3) determining what types of warnings will maximize driver response. Seven of the 21 experts we interviewed identified human factors issues as significant to V2I deployment. To address these concerns, DOT is participating in a number of research efforts to determine the effects of new technologies on driver distraction. 
To further examine the effects on drivers using V2I applications, NHTSA has a research program in place to develop human factors principles that may be used by automobile manufacturers and suppliers as they design and deploy V2I technology and other driver-vehicle interfaces that provide warnings to drivers. In addition, DOT’s ITS-JPO is funding NHTSA and FHWA research to investigate human factors implications for V2I technologies. Furthermore, according to DOT, the Connected Vehicle Pilot Program will allow additional opportunities to review drivers’ reactions to V2I messages using cameras and driver vehicle data on speed, braking, and other metrics. Eleven of the 21 experts we interviewed identified uncertainty related to potential liability in the event of a collision involving vehicles equipped with V2I technologies as a challenge. In our November 2013 report on V2V, an auto manufacturer expert said that it could be harder to determine whether fault for a collision between vehicles equipped with connected vehicle technologies lies with one of the drivers, an automobile manufacturer, the manufacturer of a device, or another party. According to DOT officials, it is unlikely that either V2I or V2V technologies will create significant liability exposure for the automotive industry, as DOT expects auto manufacturers will contractually limit their potential liability for integrated V2I and V2V applications and third-party services. However, according to DOT, V2I applications using data received from public infrastructure may create potential new liability risks to various infrastructure owners and operators—state and local governments, railroads, bridge owners, and roadway owners—because such cases often are brought against public or quasi-public entities and not against vehicle manufacturers. According to DOT, this liability will likely be the same as existing liability for traffic signals and variable message signs. 
DOT officials, stakeholders representing state officials and private sector entities, and experts we interviewed stated that the deployment of V2I technologies and applications is expected to result in a variety of benefits to users. Experts identified safety, mobility, operational, and environmental benefits as the potential benefits of V2I. Safety: Eleven of 21 experts identified safety as one of the primary benefits of V2I technologies. This included 6 of the 8 state and local agencies we interviewed. According to Japanese officials we interviewed, Japan has realized safety benefits from its deployment of V2I infrastructure. For example, in an effort to prevent rear-end collisions, Japan installed V2I infrastructure that detected and warned motorists of upcoming congestion on an accident-prone curve on an expressway in Tokyo. According to Japanese officials, this, combined with other measures such as road marking, led to a 60-percent reduction in rear-end collisions on this curve. Mobility: In interviews, 8 of 21 experts identified mobility as one of the primary benefits of V2I, including 6 of the 8 state and local agencies we interviewed. Officials in three states we interviewed noted that they are focusing on V2I applications that have the potential to increase mobility. These applications could allow transportation system managers to identify and address congestion in real time, as well as provide traffic signal priority to certain types of vehicles, such as emergency responders or transit. For example, Japanese officials estimated that as the use of electronic tolling rose to nearly 90 percent of vehicles on expressways, tollgate congestion was nearly eliminated on certain expressways. Operations: In interviews, 7 of 21 experts, including 4 of 8 state and local agencies, identified the potential for V2I applications to provide operational benefits or cost savings.
For example, one state agency noted that using data collected from vehicles could allow transportation managers to more easily monitor pavement conditions and identify potholes (typically a costly and resource-intensive activity). DOT and the National Cooperative Highway Research Program have also noted that the visibility and enhanced data on current traffic and road conditions provided by V2I applications would provide operational benefits to state and local transportation managers. This result, in turn, could provide safety or other benefits to drivers. For example, officials in Japan told us that by using data collected from vehicles through the ITS infrastructure, they were able to identify 160 locations in which drivers were braking suddenly. After investigating the cause, officials took steps to address safety issues at these sites (such as trimming trees that created visual obstructions), and incidents of sudden braking decreased by 70 percent and accidents involving injuries or fatalities decreased by 20 percent. In addition, the Japanese government partnered with private industry to collect and analyze vehicle probe data to help the public determine which roads were passable following an earthquake. Environment: Of the experts we interviewed, 4 of 21 identified environmental benefits as a primary benefit of V2I technologies, with some noting interconnections among safety, mobility, and environmental benefits. For example, officials from two state agencies we interviewed stated that improving safety and mobility will lead to environmental benefits because there will be less stop-and-go traffic. Indeed, Japanese officials estimated that decreased tollgate congestion reduced CO2 emissions by approximately 210,000 tons each year.
Although V2I applications are being developed for the purpose of providing safety, mobility, operational, and environmental benefits, the extent to which V2I benefits will be realized is currently unclear because of the limited data available and the limited deployment of V2I technologies. To date, only small research deployments have occurred to test connected vehicle technologies. However, DOT has commissioned or conducted some studies to estimate potential V2I benefits, particularly with respect to safety and the environment. NHTSA used existing crash data and estimated that, in combination, V2V and V2I could address up to 81 percent of crashes involving unimpaired drivers. Similarly, in 2012, a study commissioned by FHWA used existing crash data and estimated the number, type, and costs of crashes that could be prevented by 12 different V2I applications. This study estimated that the 12 V2I applications would prevent 2.3 million crashes annually (representing 59 percent of single-vehicle crashes and 29 percent of multi-vehicle crashes and comprising $202 billion in annual costs). With respect to the environment, DOT contracted with Booz Allen Hamilton to develop an initial benefit-cost analysis for its environmental applications, with the goal of informing DOT's future work and prioritization of certain applications. As part of the next phase of this work, Booz Allen Hamilton used models to estimate potential benefits of individual applications, as well as their benefits when used in combination with other applications. NCHRP estimated operational and financial benefits that V2I applications may provide to state and local governments, such as reduced costs for crash response and cleanup; reduced need for traveler information infrastructure; reduction of infrastructure required to monitor traffic; and lower cost of pavement condition detection.
However, one of the study's major conclusions was that the data required to quantify benefits are generally not available. DOT is taking some steps to evaluate the benefits of V2I applications. For example, as part of its upcoming Connected Vehicle Pilot Deployment Program, pilot projects are expected to develop a performance-monitoring system, establish performance measures, and collect relevant data. Projects will also receive an independent evaluation of their costs and benefits; user acceptance and satisfaction; and lessons learned. In addition, organizations researching the benefits of V2I have noted that the benefits of V2I deployments may depend on a variety of factors, including the size and location of the deployment, the number of roadside units deployed, the number of vehicles equipped, and the types of applications that are deployed. A study sponsored by the University of Michigan Transportation Research Institute noted that some V2I safety applications require a majority of vehicles to be equipped before reaching optimum effectiveness, in contrast to mobility, road weather, and operations applications, which require only a small percentage of equipped vehicles before realizing benefits. Japanese government officials, as well as representatives from a private company we interviewed in Japan, noted that in some cases they have found it difficult to quantify benefits. However, DOT and the Ministry of Land, Infrastructure, Transport and Tourism of Japan established an Intelligent Transportation Systems (ITS) Task Force to exchange information and identify areas for collaborative research to foster the development and deployment of ITS in both the United States and Japan. According to DOT, evaluation tools and methods are high-priority areas for the task force, and DOT has stated that a report detailing the task force's collaborative research on evaluation tools and methods will be published in 2015.
In addition, 8 of the 21 experts we interviewed noted that it can be difficult to identify benefits that are solely attributable to V2I, due to the interconnected nature of V2V and V2I technologies. However, some experts we spoke with provided examples of how connected vehicle benefits could be measured, including crash avoidance, reduction in fatalities, reduced congestion, and reduced travel times. The costs for the deployment of a national V2I system are unclear because current cost data for V2I technology are limited due to the small number of test deployments thus far. According to DOT officials, experts, and other industry stakeholders we spoke to, there are two primary resources for estimating V2I deployment costs: AASHTO's National Connected Vehicle Footprint Analysis (2014) and the National Cooperative Highway Research Program's (NCHRP) 03-101 Costs and Benefits of Public-Sector Deployment of Vehicle-to-Infrastructure Technologies (2013). However, the cost estimates in both reports are based on limited available data from small research test beds. As a result, neither report contains an estimate for the total cost if V2I were to be deployed at a national level. Despite these limitations, the cost estimates in these two studies are cited by several experts and industry stakeholders, including DOT. According to DOT, these cost figures may be useful to agencies considering early deployments. According to AASHTO and NCHRP, the costs of V2I deployment will likely consist of two types. First, V2I will require non-recurring costs—the upfront, initial costs required to deploy the infrastructure. According to AASHTO, there are two primary non-recurring cost categories associated with V2I deployments: Infrastructure deployment costs include the costs for planning, acquiring, and installing the V2I roadside equipment.
State and local agencies will need to evaluate the costs for planning and design that may include mapping intersections and deciding where to deploy the DSRC radios based on traffic and safety analyses, according to AASHTO. Deployment costs will include the cost of acquiring the equipment, including the roadside unit. AASHTO estimates that the total equipment costs would be $7,450 per site, with $3,000 attributed to each roadside unit, on average. However, 4 of the experts we interviewed stated that the cost estimates for the hardware are likely to decrease over time, as the technology matures and the market becomes more competitive. The total average cost for installation of the equipment per site includes the costs of labor and inspection. In addition, deployment costs may include the cost of upgrading traffic signal controllers. AASHTO estimates that approximately two thirds of all controllers in the United States will need to be upgraded to support connected vehicle activities. Backhaul costs refer to the costs for establishing connectivity for communication between roadside units and back offices or traffic management centers (TMCs). As discussed, backhaul includes the fiber optic cables connecting traffic signals to the back office, as well as any sensors or relays that link to or serve these components. According to NCHRP, backhaul will be one of the biggest components of costs. In fact, three state agencies and one supplier we spoke with referred to backhaul as a factor that will affect costs for V2I deployment. Backhaul costs are also uncertain because states vary in the extent to which they have existing backhaul. According to AASHTO, some sites may only require an upgrade to their current backhaul system to support expected bandwidth requirements for connected vehicle communications. However, 40 percent of all traffic signals have either no backhaul or will require new systems, according to AASHTO. 
The difference in cost between tying into an existing fiber-optic backhaul and installing a new fiber-optic backhaul for the sites is significant, according to DOT. The average national cost to upgrade backhaul to a DSRC roadside site is estimated to vary from $3,000, if a site has sufficient backhaul and will only need an upgrade, to $40,000, if the V2I site requires a completely new backhaul system, according to AASHTO estimates. The total potential average, non-recurring costs of deploying connected vehicle infrastructure per site, according to DOT and AASHTO, are $51,650 (see table 1). Second, V2I will also require recurring costs—the costs required to operate and maintain the infrastructure. According to AASHTO, there are several types of recurring costs associated with V2I deployments, including equipment maintenance and replacement, security, and personnel costs. The amount of maintenance needed to keep roadside units running is unclear, according to 3 of the experts we interviewed, because the test bed deployments have generally not operated long enough to warrant maintenance of the equipment. However, NCHRP estimates that routine maintenance costs for roadside units would likely vary from 2 to 5 percent of the original hardware and labor costs. This includes such maintenance as realigning antennas and rebooting hardware. AASHTO also estimates that the device would need replacing every 5 to 10 years. In addition, states and localities may also need to hire new personnel or train existing staff to operate these systems. According to AASHTO, personnel costs will also depend on the size of the deployment as smaller deployments may not need dedicated personnel to complete maintenance, while large deployments may require staff dedicated to system monitoring on site or on call. 
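The non-recurring figures above can be combined in a simple per-site roll-up. In the sketch below, the equipment figure and the backhaul range are the AASHTO numbers cited in the text, while the installation-and-upgrade line is a hypothetical placeholder (table 1's full breakdown is not reproduced here), so the totals are illustrative rather than AASHTO's $51,650 estimate.

```python
def site_nonrecurring_cost(equipment: float, install_and_upgrades: float,
                           backhaul: float) -> float:
    """Sum the one-time (non-recurring) cost components for one V2I site."""
    return equipment + install_and_upgrades + backhaul

EQUIPMENT = 7_450            # AASHTO's average total equipment cost per site
INSTALL_PLACEHOLDER = 4_000  # hypothetical; not an AASHTO figure

# Backhaul drives the spread: an upgrade-only site vs. entirely new fiber.
low = site_nonrecurring_cost(EQUIPMENT, INSTALL_PLACEHOLDER, 3_000)    # 14,450
high = site_nonrecurring_cost(EQUIPMENT, INSTALL_PLACEHOLDER, 40_000)  # 51,450
```

The $37,000 spread between the two backhaul scenarios illustrates why states' existing fiber infrastructure is such a large driver of per-site deployment cost.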
Furthermore, security costs will be recurring and include the costs of keeping the security credentials of the SCMS up to date and the costs to manage the security system, according to AASHTO. Given that the SCMS is still being developed, these cost estimates are unknown. One car manufacturer we interviewed explained that because how the security system will be managed is unknown, it is extremely challenging to estimate future costs. In addition, one county agency official said security costs could greatly affect the total costs for V2I deployment because the requirements and funding responsibility are not clearly defined. As part of its ANPRM, NHTSA conducted an assessment of preliminary V2V costs, including costs for the SCMS. NHTSA estimated that SCMS costs per vehicle range from $1 to $6, with an average of $3.14. SCMS costs will increase over time due to the need to support an increasing number of vehicles with V2V technologies, according to NHTSA. While AASHTO and NCHRP have estimated the above potential average costs for various components associated with a V2I deployment, 10 of 21 experts stated that it is difficult to determine the actual costs for a V2I deployment in a particular state or locality due to a number of factors. First, the scope of the deployment will affect the total costs of a region's V2I deployment, according to NCHRP, because it will determine the amount of equipment needed for the system to function, including the number of roadside units. Previous test bed deployments have varied in size, ranging from 1 to 2,680 DSRC roadside units. Further, the number of devices needed will depend on which applications are being enabled.
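As a rough illustration, the recurring figures cited above (NCHRP's 2 to 5 percent maintenance rule of thumb, AASHTO's 5-to-10-year replacement cycle, and NHTSA's $3.14 average per-vehicle SCMS estimate) can be combined into an annual range. Treating the maintenance percentage as a per-year figure is an assumption, and the hardware-and-labor base and fleet size below are placeholders, not figures from the reports.

```python
# Illustrative annual recurring-cost range for one roadside site.
# Maintenance: 2-5 percent of original hardware and labor costs (NCHRP),
# assumed here to be an annual figure. Replacement: straight-line
# amortization of a $3,000 roadside unit over a 5-10 year life (AASHTO).
def annual_recurring_range(hardware_and_labor, unit_cost=3000):
    low = 0.02 * hardware_and_labor + unit_cost / 10   # 2% upkeep, 10-yr life
    high = 0.05 * hardware_and_labor + unit_cost / 5   # 5% upkeep, 5-yr life
    return low, high

# Placeholder base of $10,000 in original hardware and labor costs.
low, high = annual_recurring_range(10000)
print(low, high)

# Fleet-side security cost, using NHTSA's $3.14 average SCMS estimate
# applied to a hypothetical 100,000-vehicle fleet.
print(3.14 * 100000)
```

Even this simplified range shows why experts describe total recurring costs as highly sensitive to deployment size and assumptions.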
For example, while a curve-speed-warning application may require installing equipment at a specific location, applications that aim to mitigate congestion by advising drivers of the best speed to approach an intersection may need to be installed at several intersections throughout an urban corridor. One state agency said that a factor that could affect costs is how often roadside equipment needs to be replaced to enable certain V2I applications. In addition, as previously mentioned, the size of the deployment will contribute to personnel costs. Second, the state or locality's deployment environment will affect its deployment costs. One state agency pointed out that each deployer's costs will be different because deployments will occur in environments with differing levels of existing infrastructure. For example, as previously noted, the region's existing backhaul infrastructure will determine the extent of the cost of installing or upgrading the region's system, including whether a city or state already has fiber optics installed or whether signal controllers need upgrading. Lastly, the maturity of the technology will also affect cost estimates for equipment such as a DSRC radio. Estimating equipment costs is difficult at this time because the technology is still developing, according to NCHRP. Ten of the 21 experts we interviewed, including all of the state agencies, also mentioned that estimating costs is challenging because the technology is still immature. Furthermore, the reports and 4 experts we interviewed agree that the cost estimates for the hardware are likely to decrease over time as the technology matures and the market becomes more competitive. As part of the upcoming Connected Vehicle Pilot Deployment Program, DOT developed the Cost Overview for Planning Ideas and Logical Organization Tool (CO-PILOT). This tool generates high-level cost estimates for 56 V2I applications based on AASHTO's estimates.
In addition, according to DOT, the agency will work with AASHTO to develop a life-cycle cost tool that agencies can use to support V2I deployment beyond the Connected Vehicle Pilot Deployment Program. DOT officials also indicated that they plan to update the tool over time as more data are collected from the Connected Vehicle Pilot Deployment Program, and they expect the tool to be available for use by 2016. Also, as previously mentioned, FHWA is developing deployment guidance that will outline potential sources of funding for states and localities, among other things. We provided a draft of this product to the Secretary of Transportation, the Secretary of Commerce, and the Chairman of the FCC for review and comment. DOT and Commerce's NTIA both provided technical comments via email, which we incorporated as appropriate. FCC did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will send copies of this report to the Secretary of Transportation, the Chairman of the Federal Communications Commission, the Administrator of the National Telecommunications and Information Administration, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
To address all of our objectives, we reviewed documentation relevant to the vehicle-to-infrastructure (V2I) technology research efforts of the Department of Transportation (DOT), state and local governments, and the automobile industry, such as DOT's Federal Highway Administration 2015 V2I Draft Deployment Guidance and Products and AASHTO's National Connected Vehicle Field Infrastructure Footprint Analysis, as well as documentation on completed and ongoing research. We interviewed officials from DOT's Office of the Assistant Secretary for Research and Technology, Intelligent Transportation Systems-Joint Program Office (ITS-JPO), Federal Highway Administration (FHWA), National Highway Traffic Safety Administration (NHTSA), and Volpe National Transportation Systems Center about these efforts. For all objectives, we developed a structured set of questions for our interviews with 21 experts who represented domestic automobile manufacturers, V2I device suppliers, state and local governments, privacy experts, standardization organizations, and academic researchers with relevant expertise. The identified experts have varying degrees of expertise in the following areas related to V2I technology: the production of passenger vehicles, technology development, technology deployment, data privacy, security, state agency deployment, and legal and policy issues. Our starting point for expert selection was a list of experts originally created in January 2013 by the National Academy of Sciences for GAO's vehicle-to-vehicle (V2V) report. We used this list for our initial selection because V2V and V2I technologies are both connected vehicle technologies with many similarities, and many V2V stakeholders are also working on V2I. In addition to the nine experts we selected from the National Academy of Sciences list, we selected 12 additional experts based on the following factors: 1. their personal involvement in the deployment of V2I technologies; 2.
recommendations from federal agencies (DOT and the Federal Communications Commission (FCC)) and associations (such as the American Association of State Highway and Transportation Officials (AASHTO)); and 3. experts' involvement in professional affiliations, such as a V2I consortium or groups dedicated to these technologies or to a specific challenge affecting V2I (e.g., privacy). Table 2 lists the experts we selected. In conducting our structured interviews, we used a standardized interview instrument to ensure that we asked all of the experts the same questions. During these interviews we asked, among other things, for expert views on the state of development and deployment of V2I technologies (including DOT's role in this process), the potential benefits of V2I technologies, and their potential costs. We also asked for each expert's views on a number of defined potential challenges facing the deployment of V2I technologies and asked the experts to rate the significance of each challenge using a three-point scale (significant challenge, moderate challenge, or slight challenge). We developed this list of potential challenges through initial interviews with DOT, industry associations, and other interest groups knowledgeable about V2I technologies. Prior to conducting the interviews, we tested the structured interview with one association to ensure our questions were worded appropriately. After conducting these structured interviews, we summarized expert responses relevant to each objective. The viewpoints gathered through our expert interviews represent the viewpoints of the individuals interviewed and cannot be generalized to a broader population.
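As a sketch of how responses on the three-point scale can be summarized across the 21 interviews, the tally below uses entirely hypothetical ratings for a single challenge; the actual ratings appear in table 3.

```python
from collections import Counter

# Hypothetical ratings for one potential challenge from the 21 structured
# interviews; "no rating" stands in for experts who declined to rate.
ratings = (["significant"] * 12 + ["moderate"] * 5
           + ["slight"] * 2 + ["no rating"] * 2)

tally = Counter(ratings)
print(tally.most_common())
```

A tally like this, one per challenge, is the kind of summary that a table such as table 3 reports.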
For the purpose of this review, state and local agency officials were considered experts because of their experience in deploying and testing V2I technologies and their experience working with the required technologies (DSRC equipment and software); the decision process (funding and scheduling); personnel requirements and skill sets needed for deployment; and operations and maintenance. We specifically included six officials who deployed V2I test beds in their respective states in our pool of expert interviews. We also included two officials who had studied V2I for several years, had taken part in AASHTO's Connected Vehicle group, and had applied to DOT's prior Connected Vehicle Pilot Program (V2I test bed). We also interviewed additional officials who have contributed to the U.S. efforts to develop and deploy connected vehicle technologies—officials who we refer to as "stakeholders." Specifically, we used these stakeholders to help us understand issues that informed our structured set of questions, but did not administer the structured question set during these stakeholder interviews. We primarily selected stakeholders based on recommendations from DOT and industry associations. However, we also included DOT as a stakeholder in the deployment of V2I technologies because it is leading federal V2I efforts. We interviewed officials from 17 V2I stakeholder organizations, including:
1. DOT, NHTSA
2. DOT, Office of the Assistant Secretary for Research and Technology
3. DOT, FHWA
4. DOT, Volpe National Transportation Systems Center
5. DOT, Chief Privacy Officer
6. National Telecommunications and Information Administration (NTIA)
9. Intelligent Transportation Society of America (ITS America)
10. Crash Avoidance Metrics Partners, LLC (CAMP)
11. Institute of Electrical and Electronics Engineers (IEEE)
12. National Cooperative Highway Research Program (NCHRP)
13. Leidos, previously known as Science Applications International Corporation (SAIC)
14. Virginia Tech Transportation Institute
15. Virginia Department of Transportation
16. Minnesota Department of Transportation
17. Road Commission for Oakland County, Michigan

To determine the status of development and deployment of V2I technology, we interviewed officials from DOT, including the Office of the Assistant Secretary for Research and Technology, ITS-JPO, FHWA, the Volpe National Transportation Systems Center, and NHTSA. We also interviewed officials at all seven V2I test beds, located in Virginia, Michigan, Florida, Arizona, California, and New York. We conducted site visits to three test beds—the Safety Pilot in Ann Arbor, Michigan, and the test beds in Southeast Michigan and Northern Virginia—selected because they had the most advanced technology, according to DOT and state officials. At these site visits, we interviewed officials from state and local transportation agencies and academic researchers to collect information on developing and deploying V2I technology. We also visited FHWA's Turner-Fairbank Highway Research Center in Virginia to understand the agency's connected vehicle research efforts. We reviewed documentation of the efforts of DOT and automobile manufacturers related to V2I technologies, such as FHWA's 2015 V2I Draft Deployment Guidance and Products and documentation on completed and ongoing research. We identified materials published in the past 4 years that were related to the terms "vehicle-to-infrastructure" and "V2I" through searches of bibliographic databases, including Transportation Research International Documentation and WorldCat. While a variety of V2I technologies exist for transit and commercial vehicles, for the purpose of this report we limited our scope to passenger vehicles, since much of DOT's connected vehicle work is focused on passenger vehicles.
To determine the challenges affecting the deployment of V2I technology and DOT's existing or planned actions to address potential challenges, we reviewed FHWA's V2I draft guidance to assist in planning for future investments in and deployment of V2I systems. In addition, we interviewed officials from FCC and NTIA about challenges related to the potential for spectrum sharing in the 5.9 GHz band. We interviewed DOT's Chief Privacy Officer, two privacy experts, and several stakeholders to understand privacy concerns regarding the deployment of V2I technologies. We collected information on anticipated benefits of these technologies through interviews with officials from DOT, automobile manufacturers, industry associations, experts identified by the National Academy of Sciences, and other stakeholders, and through reviews of studies they provided. To specifically address the potential costs associated with V2I technologies, we analyzed two reports that addressed acquisition, installation, backhaul, operations, and maintenance costs: AASHTO's National Connected Vehicle Field Infrastructure Footprint Analysis and NCHRP's report 03-101, Costs and Benefits of Public-Sector Deployment of Vehicle-to-Infrastructure Technologies. According to DOT officials and other stakeholders we interviewed, those two reports were the primary sources of information for potential V2I deployment cost estimates and actual costs. We used V2I cost estimates from the AASHTO Footprint Analysis to give examples of potential costs for deployment. To further assess the reliability of the cost estimates, in addition to our own review of the two reports, our internal economist independently reviewed both reports, and we subsequently interviewed representatives from AASHTO and NCHRP to verify the scope and methodology of the cost analyses performed in both reports.
In addition, we discussed estimated costs and factors that affected costs for V2I investments with experts and stakeholders from federal, state, and local government; academia; car manufacturers; industry associations; and V2I suppliers. We determined that the actual cost figures were reliable and suitable for the purpose of our report. In addition to the above work, we selected Japan for a site visit because of its nationwide deployment and years of experience with deployment and maintenance of V2I technologies. Japan has led efforts in V2I technology development and deployment for over two decades. The country serves as an illustrative example from which to draw information on potential benefits, costs, and challenges of deploying V2I technologies in the United States. During our site visit, we interviewed Japanese government officials and auto manufacturers on topics similar to those we discussed with U.S. experts, including V2I deployment efforts, benefits, costs, and challenges. Specifically, we met with officials from the following Japanese government organizations:
Cabinet Secretariat (IT Strategy)
Cabinet Office (Council for Science and Technology Policy)
Ministry of Land, Infrastructure, Transport and Tourism (MLIT)
o Road Bureau
o Road Transport Bureau
Ministry of Internal Affairs and Communications (MIC)
National Police Agency (NPA)
Ministry of Economy, Trade and Industry (METI)

We conducted this performance audit from July 2014 through September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
As part of our review, we conducted structured interviews with 21 individuals identified as experts on vehicle-to-infrastructure (V2I) technologies by the National Academy of Sciences or selected based on other factors discussed in our scope and methodology (see table 2 in app. I for a list of experts interviewed). During these interviews we asked, among other things, for each expert's views on a number of defined potential challenges facing the deployment of V2I technologies. The ratings provided by the experts for each of the potential challenges discussed are shown in table 3 below. To inform our discussion of the challenges facing the deployment of V2I technologies, we considered these ratings as well as experts' responses to open-ended questions.

In addition to the contact named above, Susan Zimmerman, Assistant Director; Nelsie Alcoser; David Hooper; Crystal Huggins; Amber Keyser; Nancy Santucci; Terence Lam; Josh Ormond; Amy Rosewarne; and Elizabeth Wood made key contributions to this report.
Over the past two decades, automobile crash-related fatality and injury rates have declined by over 34 and 40 percent, respectively, due in part to improvements in automobile safety. To further improve traffic safety and provide other transportation benefits, DOT is promoting the development of V2I technologies. Among other things, V2I technologies would allow roadside devices and vehicles to communicate and alert drivers of potential safety issues, such as when they are about to run a red light. GAO was asked to review V2I deployment. This report addresses: (1) the status of V2I technologies; (2) challenges that could affect the deployment of V2I technologies, and DOT efforts to address these challenges; and (3) what is known about the potential benefits and costs of V2I technologies. GAO reviewed documentation on V2I from DOT, automobile manufacturers, industry associations, and state and local agencies. In addition, GAO interviewed DOT, Federal Communications Commission (FCC), and National Telecommunications and Information Administration (NTIA) officials. GAO also conducted structured interviews with 21 experts from a variety of subject areas related to V2I. The experts were chosen based on recommendations from the National Academy of Sciences and other factors. DOT, NTIA, and the FCC reviewed a draft of this report. DOT and NTIA provided technical comments, which were incorporated as appropriate. FCC did not provide comments. Vehicle-to-infrastructure (V2I) technologies allow roadside devices to communicate with vehicles and warn drivers of safety issues; however, these technologies are still developing. According to the Department of Transportation (DOT), extensive deployment may occur over the next few decades. DOT; state and local transportation agencies; researchers; and private-sector stakeholders are developing and testing V2I technologies through test beds and pilot deployments.
Over the next 5 years, DOT plans to provide up to $100 million through its Connected Vehicle Pilot Deployment Program for projects that will deploy V2I technologies in real-world settings. DOT and other stakeholders have also provided guidance to help state and local agencies pursue V2I deployments, since it will be up to these agencies to voluntarily deploy V2I technologies. According to experts and industry stakeholders GAO interviewed, a variety of challenges may affect the deployment of V2I technologies, including: (1) ensuring that possible sharing with other wireless users of the radio-frequency spectrum used by V2I communications will not adversely affect V2I technologies' performance; (2) addressing state and local agencies' lack of resources to deploy and maintain V2I technologies; (3) developing technical standards to ensure interoperability; (4) developing and managing data security and addressing public perceptions related to privacy; (5) ensuring that drivers respond appropriately to V2I warnings; and (6) addressing the uncertainties related to potential liability issues posed by V2I. DOT is collaborating with the automotive industry and state transportation officials, among others, to identify potential solutions to these challenges. The full extent of V2I technologies' benefits and costs is unclear because test deployments have been limited thus far; however, DOT has supported initial research into the potential benefits and costs. Experts GAO spoke to and research GAO reviewed indicate that V2I technologies could provide safety, mobility, environmental, and operational benefits, for example by: (1) alerting drivers to potential dangers, (2) allowing agencies to monitor and address congestion, and (3) providing driving and route advice. V2I costs will include the initial non-recurring costs to deploy the infrastructure and the recurring costs to operate and maintain it.
While some organizations have estimated the potential average costs for V2I deployments, actual costs will depend on a variety of factors, including where the technology is installed, and how much additional infrastructure is needed to support the V2I equipment.
Medicare covered approximately 54 million beneficiaries in fiscal year 2014 at an estimated cost of $603 billion. The program consists of four parts, Parts A through D. In general, Part A covers hospital and other inpatient stays, and Part B covers hospital outpatient and physician services, durable medical equipment, and other services. Together, Parts A and B are known as traditional Medicare or Medicare fee-for-service. Part C is Medicare Advantage, under which beneficiaries receive their Medicare health benefits through private health plans, and Part D is the Medicare outpatient prescription drug benefit, which is administered through private drug plans. Medicare beneficiaries who enroll in Part C or Part D plans receive separate cards from those plans, in addition to their traditional Medicare card. Generally, an individual's eligibility to participate in Medicare is initially determined by the Social Security Administration, based on factors such as age, work history, contributions made to the programs through payroll deductions, and disability. Once the Social Security Administration determines that an individual is eligible, it provides information about the individual to CMS, which prints and issues a paper Medicare card to the beneficiary. Providers must apply to enroll in Medicare to become eligible to bill for services or supplies provided to Medicare beneficiaries. CMS has enrollment standards and screening procedures in place that are designed to ensure that only qualified providers can enroll in the program and to prevent enrollment by entities that might attempt to defraud Medicare. Under Medicare fee-for-service, providers bill Medicare by submitting claims for reimbursement for the services and supplies they provide to beneficiaries. Providers are not issued identification cards, but instead use an assigned unique provider identification number—their National Provider Identifier (NPI)—on each claim.
Electronically readable cards could be implemented for a number of different purposes in Medicare. We identified three key proposed uses:
Authenticating beneficiary and provider presence at the point of care. Beneficiary and provider cards could be used for authentication to potentially help limit certain types of Medicare fraud, as CMS could use records of the cards being swiped to verify that they were present at the point of care. Using electronically readable cards for authentication would not necessarily involve both beneficiaries and providers, as cards could be used solely to authenticate beneficiaries, or solely to authenticate providers.
Electronically exchanging beneficiary medical information. Beneficiary cards could be used to store and exchange medical information, such as electronic health records, beneficiary medical conditions, and emergency care information, such as allergies. Provider cards could also be used as a means to authenticate providers accessing electronic health record (EHR) systems that store and electronically exchange beneficiary health information.
Electronically conveying beneficiary identity and insurance information to providers. Beneficiary cards could be used to auto-populate beneficiary information into provider IT systems and to automatically retrieve existing beneficiary records from provider IT systems. For example, an electronically readable Medicare beneficiary card could contain the identity and insurance information printed on the current paper Medicare cards—beneficiary name, Medicare number, gender, Medicare benefits, and effective date of Medicare coverage. The primary purpose of this potential use would be to improve provider record keeping by allowing providers the option to capture beneficiary information electronically.
The use of electronically readable cards for health care has been limited thus far in the United States.
According to stakeholders, the limited use is due, in part, to reluctance among the insurance industry and health care providers to invest in a technology that would depend on a significant investment from both parties to implement. However, some health insurers, including a large insurer, have issued electronically readable cards to their beneficiaries, and some integrated health systems have issued cards to patients to help manage patient clinical and administrative information. In some other countries, smart cards have been used as health insurance cards for decades. For example, France and Germany have used smart cards in their health care systems since the 1990s. Appendix II includes additional details about France's and Germany's use of smart cards. Although there is no reliable measure of the extent of fraud in the Medicare program, for over two decades we have documented ways in which fraud contributes to Medicare's fiscal problems. Preventing Medicare fraud and ensuring that payments for services and supplies are accurate can be complicated, especially since fraud can be difficult to detect because those involved are generally engaged in intentional deception. Common health care fraud schemes in Medicare include the following:
Billing for services not rendered. This can include providers billing for services and supplies for beneficiaries who were never seen or rendered care, and billing for services not rendered to beneficiaries who are provided care (such as adding a service that was not provided to a claim for otherwise legitimately provided services). In some types of fraud schemes, individuals may steal a provider's identity and submit claims for services never rendered and divert the reimbursements without the provider's knowledge.
Fraudulent or abusive billing practices.
This can include providers billing Medicare more than once for the same service; inappropriately billing Medicare and another payer for the same service; upcoding of services; unbundling of services; billing for noncovered services as covered services; billing for medically unnecessary services; and billing for services that were performed by an unqualified individual, or misrepresenting the credentials of the person who provided the services.
Kickbacks. This can include providers, provider associates, or beneficiaries knowingly and willfully offering, paying, soliciting, or receiving anything of value to induce or reward referrals or payments for services or goods under Medicare.
Among other processes, to detect potential fraud, CMS employs IT systems—including its Fraud Prevention System—that analyze claims submitted over a period of time to detect patterns of suspicious billing. CMS and its contractors investigate providers and beneficiaries with suspicious billing and utilization patterns and, in suspected cases of fraud, can take administrative actions, such as suspending payments or revoking a provider's billing privileges, or refer the investigation to the HHS Office of Inspector General for further examination and possible criminal or civil prosecution. As we have previously reported, there are three potential factors that can be used to authenticate an individual's identity: (1) "something they possess," such as a card; (2) "something they know," such as a password or personal identification number (PIN); and (3) "something they are," such as biometric information, for example, a fingerprint, or a picture ID. Generally, the more factors that are used to authenticate an individual's identity, the higher the level of identity assurance.
For example, a card used in conjunction with a PIN provides a higher level of identity authentication than a card alone, since the PIN makes it more difficult for individuals who are not the cardholder to use a lost or stolen card. NIST has issued standards for federal agencies for using electronically readable cards to achieve a high level of authentication, and those standards require robust enrollment and card issuance processes. These processes include procedures to verify an individual's identity and eligibility prior to card issuance to ensure that cards are issued to the correct individuals. For example, verifying an individual's address is an important practice for issuing cards by mail. If a significant number of cards are issued to ineligible or incorrect individuals, it undermines the utility of the cards for identity authentication. Practices that provide higher levels of identity authentication generally are more expensive and difficult to implement and maintain and may cause greater inconvenience to users than practices that provide lower levels of assurance. The level of identity authentication that is appropriate for a given application or transaction depends on the risks associated with it: the greater the determined risk, the greater the need for higher-level identity authentication practices. The Office of Management and Budget and NIST have issued guidance defining four levels of identity assurance, ranging from level 1—"little or no confidence in the asserted identity's validity"—to level 4—"very high confidence in the asserted identity's validity"—and directed agencies to use risk-based methods to decide which level of authentication is appropriate for any given application or transaction. Additionally, authentication practices should take into account issues related to cost and user acceptability.
CMS currently relies on providers to authenticate the identities of Medicare beneficiaries to whom they are providing care, but the agency does not have a way to verify whether beneficiaries and providers were actually present at the point of care when processing claims. At this point, CMS has not made a determination that a higher level of beneficiary and provider authentication is needed. The type of electronically readable card most appropriate for Medicare would depend on how the cards would be used. Three common types of electronically readable cards that could be used to replace the current printed Medicare card are smart cards, magnetic stripe cards, and bar code cards. The key distinguishing feature of smart cards is that they contain a microprocessor chip that can both store and process data, much like a very basic computer. Based on our analysis of the capability of the three types of cards, we found that while all of the cards could be used for authentication, storing and exchanging medical information, and conveying beneficiary information, the ability of smart cards to process data enables them to provide higher levels of authentication and better secure information than cards with magnetic stripes and bar codes. Our analysis found that smart cards could provide substantially more rigorous authentication of the identities of Medicare beneficiaries and providers than magnetic stripe or bar code cards (see fig. 1). Although all three types of electronically readable cards could be used for authentication, smart cards provide a higher level of assurance in their authenticity because they are difficult to counterfeit or copy. 
Magnetic stripe and bar code cards, on the other hand, are easily counterfeited or copied. For example, officials in France told us that they chose to use smart cards as their health insurance cards, in part, because they were less susceptible to counterfeiting, and reported that they have not encountered any problems with counterfeit cards. Additionally, smart cards can be implemented with a public key infrastructure (PKI)—a system that uses encryption and decryption techniques to secure information and transactions—to authenticate the cards and ensure the data on the cards have not been altered. All three types of cards could be used in conjunction with other authentication factors, such as a PIN or biometric information, to achieve a higher level of authentication. However, only smart cards are capable of performing on-card verification of other authentication factors. For example, smart cards can verify whether a user provides a correct PIN or can confirm a fingerprint match, without being connected to a separate IT system. Cards with magnetic stripes and bar codes cannot perform such on-card verification, and require a connection to a separate IT system to verify PINs or biometric information. We also determined that using electronically readable cards to store and exchange medical information would likely require the use of smart cards given their storage capacity and security features. Smart cards have a significantly greater storage capacity than magnetic stripe and bar code cards, and would be able to store more extensive medical information on the cards. However, the storage on smart cards is limited, so it is unlikely that the cards would be able to store all of a beneficiary's medical records or medical records of a larger file size, such as medical images.
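The on-card PIN verification described above can be illustrated with a minimal sketch: the card checks a PIN itself and locks after repeated failures, with no external IT system involved. The class, the retry limit, and the salted PIN digest are illustrative assumptions; a real smart card enforces this in tamper-resistant hardware, not application software.

```python
# Illustrative sketch of on-card PIN verification with a retry counter, as a
# smart card chip would perform it without contacting any external IT system.
# Class name, retry limit, and storage scheme are hypothetical assumptions;
# a real card stores only a protected form of the PIN in secure hardware.
import hashlib
import hmac
import os

class SmartCardSketch:
    MAX_ATTEMPTS = 3  # card locks itself after this many consecutive failures

    def __init__(self, pin: str):
        self._salt = os.urandom(16)
        self._pin_digest = self._digest(pin)
        self._failures = 0
        self.locked = False

    def _digest(self, pin: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), self._salt, 10_000)

    def verify_pin(self, pin: str) -> bool:
        """Verify entirely 'on card': no lookup against a separate system."""
        if self.locked:
            return False
        if hmac.compare_digest(self._digest(pin), self._pin_digest):
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.locked = True
        return False

card = SmartCardSketch(pin="4821")
print(card.verify_pin("0000"))  # False: wrong PIN
print(card.verify_pin("4821"))  # True: correct PIN resets the failure counter
```

A magnetic stripe or bar code card has no processor, so the equivalent check must happen in a connected IT system rather than on the card itself.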
In addition, smart cards could better secure confidential information, including individually identifiable health information subject to protection under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Smart cards can be implemented with PKI to perform public key encryption and authentication to secure and securely transmit any medical information on the card. Smart cards’ ability to perform on-card verification can also be used to limit access to information on the cards to better ensure that information is not accessed inappropriately. For example, beneficiaries could be required to enter a PIN for providers to access medical information on the card, while access to nonsensitive information could be allowed without beneficiaries entering a PIN. Our analysis also found that any of the three types of electronically readable cards could be used to convey beneficiary identity and insurance information to providers. Each type of card has adequate storage capacity to contain such information, and storing this type of information may not require cards with processing capabilities or security features. If beneficiary SSNs continue to serve as the main component of Medicare numbers, cards with security features would be needed to reduce the risk of identity theft. Using electronically readable cards to authenticate beneficiary and provider presence at the point of care could potentially curtail certain types of Medicare fraud, but would have limited effect since CMS has stated that it would continue to pay claims regardless of whether a card was used. Using electronically readable cards to store and exchange medical records is not part of current federal efforts to facilitate health information exchange and would likely present challenges. 
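The tiered, PIN-gated access to card data described above (open access to nonsensitive fields, PIN-protected access to medical information) can be sketched as follows. The field names, the PIN, and the two-tier split are illustrative assumptions.

```python
# Illustrative sketch of tiered access to information on a beneficiary card:
# nonsensitive identity fields are readable without a PIN, while medical
# information requires the beneficiary's PIN. All data and field names here
# are hypothetical examples, not actual Medicare card contents.

CARD_DATA = {
    "nonsensitive": {"name": "JOHN DOE", "plan": "MEDICARE PART B"},
    "medical": {"allergies": ["penicillin"], "medications": ["warfarin"]},
}
CARD_PIN = "4821"  # hypothetical beneficiary-chosen PIN

def read_card(section, pin=None):
    """Return a section of card data, gating the medical section behind a PIN."""
    if section == "medical" and pin != CARD_PIN:
        raise PermissionError("beneficiary PIN required for medical data")
    return CARD_DATA[section]

print(read_card("nonsensitive"))         # allowed without a PIN
print(read_card("medical", pin="4821"))  # allowed only with the correct PIN
```

On a real smart card this gating would be enforced by the card's own processor, combined with PKI so that only legitimate provider systems could read from or write to the card.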
Using electronically readable cards to convey identity and insurance information to auto-populate and retrieve information from provider IT systems could reduce errors in the reimbursement process and improve medical record keeping. Using electronically readable cards to authenticate beneficiary and provider presence at the point of care could potentially limit certain types of Medicare fraud. However, we could not determine the extent to which authenticating beneficiaries and providers at the point of care could limit fraud because there is no reliable estimate of the extent or total dollar value associated with specific types of Medicare fraud schemes. Stakeholders told us that authenticating beneficiaries at the point of care could potentially limit schemes in which Medicare providers misuse beneficiary Medicare numbers to bill fraudulently for services. In such schemes, providers use beneficiary Medicare numbers to bill on their behalf without having ever seen or rendered care to the beneficiaries. As of May 2014, CMS was aware of 284,000 Medicare beneficiary numbers that had been compromised and potentially used to submit fraudulent claims. Stakeholders also told us that authenticating providers at the point of care could potentially limit fraud schemes in which individuals or companies misuse an unknowing provider’s Medicare enrollment information to submit claims and divert stolen reimbursements. Adding another authentication factor, such as a PIN or a biometric factor, to a beneficiary’s card also could limit the potential for individuals to use a stolen Medicare card to obtain care or bill for services. For example, individuals attempting to use a stolen card could not pose as a beneficiary or bill for services on behalf of a beneficiary without knowing the beneficiary’s PIN. Beneficiaries would still be able to lend their card to others and tell them their PIN, though replicating a biometric factor would be more difficult. 
Examples of such Medicare fraud schemes include the following:

- Provider billed for beneficiaries that were never seen or rendered care: Two owners of a home health agency paid kickbacks to obtain information on Medicare beneficiaries and used the information to bill for home health care services that were not actually rendered.
- Provider unbundled services: A doctor performing surgeries on beneficiaries billed Medicare for individual steps involved in the surgeries, rather than the entire procedure, to fraudulently increase reimbursements.
- Provider billed for noncovered services as covered services: The owner of a medical transport company provided beneficiaries with routine, nonemergency transportation services not covered by Medicare, but billed Medicare for emergency ambulance transportation, which is covered by Medicare.

CMS officials told us that requiring cards to be used would not be feasible because of concerns that doing so would limit beneficiaries' access to care. Specifically, CMS officials told us the agency would not want to make access to Medicare benefits dependent on beneficiaries having their card at the point of care. According to CMS officials and stakeholders, there are legitimate reasons why a card may not be present at the point of care, such as when beneficiaries or providers forget their cards or during a medical emergency. Because CMS has indicated that it would still process and pay for these claims, providers submitting potentially fraudulent claims could simply not use the cards at the point of care. Some stakeholders noted that CMS could mitigate the risk of paying claims in which cards are not used by using its Fraud Prevention System or other IT systems to identify and investigate providers with suspicious billing patterns related to card use. For example, such systems could identify providers that submit an abnormally high percentage of claims in which cards are not used, which could be indicative of claims for beneficiaries who were never seen or rendered care.
However, CMS officials noted that they already use their IT systems to identify providers that bill for services for beneficiaries who were never seen or rendered care. For example, CMS analyzes billing patterns to identify and conduct postpayment investigations into providers that submit an abnormal number of claims for beneficiaries with known compromised numbers. Examples of fraud schemes involving kickbacks include the following:

- Provider paid or received kickbacks for beneficiary referrals for specific services, or for the purchase of goods or services that may be paid for by Medicare: The operator of a home health agency paid illegal kickbacks to physicians to refer beneficiaries who were not homebound or who otherwise did not qualify for home health services, resulting in fraudulent Medicare billing for home health services.
- Beneficiary solicited or received kickbacks to allow a provider to fraudulently bill for services: Two beneficiaries solicited and received kickbacks to serve as patients for a home health agency that fraudulently billed Medicare for physical therapy services.

According to stakeholders, the use of electronically readable beneficiary cards would also have little effect on many other potentially fraudulent and abusive provider billing practices. For example, use of the cards would not prevent providers from mischaracterizing services, billing for medically unnecessary services, or adding a service that was not provided to a claim for otherwise legitimate services because such fraud does not involve issues related to authentication. Instead, these types of fraud typically involve providers that wrongly bill Medicare for the care provided, or misrepresent the level or nature of the care provided. The use of electronically readable beneficiary and provider cards would also have little effect on preventing fraud that involves collusion between providers and beneficiaries because complicit beneficiaries, including those who receive kickbacks, would likely allow their cards to be misused.
Officials we spoke with in France and Germany told us that the use of electronically readable cards has not limited certain types of fraud. Officials from provider organizations and an insurance organization in Germany told us that the use of beneficiary cards does not prevent providers from fraudulently adding services that they never provided onto otherwise legitimate claims. In addition, officials from France noted that certain elderly or infirm beneficiaries may need to rely on providers to maintain custody of and use their cards, and there had been instances of providers and caretakers misusing beneficiary cards in such cases. For example, officials from an insurance organization in France noted that nurses and caretakers of elderly patients have stolen patient cards and allowed other providers to misuse them. Finally, there are also concerns that the use of an electronically readable card could introduce new types of fraud and ways for individuals to illegally access Medicare beneficiary data. For example, CMS officials said that malicious software written onto an electronically readable card could be used to compromise provider IT systems. In addition, CMS officials noted that individuals could illicitly access beneficiary information through “card skimming.” However, Medicare beneficiary data in provider IT systems may already be vulnerable to illegal access and use. Using electronically readable cards to store and exchange beneficiary medical information is not part of current federal efforts to facilitate electronic health information exchange and would likely present challenges. To help improve health care quality, efficiency, and patient safety, the Medicare EHR Incentive Program provides financial incentives for Medicare providers to increase the use of EHR technology to, among other things, exchange patient medical information electronically with other providers. 
In addition, ONC has funded health information exchange organizations that provide support to facilitate the electronic exchange of health information between providers. These and other ongoing federal health information exchange programs aim to increase the connections and exchanges of medical information directly between provider EHR systems so that patient medical information is available where and when it is needed. None of these existing programs include the use of electronically readable cards to store or exchange medical information. Using electronically readable cards to store and exchange beneficiary medical information would introduce an additional medium to supplement health information exchange among EHR systems, with beneficiaries serving as intermediaries in the exchange. Stakeholders noted that implementing another medium, such as a card, that stores beneficiary medical information outside of provider EHR systems could lead to inconsistencies with provider records. Stakeholders, including a health care IT vendor and a provider organization, stated that storing beneficiary medical information on beneficiary cards in addition to EHR systems could lead to problems with ensuring that medical information is synchronized and current. For example, beneficiaries who have laboratory tests performed after medical encounters would not have a means to upload the results to their cards before visiting their providers again, leading to cards that are not synchronized with provider records. Several stakeholders also stated that using electronically readable cards to store and exchange medical information would likely face similar interoperability issues encountered by federal health exchange programs and providers implementing EHR systems. Information that is electronically exchanged among providers must adhere to the same standards in order to be interpreted and used in EHRs. 
We previously found that insufficient standards for electronic health information exchange have been cited by providers and other stakeholders as a key challenge for health information exchange. For example, we found that insufficient standards for classifying and coding patient allergy information in EHRs could potentially limit providers’ ability to exchange and use such information. The use of electronically readable cards would involve exchanging medical information through an additional medium, but it would also be subject to the same interoperability issues that currently limit exchange. Despite potential challenges using electronically readable cards to store and exchange medical information, several stakeholders noted that adding patient health information to an electronically readable card may have benefits such as better health outcomes in emergency medical situations. For example, a beneficiary card containing medical information could be used by an emergency care provider to access important information that might otherwise be unknown, such as beneficiary allergy information. One potential benefit of electronically readable provider cards is that they could provide an option to authenticate providers accessing EHR systems, especially for remote online access. EHR systems that store patient medical information can be accessed from places outside the clinical setting, and there are concerns regarding the current level of identity authentication to ensure that only authorized providers are accessing the systems remotely. Although no determinations have been made regarding what specific authentication practices are needed, or what types of technology should be used for remote access, an HHS advisory committee has recommended that the Medicare EHR program implement rules regarding how providers should be authenticated when remotely accessing EHR systems. 
According to an electronically readable card industry organization, electronically readable cards could be used to authenticate providers remotely accessing EHR systems. Using electronically readable cards to convey identity and insurance information to auto-populate and retrieve information from provider IT systems could reduce errors in the reimbursement process and improve medical record keeping and health information exchange. Many providers currently capture identity and insurance information by photocopying insurance cards and manually entering beneficiary information into their IT systems, which can lead to data entry errors. In addition, providers have different practices for entering beneficiary names, such as different practices for recording names with apostrophes and hyphens, or may use beneficiary nicknames, leading to possible naming inconsistencies for a single individual. The failure to initially collect accurate beneficiary identity and insurance information when providers enter patient information into their IT systems, or retrieve information on existing beneficiaries, can compromise subsequent administrative processes. According to stakeholders, using an electronically readable card to standardize the process of collecting beneficiary identity and insurance information could help reduce errors in the reimbursement process. When beneficiaries’ identity or insurance information is inaccurate, insurers reject claims for those beneficiaries. Providers then must determine why the claims have been rejected, and reimbursements are delayed until issues with the claims are addressed and the claims are resubmitted. Once any issues are addressed, insurers reprocess resubmitted claims. 
Based on data provided by CMS, we found that up to 44 percent of the more than 70 million Medicare claims that CMS rejected between January 1, 2014, and September 29, 2014, may have been rejected because of invalid or incorrect beneficiary identity and insurance information that could be obtained from beneficiaries' Medicare cards. In addition, HHS has cited an industry study indicating that, industrywide, a significant percentage of denied health insurance claims are due to providers submitting incorrect patient information to insurers. However, CMS officials stated that using electronically readable cards may not necessarily reduce claim rejections because providers may still obtain beneficiary information in other ways, including over the telephone or from paper forms that have been filled out by beneficiaries. Stakeholders also told us that problems with collecting beneficiary information can lead to the creation of medical records that are not linked accurately to beneficiaries or records that are linked to the wrong individual, which can lead to clinical inefficiencies and potentially compromise patient safety. For example, problems collecting beneficiary information can prevent providers from retrieving existing beneficiary records from their IT systems, leading providers to create duplicate medical record files that are not matched to existing beneficiary records. Medical records that are not accurately linked to beneficiaries can compromise a provider's ability to make clinical decisions based on complete and accurate medical records, which can lead to repeat and unnecessary medical tests and services, and adverse events, such as adverse drug interactions. Furthermore, inaccurate and inconsistent beneficiary records can also limit electronic health information exchange by limiting the ability to match records among providers.
We previously found that difficulty matching beneficiaries to their health records has been a key challenge for electronic health information exchange, and this can lead to beneficiaries being matched to the wrong set of records, and to providers needing to match records manually. (VA also recently issued new paper cards to certain veterans to obtain care outside of VA facilities. See the Veterans Access, Choice and Accountability Act of 2014, Pub. L. No. 113-146, § 101(f), 128 Stat. 1754, 1760 (codified at 38 U.S.C. § 1701 note).) In addition, many providers collect and verify beneficiary identity and insurance information prior to appointments, through either telephone conversations or online portals to preregister for appointments. This practice of ensuring the accuracy of beneficiary information prior to appointments may limit the possible benefits of using electronically readable cards to convey information at the point of care. CMS would need to update its claims processing systems to use electronically readable cards to authenticate beneficiary and provider presence at the point of care, while using the cards to convey beneficiary identity and insurance information might not require CMS to make IT updates. Similarly, using electronically readable cards for authentication would require updates to CMS's current card management processes, while using the cards to convey beneficiary identity and insurance information might not. For all potential uses of electronically readable cards, Medicare providers could incur costs and face challenges updating their IT systems to read and use information from the cards. Using electronically readable cards to authenticate beneficiaries and providers would require updates to CMS's claims processing systems to verify that the cards were swiped at the point of care. CMS officials told us they have not fully studied the specific IT updates that would be needed to the claims processing system and could not provide an estimate of costs associated with implementing any updates.
However, they noted that any IT updates would necessitate additional funding and time to implement, and could involve IT challenges. Based on our research, we identified two options for how CMS could verify that the cards were swiped by beneficiaries and providers at the point of care. The first option is based on proposals from an HHS advisory organization and a smart card industry organization. When beneficiaries and providers swipe their cards, CMS's IT systems would generate and transmit unique transaction codes to providers. Providers would include the transaction codes on their claims. When processing claims, CMS would match the original transaction codes generated by CMS's IT systems with the codes on submitted claims. For this option, CMS officials told us that they would need to implement an IT system to collect and store data on the transaction codes and build electronic connections with existing claims processing systems to match the codes with submitted claims. The second option is based on the processes used in a CMS pilot program. When beneficiaries and providers swipe their cards, information about the card transaction—such as the date of the transaction and the beneficiary Medicare number and provider NPI associated with the cards—would be sent to CMS. CMS would match this information about the card transaction with information on the claims submitted by the providers. According to officials, this option would similarly involve implementing an IT system to collect and store data on the card transactions and connecting the system with existing claims processing systems to match information about the transactions with submitted claims. CMS officials told us that verifying that beneficiary and provider cards were swiped by including new content on claims—such as unique transaction codes—would be problematic. Doing so would involve changes to industrywide standards for claim submission and the way in which CMS's IT systems receive submitted claims.
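The two verification options described above reduce to two matching strategies. The sketch below illustrates both, assuming hypothetical record layouts and identifiers; a real implementation would run inside CMS's claims processing systems.

```python
# Minimal sketch of the two claim-verification options described in the text.
# Record layouts, field names, and identifiers are hypothetical assumptions.
import uuid

# Option 1: CMS issues a unique transaction code at each card swipe, the
# provider places the code on the claim, and CMS matches codes at processing.
issued_codes = set()

def record_swipe_option1() -> str:
    code = uuid.uuid4().hex  # unique transaction code returned to the provider
    issued_codes.add(code)
    return code

def claim_matches_option1(claim: dict) -> bool:
    return claim.get("transaction_code") in issued_codes

# Option 2: the swipe sends the transaction details (date, beneficiary Medicare
# number, provider NPI) to CMS, which matches them against fields that already
# appear on the submitted claim, so no new claim content is required.
swipe_log = set()

def record_swipe_option2(date: str, beneficiary_id: str, npi: str) -> None:
    swipe_log.add((date, beneficiary_id, npi))

def claim_matches_option2(claim: dict) -> bool:
    return (claim["date"], claim["beneficiary_id"], claim["npi"]) in swipe_log

code = record_swipe_option1()
print(claim_matches_option1({"transaction_code": code}))  # True
record_swipe_option2("2014-09-29", "123456789A", "1234567893")
print(claim_matches_option2(
    {"date": "2014-09-29", "beneficiary_id": "123456789A",
     "npi": "1234567893"}))  # True
```

The sketch makes the trade-off concrete: option 1 adds a new field to the claim (triggering the standards changes CMS officials described), while option 2 matches only against content already on the claim.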
These industrywide standards govern the data content and format for electronic health care transactions, including claim submission. Adding new content to claims, such as a field for a transaction code, would require CMS to seek changes to existing claim standards with the standard-setting body responsible for overseeing the data content and format for electronic health care transactions. Officials told us that requesting and having such changes approved could take several years. CMS officials further noted that the IT infrastructure that CMS developed to accept electronic claim submissions was built to accept claims based on current standards and would need to be updated to accept any new content fields. However, under the second option, verifying that the cards were swiped by matching information about the card transaction—such as the date and beneficiary and provider identification information—with information on the claims submitted would not involve additional content on claims because CMS would be matching the card transactions with information currently included on claims. Officials from an organization that provides PKI services to federal agencies told us that CMS could leverage such services to use PKI for electronically readable Medicare cards. (See GAO, Information Security: Advances and Remaining Challenges to Adoption of Public Key Infrastructure Technology, GAO-01-277 (Washington, D.C.: Feb. 26, 2001).) CMS officials stated that CMS has not studied this issue and said they could not provide any cost estimates for using PKI for electronically readable Medicare cards. In contrast to using electronically readable cards for authentication, using the cards to convey beneficiary identity and insurance information may not require updates to CMS's IT systems. Using the cards to convey such information primarily involves transferring information from the card to provider IT systems, as opposed to interacting with CMS IT systems.
However, CMS officials said if any additional identity or insurance information is put on an electronically readable card that requires changes to the content or formatting of claims, CMS would have to update its claims processing systems. CMS would need to update and obtain additional resources for its current card management processes to use electronically readable cards to achieve a higher level of authentication for beneficiaries and providers. Card management processes involve procedures for enrollment, issuing cards, replacing cards, updating information on cards, deactivating cards, and addressing cardholder issues, among other processes, as well as developing standards and procedures for card use. Medicare currently does not issue cards to providers, and therefore CMS would need to implement a new program to issue and manage provider cards and to develop standards and procedures for card use. In addition, we found that new standards and procedures for card use would likely need to be developed to implement electronically readable cards to authenticate beneficiaries and providers. Proponents have suggested that NIST standards for electronically readable cards could be used to implement such cards for Medicare. However, these standards generally apply to the issuance and use of smart cards by federal employees and contractors for accessing computers and physical locations, and we found that the application of such standards could present logistical challenges for Medicare and could entail changes to current Medicare card management practices. For example, NIST standards involve procedures for verifying the identities of individuals before they are issued cards and, among other requirements, require potential cardholders to appear in person before being issued a card. Medicare does not require beneficiaries to appear in person to be enrolled in the program and issued cards.
Doing so could present barriers to beneficiary enrollment and could present logistical challenges, given that Medicare covered approximately 54 million beneficiaries in 2014 and CMS does not have an infrastructure in place to meet beneficiaries in person. Additionally, to use the cards with a PKI system, CMS would need to implement processes to update and reissue beneficiary cards as needed to meet security requirements. Currently, the NIST standards require cards to be reissued every 6 years to update the PKI keys on the cards. Reissuing cards on a regular basis would likely require the implementation of new card management processes and additional resources for CMS. As of now, CMS only reissues cards if they are reported as lost, stolen, or damaged, or if there is a change to beneficiary information, such as a name change. CMS would face additional card management challenges and practical concerns to use electronically readable cards in conjunction with a PIN or biometric information. According to CMS officials, implementing PINs or biometrics would come with large costs and would involve significant changes for CMS and beneficiaries. To use PINs, CMS would need to implement processes for creating, managing, and verifying them. CMS officials and other stakeholders also noted that certain Medicare beneficiaries, especially those with cognitive impairments, may not be able to remember their PINs. Officials we spoke with in France told us that they decided not to have beneficiaries use PINs with their cards after a pilot project found that some beneficiaries had difficulties remembering them. In terms of using biometrics, CMS officials and other stakeholders expressed concerns regarding beneficiaries’ willingness to provide biometric information due to privacy considerations and the logistics involved in collecting such information from beneficiaries. 
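The 6-year reissuance cycle that the NIST standards require (to refresh the PKI keys on the cards) can be sketched as a simple date check. The function name and the day-based interval are illustrative assumptions.

```python
# Illustrative sketch of the periodic card reissuance described in the text:
# under the NIST standards, cards must be reissued every 6 years to update
# their PKI keys. Function and field names are hypothetical assumptions.
from datetime import date, timedelta

REISSUE_INTERVAL = timedelta(days=6 * 365)  # approximate 6-year key lifetime

def needs_reissue(issued_on: date, today: date) -> bool:
    """Return True if the card's PKI keys are due for a refresh."""
    return today - issued_on >= REISSUE_INTERVAL

print(needs_reissue(date(2008, 1, 15), date(2015, 1, 15)))  # True
print(needs_reissue(date(2013, 1, 15), date(2015, 1, 15)))  # False
```

The check itself is trivial; the cost CMS officials describe lies in the surrounding card management processes, since CMS currently reissues cards only when they are lost, stolen, damaged, or when beneficiary information changes.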
Both France and Germany are currently issuing cards that include photographs of beneficiaries, and officials from both countries told us that they experienced difficulties collecting them. Both countries allow beneficiaries to submit their photographs by mail, and Germany allows beneficiaries to submit their photographs online. However, because the pictures are not taken in person, there are few controls in place to ensure that beneficiaries submit a representative photograph of themselves. VA includes a photograph of the veteran on its cards, which it generally obtains in person at local medical centers. CMS does not have an infrastructure like VA's to take photographs of Medicare beneficiaries. CMS would also need to implement processes for securing information on electronically readable cards to use them to store and exchange beneficiary medical information. CMS and ONC officials and other stakeholders expressed concerns about storing individually identifiable health information on the cards and told us that beneficiaries would likely be sensitive to having their medical information on the cards, so the security processes in place to protect this information would need to be rigorous. In particular, processes would be needed for accessing and writing information onto the cards to ensure that beneficiaries could control who could view stored information and to ensure that only legitimate providers are able to access information from or write information onto the cards. In contrast with using electronically readable cards for authentication or to store and exchange beneficiary medical information, we found that CMS would not necessarily need to make changes to current standards and procedures for the cards to electronically convey beneficiary identity and insurance information.
The cards would not be used in a significantly different way than they are now—to convey information that providers use to verify beneficiary eligibility and to submit claims—and accordingly, little would change other than the type of card CMS issues. Instead of a paper card, CMS would need to produce and issue an electronically readable card. Although the use of electronically readable health insurance cards in the United States has been limited, there are existing industry standards for using such cards to convey identity and insurance information. An HHS advisory organization, the Workgroup for Electronic Data Interchange (WEDI), has issued formatting and terminology standards for using electronically readable cards that could be applied to electronically readable Medicare cards. CMS officials also noted that the implementation of electronically readable cards would require beneficiary and provider education and outreach regarding the new cards and any associated changes related to how the cards are used. For example, CMS would have to disseminate information on the different functions and features of any card and information on what to do if the electronically readable functions of the card are not working. For cases where IT systems malfunctioned or IT access was an issue, CMS officials stated the agency would need to have support services in place for providers and beneficiaries, and paper back-up options. For all potential uses of electronically readable cards, Medicare providers could incur costs and face challenges updating their IT systems to read and use information from the cards. For providers to use electronically readable cards, they would need to have hardware, such as card readers, to read information from the cards.
According to stakeholders, including provider organizations; health care IT, transaction standards, billing, and management organizations; and health care IT vendors, providers would also, in general, need to update their existing IT system software to use the information on cards. For example, to use electronically readable cards to store and exchange beneficiary medical information, providers' EHR systems would need to be updated to be able to read and use the medical information on the cards. Generally, providers would have to update their existing IT systems with a type of software called middleware to interact with and use information from electronically readable cards, and such updates could involve significant challenges. According to stakeholders we spoke with, provider IT systems, including billing systems and EHRs, vary widely and often are customized to meet the needs of individual providers. While some providers have a single, integrated IT system for billing, tracking patient medical information, and other administrative applications, other providers have individual systems for each application, such as practice management, billing, and EHR systems. Because of the variety and customization of systems in place, providers may need to implement uniquely developed middleware for each software system the cards would interact with to ensure that their IT systems could read and use information from the cards. Updating provider IT systems to use electronically readable cards for beneficiary and provider authentication by including transaction codes on claims could prove particularly challenging. To do so, the cards would need to be able to interact with provider IT systems used for billing so that the systems could incorporate the transaction codes generated by the cards onto provider claim forms.
Stakeholders told us that current provider IT systems are not designed to interact with electronically readable cards to incorporate transaction codes generated by the cards onto claims. Additionally, they said that provider billing practices vary widely, which presents challenges for developing standard ways to update provider IT systems to be able to perform this function. For example, some providers have IT systems capable of directly billing CMS, while others use IT systems that electronically transmit clinical encounter information to third-party billers, who generate and submit claims to CMS. Some providers do not use IT systems, and submit paper claims or clinical encounter information to clearinghouses, which convert the claims into electronic format and submit them to CMS. If information about the card transaction is sent directly to CMS—and no transaction codes are included on claims—providers would not necessarily need to update their existing IT software. In CMS’s 2011 and 2012 electronically readable card pilot program, participating physicians and suppliers did not need to update their IT systems, as they used magnetic stripe cards and sent the information to CMS using existing credit card readers and networks. However, if CMS used smart cards with PKI for authentication rather than magnetic stripe cards and credit card readers, providers would likely need to purchase card readers and software capable of authenticating the cards. While some provider IT systems would need to be updated with middleware to be able to use beneficiary identity and insurance information conveyed by electronically readable cards, some provider systems already have this capability. One vendor noted that its IT systems are capable of using beneficiary identity and insurance information from cards that comply with WEDI electronically readable card standards to auto-populate and retrieve information from their IT systems. 
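The smart-card authentication described above is, at its core, a challenge-response exchange: the system issues a fresh random challenge, the card proves possession of a secret held on its chip by computing a response, and the verifier checks that response before trusting the transaction. The sketch below substitutes an HMAC over a shared secret for the asymmetric PKI signature a real smart-card chip would produce, so that it runs with only the Python standard library; it illustrates the flow only and is not CMS's or any standard's actual protocol.

```python
import hashlib
import hmac
import secrets

# Challenge-response sketch of point-of-care card authentication.
# A real smart card with PKI would sign the challenge with a private key
# that never leaves the chip; here an HMAC over a shared secret stands in
# for that signature so the example is self-contained.

CARD_SECRET = secrets.token_bytes(32)  # provisioned onto the card at issuance

def card_sign(challenge: bytes) -> bytes:
    """What the card's chip would compute when swiped or inserted."""
    return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    """What the claims system would do with the card's response."""
    expected = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# 1. The system issues a fresh random challenge for each encounter.
challenge = secrets.token_bytes(16)
# 2. The card computes a response proving it holds the secret.
response = card_sign(challenge)
# 3. The verifier accepts the transaction code only if the response checks out.
assert verifier_check(challenge, response)
# A captured response cannot be replayed against a different challenge.
assert not verifier_check(secrets.token_bytes(16), response)
```

The per-encounter random challenge is what makes the scheme stronger than a magnetic stripe: stripe data is static and can be copied, whereas a valid response here is useless for any other transaction.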
In addition, an insurer that issues electronically readable cards that comply with WEDI standards told us that there are providers that currently use its cards to auto-populate information into their IT systems, though this insurer could not estimate the percentage of providers who do so. In addition to updating IT systems, CMS officials and stakeholders also expressed concerns regarding how using electronically readable cards to authenticate providers at the point of care would be incorporated into provider workflows. During the pilot program conducted by CMS, participating providers told CMS that using the cards was an administrative burden that required changes to their workflows. Stakeholders noted that it might not be practical for providers to swipe the cards during the course of providing care and that the cards might instead be used by administrative or billing staff. However, having administrative staff use provider cards could create complexity in terms of card use and limits the ability of the card to be used to authenticate provider presence at the point of care. For some providers, administrative and billing processes might not take place at the same location where care is provided. Stakeholders also expressed logistical concerns regarding when and how beneficiary and provider cards would be swiped at the point of care. At larger provider facilities, such as hospitals, having beneficiaries and providers swipe their cards at the point of care might require providers to deploy many card readers within a single facility. Additionally, stakeholders expressed concerns regarding how the cards would be used when multiple providers provide care during a single medical encounter. For example, a beneficiary experiencing a medical emergency may be provided care by an ambulance company, hospital, and attending physicians. 
With each provider submitting its own claim for reimbursement, it raises questions regarding how a single swipe of the beneficiary's card would be matched to each of the claims submitted by the providers. Further, stakeholders raised questions regarding how the cards would be used by providers that may have little contact with beneficiaries, such as laboratories. Many stakeholders also cited potential challenges encouraging providers to incur costs to purchase hardware and update their IT systems to use the cards, especially given existing CMS IT requirements. Officials at CMS and ONC, along with stakeholders, noted that Medicare providers are already investing resources, and facing IT challenges, to meet Medicare EHR Incentive Program requirements and to update their IT systems to adopt new billing codes. Both France and Germany have experienced similar challenges with provider reluctance to incur costs to use electronically readable cards. According to officials from organizations we spoke with in those countries, financial subsidies to purchase hardware and update IT systems, and financial incentives for card use have been key to encouraging provider participation. France and Germany have each successfully implemented an electronically readable card system—specifically, a smart card system—on a national scale in their health care systems. The implementation of these systems provides lessons that could inform U.S. policymakers in deciding whether to adopt an electronically readable card for Medicare. Both countries' experiences demonstrate that implementation of an electronically readable card would likely be a long process and would require that competing stakeholder needs be discussed and addressed. Further, the experiences of France and Germany illustrate that after implementation, management of an electronically readable card system is a continuing and costly process.
France and Germany’s successful implementation of an electronically readable card system demonstrates that implementation of such a system on a national scale is possible. According to the organization that manages the smart card system in France, 50 million citizens, or about 76 percent of the population in France, used a beneficiary card and more than 300,000 health care providers used a health care provider card as part of a health care service in 2013. Approximately 90 percent of France’s health care claims were generated by swiping both a beneficiary and a health care provider smart card. In Germany, approximately 70 million citizens, or about 85 percent of the population, used a smart card provided to beneficiaries as their health insurance card in 2014, according to government officials. The experiences of both countries also demonstrate that the implementation of an electronically readable card system can be a long process. France has had a smart card system for beneficiaries and health care providers since 1998. Officials from the organization that manages the smart card system in France told us that implementation of the system had been a slow process in part because many providers lacked the IT equipment—such as computers and printers—needed to manage their health care practices and had to obtain that equipment before being able to participate in the card system. Health care providers’ resistance to voluntarily adopting and using the smart cards—despite financial incentives to do so—also contributed to the delay in implementing the smart card system fully. Fourteen years after the implementation of the smart card system in France, about 95 percent of self-employed health care providers and 18 percent of hospital-based providers in France were using health care provider cards. 
While the initial cards for beneficiaries were distributed in 2 to 3 years, according to French officials, issuance of an updated beneficiary card with a picture has been a slower process. French officials explained that the process of adding a photograph to the beneficiary card and issuing the updated cards has been ongoing since 2007. As of September 2014, 35 percent of beneficiary cards in France being used for health care had been issued 15 years ago, according to the organization that represents health care insurers. In 1995, Germany implemented a memory-only smart card that included information such as name, address, and insurance status. The card was used to electronically transfer this information to the health care providers’ IT systems. According to a report by the German auditing agency, in 2003 Germany required that a new smart card containing a microprocessor chip and with the capability to add new functionality be implemented by January 2006. This report also indicated that due to technical problems and stakeholder disagreements, the initial roll out of the new cards did not occur until October 2011. By the end of 2013, almost all of the population insured through the statutory health insurance system had been issued the new cards and providers were equipped with the readers that could access information from both the new smart card and the previous memory-only smart card. However, German officials told us that the full transition to the new cards will not be complete until early 2015, when beneficiaries will no longer be able to use the memory-only cards. Currently, the new smart cards are being used in the same way as the memory-only card. According to officials in Germany, new applications will be added to the new card incrementally, with the ability to update insurance information on the card being the first application and then an expansion to storing emergency care information, such as allergies and any drug interactions. 
Officials explained that full implementation of the new smart card—with all of the applications added—will not be completed until 2018, more than 10 years later than mandated. The initial implementation of any new card system in Medicare could also be a lengthy process because CMS would need time to address the challenges that we described earlier. Similarly, experiences in both France and Germany have illustrated that updating a card system has the potential to be as lengthy and resource-intensive a process as the initial implementation. French officials noted that being clear about how an electronically readable card will be used and developing a system that can be easily updated are key lessons that the Medicare program should consider. Officials in France and Germany indicated that their governments implemented smart card systems to simplify and improve administrative processes in their health care systems. Specifically, both countries implemented a smart card as a means to move from a paper-based to an electronic billing and reimbursement process. In addition to administrative improvements, officials from both countries noted that the shift from paper to electronic billing and reimbursement has resulted in financial savings. For example, government officials in France told us that the estimated cost to process a paper claim is $2.40 per claim, while processing an electronic claim costs $0.20. Officials from France's federal auditing agency claim that the cards have been largely successful, with 93 percent of claims being submitted electronically in 2014, resulting in an estimated savings of approximately $1.5 billion per year.
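As a quick consistency check on the French figures, the per-claim processing costs cited by government officials and the auditors' reported annual savings can be combined to back out an implied electronic claim volume. The implied volume is derived here for illustration; it is not a number the officials stated.

```python
# Back-of-the-envelope check on the French claims-cost figures cited above.
# Inputs are the reported per-claim costs and reported annual savings; the
# implied claim volume is a derived figure, not one the officials stated.

paper_cost_per_claim = 2.40       # USD, reported
electronic_cost_per_claim = 0.20  # USD, reported
reported_annual_savings = 1.5e9   # USD, reported estimate

savings_per_claim = paper_cost_per_claim - electronic_cost_per_claim
implied_claims = reported_annual_savings / savings_per_claim

print(f"savings per claim: ${savings_per_claim:.2f}")                 # $2.20
print(f"implied electronic claims per year: {implied_claims:,.0f}")   # ~682 million
```

A volume in the hundreds of millions of claims per year is plausible for a national system, so the reported savings and per-claim costs are at least internally consistent.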
However, according to officials from the organization that manages the beneficiary card system in France, it is difficult to isolate how much of that savings can be attributed specifically to the use of the smart cards, given that electronic billing and reimbursement could have been achieved by using technology other than an electronically readable card. German officials also reported, but did not quantify, savings associated with using smart cards to move to an electronic billing and reimbursement process. The cost savings that France and Germany report from moving to electronic billing would not necessarily be achievable for Medicare, which has a long-standing electronic claims processing system that enables both Medicare and health care providers to process claims faster and at a lower cost. Some health care providers have been submitting claims electronically since 1981, and by law Medicare has been prohibited from paying claims not submitted electronically since October 16, 2003, with limited exceptions. French and German government officials told us that it is important to ensure that the competing needs of stakeholders are discussed and addressed. Officials also stated that in their experience this part of the process generally required a significant time investment and should occur prior to the decision to implement any electronically readable card. For instance, officials from provider organizations in Germany told us that health care providers took issue with what they viewed as a continued emphasis on enhancing the administrative, rather than the clinical, features of the card. Officials explained that providers and hospitals had objected to the decision to add the ability to electronically update identity and insurance information before adding the ability to store emergency care information on the new smart card. 
They stated that the new smart card is currently being used the same way as the memory-only smart card—to electronically transfer a beneficiary’s identity and insurance information to the health care providers’ IT system—which provides no new benefits for providers relative to the memory-only smart cards. In both France and Germany, the government established independent organizations to address stakeholders’ needs. For example, officials from the independent organization in Germany told us that it has seven stakeholder groups, including the National Association of Statutory Health Insurance Funds as the sole representative of all health insurance funds and six umbrella organizations representing health care providers. Officials explained that each group is assigned a different share of interest in the organization, with the stakeholder group that funds the organization holding a 50 percent share. An organization like those established in France and Germany may not be necessary to solicit input from stakeholders in the United States. However, successful implementation of an electronically readable card system for the Medicare program would depend on stakeholder participation. An official from a health care billing and management organization told us that before implementation of any electronically readable cards for Medicare, CMS should obtain input from beneficiary and consumer advocacy groups on how the cards should be implemented. This official also told us that CMS would need to educate beneficiary and provider groups on the benefits of electronically readable cards and how to use them because beneficiary and provider buy-in would help CMS in implementing the cards. CMS officials confirmed that implementing an electronically readable card could result in a number of policy challenges that may cause resistance from provider and beneficiary advocacy organizations. 
CMS officials acknowledged that the agency would have to work with multiple stakeholders who have competing priorities if they were to move forward with the development and implementation of an electronically readable card. Furthermore, implementing an electronically readable card system for Medicare would be done in a different health IT landscape than France’s and Germany’s. Officials in both France and Germany told us that they began implementing their systems when health care providers’ use of IT systems was limited. However, in the United States, health IT is more advanced than it was in France and Germany when they first implemented the electronically readable cards. Nevertheless, according to officials from a U.S. health insurer, the disparate IT systems of health care providers in the United States will need to be modified in order to implement an electronically readable card system. French officials noted that implementation is easier when the electronically readable card system does not have to be built on top of existing hardware and software. Management of an electronically readable card system includes maintaining the technical infrastructure as well as continuously producing and issuing the cards. Officials from France and Germany reported that the process of managing an electronically readable card system is costly and needs to be taken into account when deciding whether to implement such a system. The independent organizations that are responsible for addressing stakeholders’ needs related to the card systems in France and Germany also have an ongoing role in managing these systems. In France, an additional organization manages the health care provider card system. (See table 1.) Officials in both France and Germany told us that they experienced significant costs related to managing the system beyond initial implementation costs. 
For example, in France, government officials explained that it costs about $37 million annually to maintain the infrastructure for the beneficiary card and nearly $31 million per year in IT and human resources costs for the provider card. In addition, there are annual costs to produce, issue, and deactivate the cards. In France, for instance, the cost to produce and issue beneficiary cards is approximately $2.50 per card, and production and issuance costs for provider cards range from about $8 to $12 per card, depending on the method used to mail the card. In Germany, the National Association of Statutory Health Insurance Funds finances the organization that manages the technical infrastructure for the card system, though the individual insurance funds are responsible for producing and issuing the beneficiary smart cards. Officials from this organization told us that they are paid about $2.40 per beneficiary annually for the development of the infrastructure. In 2014, there were approximately 70 million beneficiaries using the electronically readable cards in Germany, which equates to about $168 million in development costs. U.S. policymakers would need to determine the extent to which CMS or other organizations would be responsible for the implementation and management of an electronically readable system for Medicare. Some of the responsibilities that the French and German organizations address, such as certifying the software, are currently being addressed by another agency within HHS. Policymakers would also need to determine the appropriate agencies or organizations that should be involved in developing and implementing such a system. As consideration is given to whether to increase the functionality of the current Medicare beneficiary card, and whether to implement cards for providers, the planned use of the cards will guide the type of card technology that is needed.
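The German infrastructure figure follows directly from the reported per-beneficiary fee, and the calculation reproduces it exactly.

```python
# Reproducing the German card-infrastructure cost figure cited above.
per_beneficiary_fee = 2.40    # USD per beneficiary per year, reported
beneficiaries = 70_000_000    # approximate card users in Germany in 2014

annual_development_cost = per_beneficiary_fee * beneficiaries
print(f"${annual_development_cost:,.0f}")  # $168,000,000
```

Note that this $168 million covers only the infrastructure organization's development fee; the per-card production and issuance costs described above are borne separately by the individual insurance funds.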
The planned use of the cards will also prompt additional discussions regarding card management processes and standards, including whether use would be mandatory, whether PINs or biometric factors would be used in addition to the cards, whether enrollment and card issuance processes would need to be updated, and what information would be stored on the card. We found that electronically readable cards would have a limited effect on program integrity, but could aid administrative processes. Ultimately, a decision about whether to implement an electronically readable card will rest upon a determination regarding the costs and benefits of electronically readable cards compared to the current paper card or other strategies and solutions. The success of any electronically readable card system will also depend on participation from health care providers, and therefore any planned use will need to take provider costs and potential challenges into consideration. Finally, as demonstrated by the experiences in France and Germany with smart cards, implementing and maintaining an electronically readable Medicare card system would likely require considerable time and effort. We provided a draft of this report to HHS for comment. HHS provided technical comments, which we incorporated as appropriate. In addition, we obtained comments from officials from the Smart Card Alliance, an organization that represents the smart card industry. The officials emphasized the greater capability of smart cards to authenticate transactions and secure information on the cards than other electronically readable card options. Smart Card Alliance officials commented that the way in which CMS has indicated that it would implement electronically readable cards in Medicare would diminish the cards’ potential to limit fraud. 
Further, the officials commented that we underestimated the potential of electronically readable cards to further CMS’s program integrity efforts, particularly CMS’s ability to identify potential fraud through postpayment claims analysis. The officials said that CMS could have greater assurance in the legitimacy of claims associated with card use and that the agency could better focus its analysis on claims in which cards were not used. Finally, the officials commented that possible challenges applying NIST standards for using electronically readable cards in Medicare should not preclude card implementation because standards that better align with the needs of the program could be developed. We believe that our report accurately characterizes the potential effects of electronically readable cards on Medicare program integrity efforts, though we modified several statements to improve clarity. We also incorporated the Alliance’s technical comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, the National Coordinator for Health Information Technology, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To examine the potential benefits and limitations associated with the use of electronically readable cards in Medicare and the steps CMS and Medicare providers would need to take to implement and use electronically readable cards, we interviewed officials from the agencies and organizations listed in table 2. Several European countries, including France and Germany, use electronically readable cards for health care purposes, such as transferring identity and insurance information electronically from the card to a health care provider's IT system. France and Germany have long-standing experience with the use of such cards. As part of our research on the potential use of electronically readable cards in Medicare, we visited France and Germany to learn about how they developed and used the cards. This appendix provides information on each country's health care system, and how electronically readable cards are used within that system. Health care coverage in France has been universal since 2000. All residents may receive publicly financed health care through noncompetitive health insurance funds (commonly referred to as statutory health insurance funds)—six entities whose membership is based on the occupation of the individual. Specifically, eligibility to receive statutory health insurance is granted either through employment (to salaried or self-employed working persons and their families) or as a benefit to persons (and their families) who have lost their jobs, to students, and to retired persons. The state covers the health insurance costs of residents not eligible for statutory health insurance, such as unemployed persons. The French system of health insurance is composed of two tiers. The first tier provides basic coverage through the statutory health insurance funds, which cover about 75 percent of household medical expenses.
The statutory health insurance coverage includes hospital care and treatment in public or private rehabilitation; outpatient care provided by general practitioners, specialists, dentists, and midwives; and prescription drugs. The second tier consists of complementary and supplementary voluntary health insurance coverage provided by mutual (not-for-profit) or private insurers that pay for services not covered by statutory health insurance. France's health care system uses two electronically readable cards—a beneficiary card and a health care provider card—as part of its billing and reimbursement processes; both are smart cards. Generally, beneficiaries make payment to the health care provider when services are delivered, and the health insurance funds reimburse the beneficiary. In certain circumstances, such as when services are provided by pharmacists and radiologists, third-party payment or reimbursement directly to the health care provider is used. When services are provided, the beneficiary and the health care provider both insert their cards into a two-card reader at the point of service. The software enables the health care provider to enter medical consultation information into the provider's IT system. That information is used to generate an electronic health claim form, which is sent to the statutory health insurance fund and the supplementary voluntary health insurance fund for payment to either the beneficiary or the health care provider. (See fig. 2.) Health insurance has been mandatory for all citizens and permanent residents of Germany since 2009. There are two primary sources of health insurance in Germany—the publicly financed health insurance (commonly referred to as the statutory health insurance system) and the private health insurance system. Under the statutory health insurance system, which covered about 86 percent of the population in 2013, health insurance is generally provided by competing, not-for-profit, nongovernmental health insurance funds (called "sickness funds").
As of January 2013, there were 134 sickness funds operating under the statutory health insurance system. All employed citizens earning less than $4,874 per month ($70,489 per year) as of 2013 are covered by the statutory health insurance system, and they and their dependents are covered without charge. Individuals whose gross wages exceed the threshold, civil servants, and those who are self-employed can choose to participate in statutory health insurance or purchase private health insurance, which covered about 11 percent of the population in 2013. Statutory health insurance coverage includes preventive services, inpatient and outpatient hospital care, physician services, prescription drugs, and sick leave compensation. Private health insurance covers minor benefits not covered by statutory health insurance, access to better amenities, and some copayments (e.g., for dental care). Germany first introduced a beneficiary, memory-only health insurance smart card in 1995. German citizens who were members of a public, statutory health insurance fund were issued the memory-only card, which contained beneficiary insurance information. This card was used to electronically transfer the information stored on the card to health care providers' IT systems. More recently, Germany initiated a project to modernize its health care system with the introduction of a secure network infrastructure. Part of this project included updating the beneficiary smart card with a card that has the capability to store and process information. In 2011, Germany began issuing the updated smart card, which contains the same information as the memory-only card and is currently being used in the same way, which is to auto-populate health providers' IT systems.
According to German officials, new applications will be added incrementally to the updated smart card, with the card eventually being used to access and update online beneficiary health insurance information and exchange beneficiary medical information. As of September 2014, officials told us that the full set of applications will not be in place until 2018. Kathleen M. King, (202) 512-7114 or [email protected]. In addition to the contact named above, Lori Achman, Assistant Director; George Bogart; Michael Erhardt; Deitra Lee; Elizabeth T. Morrison; Vikki Porter; Maria Stattel; and Kate Tussey made key contributions to this report.
Proposals have been put forward to replace the current paper Medicare cards, which display beneficiaries' Social Security numbers, with electronically readable cards, and to issue electronically readable cards to providers as well. Electronically readable cards include cards with magnetic stripes and bar codes and “smart” cards that can process data. Proponents of such cards suggest that their use would bring a number of benefits to the program and Medicare providers, including reducing fraud through the authentication of beneficiary and provider identity at the point of care, furthering electronic health information exchange, and improving provider record keeping and reimbursement processes. GAO was asked to review the ways in which electronically readable cards could be used for Medicare. This report (1) evaluates the different functions and features of electronically readable cards, (2) examines the potential benefits and limitations associated with the use of electronically readable cards in Medicare, (3) examines the steps CMS and Medicare providers would need to take to implement and use electronically readable cards, and (4) describes the lessons learned from the implementation and use of electronically readable cards in other countries. To do this, GAO reviewed documents, interviewed stakeholders, and conducted visits to two countries with electronically readable card systems. The Centers for Medicare & Medicaid Services (CMS)—the agency that administers Medicare—could use electronically readable cards in Medicare for a number of different purposes. Three key uses include authenticating beneficiary and provider presence at the point of care, electronically exchanging beneficiary medical information, and electronically conveying beneficiary identity and insurance information to providers. The type of electronically readable card that would be most appropriate depends on how the cards would be used. 
Smart cards could provide substantially more rigorous authentication than cards with magnetic stripes or bar codes, and provide greater security and storage capacity for exchanging medical information. All electronically readable cards could be used to convey beneficiary identity and insurance information since they all have adequate storage capacity to contain such information. Using electronically readable cards to authenticate beneficiary and provider presence at the point of care could curtail certain types of Medicare fraud, but would have limited effect because CMS officials stated that Medicare would continue to pay claims regardless of whether a card was used, given that there are legitimate reasons why a card may not be present. CMS officials and stakeholders told us that claims should still be paid even when cards are not used because they would not want to limit beneficiaries' access to care. Using electronically readable cards to exchange medical information is not part of current federal efforts to facilitate health information exchange and, if used to supplement current efforts, it would likely involve challenges with interoperability and ensuring consistency with provider records. Using electronically readable cards to convey identity and insurance information to auto-populate and retrieve information from provider information technology (IT) systems could reduce reimbursement errors and improve medical record keeping. To use electronically readable cards to authenticate beneficiaries and providers, CMS would need to update its claims processing systems to verify that the cards were swiped at the point of care. CMS would also need to update its current card management processes, including issuing provider cards and developing standards and procedures for card use. Conversely, using the cards to convey beneficiary identity and insurance information might not require updates to CMS's IT systems or card management practices.
For all potential uses, Medicare providers could incur costs and face challenges updating their IT systems to use the cards. The experiences of France and Germany demonstrate that an electronically readable card system can be implemented on a national scale, though implementation took years in both countries. It is unclear if the cost savings reported by both countries would be achievable for Medicare since the savings resulted from using the cards to implement electronic billing, which Medicare already uses. Both countries have processes in place to manage competing stakeholder needs and oversee the technical infrastructure needed for the cards. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Since the 1970s, geostationary satellites have been used by the United States to provide meteorological data for weather observation, research, and forecasting. NOAA’s National Environmental Satellite, Data, and Information Service is responsible for managing the civilian operational geostationary satellite system, called GOES. Geostationary satellites can maintain a constant view of the earth from a high orbit of about 22,300 miles in space. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 1). These satellites provide timely environmental data about the earth’s atmosphere, surface, cloud cover, and the space environment to meteorologists and their audiences. They also observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track their movement and intensity to reduce or avoid major losses of property and life. The ability of the satellites to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years (see table 1). NOAA’s policy is to have two operational satellites and one backup satellite in orbit at all times. Three viable GOES satellites—GOES-13, GOES-14, and GOES-15—are currently in orbit. Both GOES-13 and GOES-15 are operational satellites, with GOES-13 covering the eastern United States (GOES-East in Figure 1) and GOES-15 covering the western United States (GOES-West). GOES-14 is currently in an on-orbit storage mode and is available as a backup for the other two satellites should they experience any degradation in service. The GOES-R series is the next generation of satellites that NOAA is planning. Each of the operational geostationary satellites continuously transmits raw environmental data to NOAA ground stations. 
The data are processed at these ground stations and transmitted back to the satellite for broadcast to primary weather services and the global research community in the United States and abroad. Raw and processed data are also distributed to users via ground stations through other communication channels, such as dedicated private communication lines and the Internet. Figure 2 depicts a generic data relay pattern from a geostationary satellite to the ground stations and commercial terminals. NOAA established the GOES-R program to develop and launch the next series of geostationary satellites and to ensure the continuity of geostationary satellite observations. The GOES-R satellite series is designed to improve upon the technology of the prior satellite series in terms of system and instrument improvements. NOAA expects that the GOES-R series will significantly increase the clarity and precision of the observed environmental data. In addition, the data generated by the satellites are to be both developed and transmitted more quickly. Since its inception, the GOES-R program has undergone several changes in cost and scope. As originally envisioned, GOES-R was to encompass four satellites hosting a variety of advanced technology instruments and providing 81 environmental products. The first two satellites in the series (called GOES-R and GOES-S) were expected to launch in September 2012 and April 2014. However, in September 2006, NOAA decided to reduce the scope and technical complexity of the GOES-R program because of expectations that total costs, which were originally estimated to be $6.2 billion, could reach $11.4 billion. Specifically, NOAA reduced the minimum number of satellites from four to two, cancelled plans for developing an advanced instrument (which reduced the number of planned satellite products from 81 to 68), and divided another instrument into two separate acquisitions. 
The agency estimated that the revised program would cost $7 billion and kept the planned launch dates unchanged. Subsequently, NOAA made several other important decisions about the cost and scope of the GOES-R program. In 2007, NOAA established a new program cost estimate of $7.67 billion and moved the launch dates for the first two satellites to December 2014 and April 2016. Further, to mitigate the risk that costs would rise, program officials decided to remove selected program requirements from the baseline program and treat them as contract options that could be exercised if funds allowed. These requirements included the number of products to be distributed, the time to deliver the remaining products (product latency), and how often these products would be updated with new satellite data (refresh rate). For example, program officials eliminated the requirement to develop and distribute 34 of the 68 envisioned products, including low cloud and fog, sulfur dioxide detection, and cloud liquid water. Program officials included the restoration of the requirements for the products, latency times, and refresh rates as options in the ground system contract that could be acquired at a later time. Program officials later reduced the number of products that could be restored as a contract option (called option 2) from 34 to 31 because they determined that two products were no longer feasible and two others could be combined into a single product. In late 2009, NOAA changed the launch dates for the first two satellites to October 2015 and February 2017. More recently, NOAA restored two satellites to the program’s baseline, making GOES-R a four-satellite program once again. In February 2011, as part of its fiscal year 2012 budget request, NOAA requested funding to begin development for two additional satellites in the GOES-R series—GOES-T and GOES-U. 
The program estimated that the development for all four satellites in the GOES-R series—GOES-R, GOES-S, GOES-T, and GOES-U—would cost $10.86 billion through 2036, an increase of $3.19 billion over its 2007 life cycle cost estimate of $7.67 billion for the two-satellite program. In August 2013, the program announced that it would delay the launch of the first two satellites in the program, due in part to the effects of sequestration. Specifically, the launch of the GOES-R satellite was delayed from October 2015 to the quarter ending March 2016, and the expected GOES-S satellite launch date was moved from February 2017 to the quarter ending June 2017. See table 2 for an overview of key changes to the GOES-R program. While NOAA is responsible for GOES-R program funding and overall mission success, it implemented an integrated program management structure with NASA for the GOES-R program since it relies on NASA’s acquisition experience and technical expertise. The NOAA-NASA Program Management Council is the oversight body for the GOES-R program, and is co-chaired by the NOAA Deputy Undersecretary for Operations and the NASA Associate Administrator. NOAA also located the program office at NASA’s Goddard Space Flight Center. The GOES-R program is divided into flight and ground projects that have separate areas of responsibility and oversee different sets of contracts. The flight project, which is led by NASA, includes instruments, spacecraft, launch services, satellite integration, and on-orbit satellite initialization. The ground project, which is led by NOAA, is made up of three main components: the core ground system, an infrastructure of antennas, and a product access subsystem. In turn, the core ground system comprises four functional modules supporting operations, product generation, product distribution, and configuration control. 
Figure 3 depicts the integrated program management structure and the organization of the flight and ground projects within that structure, while table 3 summarizes the GOES-R instruments and their planned capabilities and table 4 describes key components of the ground project. In recent years, we issued a series of reports aimed at addressing weaknesses in the GOES-R program. Key areas of focus included (1) cost, (2) technical challenges and changes in requirements, and (3) contingency plans. Addressing cost risks: In June 2012, we reported that the GOES-R program might not be able to ensure that it had adequate resources to cover unexpected problems in remaining development. We recommended the program strengthen its process for planning and reporting on reserves. More recently, in September 2013, we reported on weaknesses in the process for reporting reserves to management and recommended the agency take action to brief senior executives on a regular basis regarding the status of reserves. The agency agreed with our recommendations and took steps to address them by identifying needed reserve levels and providing a more detailed breakdown of reserve percentage calculations. However, NOAA is not yet identifying the reserves associated with each satellite in the series. Technical issues and changes in requirements: We previously reported on issues related to GOES technical challenges and requirements. In 2012, we reported that key instruments were experiencing technical challenges and required additional redesign efforts. For example, emissions for the Geostationary Lightning Mapper instrument were outside the specified range. Also, the ground project was experiencing ongoing technical problems—for example, with the definition of ground system software requirements and the integration of flight instruments. As a result, revisions were made to the Core Ground System’s baseline development plan and schedule. 
More recently, in September 2013, we reported that NOAA made changes to several of the GOES-R requirements—including decreasing the accuracy requirement for the hurricane intensity product and decreasing the timeliness of the lightning detection product—and that end users were concerned by many of these changes. We recommended that the program improve communications with users on changes in GOES-R requirements by assessing impacts, seeking input from users, and disseminating information on changes. NOAA agreed with this recommendation and took steps to explore further avenues of user communication such as customer forums and interagency working groups. Contingency planning: In February 2013, due to the importance of environmental satellite data and the potential for a gap in this data, we added mitigating weather satellite gaps to our biennial High-Risk list. GAO-12-576. In that report, we noted that NOAA had established a contingency plan for a potential gap in the GOES program, but it needed to demonstrate its progress in coordinating with the user community to determine their most critical requirements, conducting training and simulations for contingency operations scenarios, evaluating the status of viable foreign satellites, and working with the user community to account for differences in product coverage under contingency operations scenarios. We also stated that NOAA should update its contingency plan to provide more details on its contingency scenarios, associated time frames, and any preventative actions it is taking to minimize the possibility of a gap. 
More recently, in September 2013, we reported that, while NOAA had established contingency plans for the loss of the GOES satellites, these plans still did not address user concerns over potential reductions in capability, and did not identify alternative solutions and timelines for preventing a delay in the GOES-R launch date. We recommended the agency revise the satellite and ground system contingency plans to address weaknesses, including providing more information on the potential impact of a satellite failure and coordinating with key external stakeholders on contingency strategies. The agency agreed with these recommendations and took steps to address them by identifying and refining program contingency plans. NASA and NOAA are following NASA’s standard space system life cycle on the GOES-R program. This life cycle comprises distinct phases: concept and technology development; preliminary design and technology completion; final design and fabrication; system assembly, integration and testing, and launch; and operations and sustainment. Key program reviews are to occur throughout each of the phases, including preliminary design review, critical design review, and system integration review. NOAA and NASA jointly conduct key reviews on the flight and ground segments individually as well as for the program as a whole, and then make decisions on whether to proceed to the next phase. Figure 4 provides an overview of the life cycle phases, key program reviews, and associated decision milestones. In addition, the key reviews are described in table 5. The GOES-R program has completed important steps in developing its first satellite. Specifically, the program completed its critical design review in November 2012, its Mission Operations Review in June 2014, and its System Integration Review in July 2014. 
Based on the results of the System Integration Review, in September 2014, NOAA and NASA decided to move the program to the next phase, the system assembly, integration and test, and launch and checkout phase. To prepare for the recent reviews and milestones, the program completed numerous important steps on both the flight and ground projects. Key accomplishments include: completing testing on individual components including the six flight instruments, the spacecraft core and system modules, and ground system components; releasing key ground system software components on enterprise infrastructure and mission management in December 2013 and April 2014, respectively, and completing the dry run of another ground system release that includes all planned products for the GOES-R satellite; replacing and successfully demonstrating a new engineering analysis tool, which will perform trending and offline analysis; completing installation and testing of antenna dishes at NOAA’s satellite operations facility, continuing installation and testing at NOAA’s primary satellite communications site, and beginning training in use of the antenna system; completing two key readiness reviews on the product distribution and access system; and completing connectivity tests throughout system hardware components. Moving forward, the next major program milestones are the flight operations review and the operational readiness review. In preparation for these milestones, the program plans to conduct a series of five end-to-end tests of the space and ground segments before the launch of the first satellite. NOAA’s estimated life cycle cost for the GOES-R program has held relatively steady. NOAA’s current life cycle cost estimate for the GOES-R program is $10.83 billion, which is slightly less than the $10.86 billion life cycle cost estimate from August 2013. 
The $30 million change in the life cycle cost estimate from last year is the net result of moving selected management functions outside the program and addressing program commitments impacted by funding reductions associated with sequestration in late 2013. However, program data show that individual components are costing more than expected. Federal agencies and private industry organizations often implement earned value management (EVM) as a tool for ensuring work completed is on track with expected costs and schedules. EVM is a management tool that, among other things, produces early warning signs of impending schedule slippages and cost overruns (GAO-09-3SP). Key EVM metrics include cost and schedule variances, which measure the value of work accomplished in a given period and compare it with the planned value of work scheduled for that period and with the actual cost of work accomplished. For example, an increase in cost variance means that the program spent more than expected to produce the work. An analysis of EVM data for three key components—the GOES ground system, the ABI instrument, and the GLM instrument—shows that each experienced a growing cost variance. Specifically, over the twelve-month period ending July 2014, the cost variance for the ground system increased to 8.4 percent of total cumulative budgeted cost and for GLM, the cost variance increased to 5.1 percent. For a third key component, the ABI instrument, cost variance increased slightly from 2.4 to 2.6 percent. Figures 6, 7, and 8 show monthly cost variance data for the three key components for the year ending July 2014. In reviewing EVM data for the GLM and ABI instruments, we found inconsistencies in the contractor’s monthly and cumulative reports that made it more difficult for NOAA to effectively oversee the contractor’s performance. 
Specifically, we found inconsistencies between cumulative and monthly budget totals in contractor performance reports that ranged from hundreds of thousands to millions of dollars. For instance, the cumulative amount of budget allocated to complete work for GLM between February 2013 and March 2013 increased by just under $2 million, while the stated monthly change was $3.8 million. Also, the cumulative amount of budgeted work accomplished for ABI between May 2013 and June 2013 increased by $3.2 million, while the stated monthly change was $5.4 million. Month-to-month discrepancies such as these occurred in each of 6 months for the GLM instrument and each of 9 months for the ABI instrument in the period between August 2013 and July 2014. Program officials stated that these issues were addressed in later contractor reports, and that program analysts communicate with contractors and program management regularly to resolve any found discrepancies. However, more recent monthly reports continue to show discrepancies. If the instrument’s cost data are unreliable, it is difficult for managers and program officials to make financial projections and assess reserve needs and usage. The GOES-R program is considering eliminating or deferring planned functionality on its ground system due to issues experienced during development. Specifically, it is considering deferring functionality in order to provide schedule relief in the case of further delays. Program and contracting officials recently revised the composition of the software releases that will be delivered on the ground system. In doing so, the GOES-R program identified “off-ramps,” or decision points, at which time they could remove or defer a specific function from pre-launch to post-launch if it is not ready in time for testing. As of September 2014, officials identified 50 potential decision points for deferring functionality. 
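The EVM checks discussed earlier can be made concrete with a short sketch: computing cost variance as a percentage of cumulative budgeted cost, and reconciling each month's stated change against successive cumulative totals. All dollar figures below are illustrative assumptions, not values from actual GOES-R contractor reports.

```python
# Illustrative sketch of two EVM checks: cost variance and reconciling
# monthly against cumulative totals in a contractor performance report.
# All dollar figures are hypothetical, not from actual GOES-R reports.

def cost_variance_pct(bcwp, acwp):
    """Cost variance (budgeted cost of work performed minus actual cost
    of work performed) as a percentage of cumulative budgeted cost.
    Negative values mean the work cost more than planned."""
    return 100.0 * (bcwp - acwp) / bcwp

def find_discrepancies(cumulative, monthly, tol=0.01):
    """Flag months whose stated monthly change disagrees with the change
    implied by successive cumulative totals (amounts in $ millions)."""
    issues = []
    for i in range(1, len(cumulative)):
        implied = cumulative[i] - cumulative[i - 1]
        if abs(implied - monthly[i]) > tol:
            issues.append((i, implied, monthly[i]))
    return issues

# A hypothetical 8.4 percent overrun, analogous to the ground system figure:
print(round(cost_variance_pct(500.0, 542.0), 1))  # → -8.4

# A series whose month-2 implied change (2.0) disagrees with the stated
# monthly change (3.8), similar in shape to the GLM example in the text:
cumulative = [100.0, 105.0, 107.0, 112.4]
monthly = [100.0, 5.0, 3.8, 5.4]
print(find_discrepancies(cumulative, monthly))  # → [(2, 2.0, 3.8)]
```

A reconciliation of this kind, run automatically on each monthly report, is one way a program office could surface the cumulative-versus-monthly mismatches described above before they accumulate.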
To date, the program has decided to implement five of the deferrals, including one to remove the ability to play back information from alternate ABI data sets outside the GOES ground system, and another to remove a low-level navigation capability. In addition, program officials decided against deferring 30 of the functions, leaving 16 deferrals that could be implemented in the future. The off-ramps still under consideration include a reduction in the amount of verification and validation activities that will be conducted. A key element of a successful test phase is appropriately identifying and handling any defects or anomalies that are discovered during testing. Key aspects of a sound defect management process include defect management planning, defect identification and classification, defect analysis, defect resolution, defect tracking, and defect trending and reporting. Leading industry and government organizations consider defect management and resolution to be among the primary goals of testing. These organizations have identified best practices for managing defects and anomalies that arise during system testing. Table 7 outlines the best practices of a sound defect management process. The GOES-R program has sound defect management policies in place and it, along with its contractors, is actively performing defect management activities. Specifically, the program has fully satisfied 13 of the 20 best practices, and partially satisfied the remaining 7 practices. For example, the program has defect management procedures in place as part of its overall testing program. Defect management is incorporated in the program’s mission assurance, configuration management, and verification/validation functions. In addition, for three key components we reviewed—ABI, GLM, and the ground system—defects are entered into automated systems, from which they are analyzed and resolved. 
The program also tracks, and regularly reports weekly and monthly, on defect totals and metrics to NOAA management. However, there are several areas in which defect management policies and practices are inconsistent, including in performing and recording information pertinent to individual defects, and in reporting and tracking defect information. Table 8 provides an assessment of how the GOES-R program and key contractors performed on each of the best practices, and is followed by a more detailed discussion of shortfalls. Among the shortfalls seen in table 8 are a number of cross-cutting themes: Variation among contractors in managing and reporting metrics: The GOES-R program affords its contractors and subcontractors wide latitude in making decisions on how to manage, track, and report defects, which results in variation among program components. For example, the program established a minimum set of metrics that must be reported and recorded for each software defect, but has not done so for hardware defects. No clear definition of defects: In its guidance, NOAA did not fully define the terminology of defect management. As a result, the program and contractors use terms such as defect, anomaly, nonconformance, incident reports, and trouble reports without explaining clearly how they are related to each other. Without understanding these relationships, variations from expected performance are likely to be treated differently throughout the program. For example, some issues that occur after formal integration and testing are complete are not considered as defects, which means that they are not all reported on the statistics provided to program management. In another case, an issue on instrument data algorithms was uncovered during user testing and required rework; however, it was not counted as a defect. Program officials stated that this was a documentation issue and noted that they do not track documentation issues as defects. 
However, the issue affected more than documentation. It was addressed by reworking the software algorithms; thus, it should have been tracked as a software defect. No clear definition of priority/severity: In its contract requirements and other guidance documents, the GOES-R program did not establish specific guidance to its contractors on how to prioritize or establish the severity of defects. At most, documents list a severity classification as one of many defect attributes that contractors should review as necessary. A program official explained that, for hardware defects, there is no programwide policy for defect metrics, including priority or severity. As a result, the information tracked, trended, and reported to management on defect totals also varied by contractor, and thus often by instrument or component. Unrealistic testing schedule: Effective defect management requires a realistic schedule in that it takes time to be able to fully identify, analyze, prioritize, resolve, and track defects. However, in May 2014, an internal NOAA report stated that the current program testing schedule is “unrealistically aggressive.” The GOES-R program has chosen to compress its remaining testing schedule, which increases the risk that there will not be sufficient time to address defects before moving to the next stage of testing. Effects of this type can already be seen; the program reported that some defects that were open for a long period of time have been delayed due to the need for all available resources to be applied in resolving new, current defects. Limited trend analysis: While the program and NOAA’s mission assurance team have analyzed contractor-provided trends in the volume and severity of defects over time for the ground system and spacecraft, they do not routinely analyze trend data for all components. For example, on the ABI and GLM instruments, the program and mission assurance team do not analyze trend data monthly for hardware defects. 
Instead, these groups only assess individual defect reports for this subset of defects. It is important for the program to assess defect trends over time in order to provide to management a more complete picture of testing status. Assessing trends in defect handling can result in better resource allocation to components of greatest need, and more attention to the effectiveness of resolving defects. Although the GOES program has defect management policies in place and its contractors are actively tracking, analyzing, and reporting on defects, the discrepancies between contractors’ data could cause issues in completing the remainder of the program’s integration and testing period. For example, without consistent defect metrics, it is more difficult for managers to obtain a complete picture of the status of open, closed, and high priority defects across the program. Program officials stated that the program did not contractually specify any best practices, but that it assumed that contractors with high-level professional certifications would employ all practices necessary for the development effort. Unless the program can find a way to unite these disparate approaches to provide consistency in the methods by which defects are identified, prioritized, captured, and tracked, it will be more difficult for management to analyze and understand trends in opening and closing high-priority defects or to make decisions on how to best resolve the defects. Moreover, until the program addresses shortfalls in its defect management processes, it may not have a complete picture of remaining defects and runs the risk of not having sufficient time to resolve them. While effectively and efficiently addressing and resolving defects is an important part of a sound system testing approach, the GOES-R program has not efficiently closed defects on selected components. 
As noted earlier, the program does not obtain or maintain defect trend data for the program as a whole; however, data on individual components and portions of components show that a large number of defects remain open, including several high-priority defects. Specifically, data for the GOES ground system show that 500 defects remained open as of September 2014, including 36 high-priority defects. Defect data for the spacecraft show that it is taking an increasing amount of time to close hardware-related defects, but that the program is making progress in closing software defects. Specifically, as of April 2014, 42 software and 332 hardware defects were unresolved. Defect totals on GOES instruments declined as the program approached the deadline for them to be completed in order to be integrated with the spacecraft. Specifically, for the GLM instrument, the total number of both hardware and software defects declined. Hardware defects declined from 117 in May 2014 to 13 in September 2014, and there were no remaining software defects by January 2014. For the ABI instrument, the number of newly opened hardware and software defects each month has declined over time, with only one unresolved defect remaining in July 2014. Table 9 depicts summary information on defects for selected components at different points in time. In addition, appendix II provides more information on defect trends for these components. Program officials noted that in some cases, due to time concerns, lower-priority defects which have been determined to not have a major effect on performance are not closed until later in the testing process. Also, program officials have stated that they are having difficulty in closing defect-related incident reports due to insufficient manpower. Until the program reduces the number of longstanding unresolved defects, it faces an increased risk of further delays to the GOES-R launch date should an open defect affect future performance. 
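The kind of aging analysis this discussion argues for—flagging defects that have remained open past a threshold as of a reporting date—can be sketched in a few lines. The defect records and the roughly six-month threshold below are illustrative assumptions, not program data.

```python
# Illustrative sketch of a defect-aging check: flag defects still open
# past a threshold number of days as of a reporting date. The records
# and the 180-day threshold are hypothetical, not GOES-R program data.
from datetime import date

# (defect id, priority, date opened, date closed or None if still open)
defects = [
    ("D-101", "high", date(2014, 2, 1), None),
    ("D-102", "low", date(2014, 6, 15), date(2014, 7, 1)),
    ("D-103", "high", date(2014, 8, 20), None),
]

def long_open(records, as_of, max_days=180):
    """IDs of defects still open for more than max_days as of a date."""
    return [d_id for (d_id, _prio, opened, closed) in records
            if closed is None and (as_of - opened).days > max_days]

# Only D-101 has been open longer than ~6 months by the end of Sep 2014.
print(long_open(defects, as_of=date(2014, 9, 30)))  # → ['D-101']
```

Trending the output of a check like this month over month—rather than reviewing only individual defect reports—would give management the programwide picture of open, closed, and high-priority defects that the report finds lacking.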
GOES satellite data are considered a mission-essential function because of their criticality to weather observations and forecasts. These forecasts—such as those for severe storms, hurricanes, and tornadoes— can have a substantial impact on our nation’s people, infrastructure, and economy. Because of the importance of GOES satellite data, NOAA’s policy is to have two operational satellites and one backup satellite in orbit at all times. This policy proved useful in December 2008 and again in September 2012, when the agency experienced problems with one of its operational satellites, but was able to move its backup satellite into place until the problems had been resolved. However, NOAA is facing a period of up to 17 months when it will not have a backup satellite in orbit. Specifically, in April 2015, NOAA expects to retire one of its operational satellites (GOES-13) and to move its backup satellite (GOES-14) into operation. Thus, the agency will have only two operational satellites in orbit—and no backup satellite—until GOES-R is launched and completes an estimated 6-month post-launch test period. If GOES-R is launched in March 2016, the earliest it could be available for operational use would be September 2016. Figure 9 shows the potential gap in backup coverage, based on the launch and decommission dates of GOES satellites. During the time in which no backup satellite would be available, there is a greater risk that NOAA would need to either rely on older satellites that are beyond their expected operational lives and may not be fully functional, request foreign satellite coverage, or to operate with only a single operational satellite. Agency officials stated that the risk of a gap may be reduced, because NOAA satellites have historically remained in operation longer than their expected life. While many satellites outlive their expected lifespans, the current GOES satellites are operating with reduced functionality, and one has experienced two major outages. 
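The backup-gap arithmetic above can be checked with simple whole-month arithmetic on the dates stated in the report; the helper below is an illustrative sketch, not an official calculation.

```python
# Sketch of the backup-coverage gap arithmetic described above, using
# whole-month arithmetic on the dates stated in the report.

def months_between(y1, m1, y2, m2):
    """Whole months from (y1, m1) to (y2, m2)."""
    return (y2 - y1) * 12 + (m2 - m1)

# GOES-13 expected retirement (no backup from then on): April 2015.
# A March 2016 GOES-R launch plus an estimated 6-month post-launch test
# period puts the earliest operational date at September 2016.
launch_year, launch_month = 2016, 3
op_month = launch_month + 6
op_year = launch_year + (op_month - 1) // 12
op_month = (op_month - 1) % 12 + 1

print(months_between(2015, 4, op_year, op_month))  # → 17
```

The same helper makes the sensitivity obvious: each month of launch slip adds a month to the period without an on-orbit backup.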
Without a full complement of operational GOES satellites, the nation’s ability to maintain the continuity of data required for effective weather forecasting could be compromised. This, in turn, could put the public, property, and the economy at risk. Any delay to the GOES-R launch date would extend the time without a backup to more than 17 months. As discussed earlier in this report, further delays to the committed launch date of the first GOES-R satellite are possible due to continued technical issues encountered during testing and integration. Government and industry best practices call for the development of contingency plans to maintain an organization’s essential functions—such as GOES satellite data—in the case of an adverse event. In September 2013, we reported on weaknesses in the contingency plans for NOAA’s satellites. At that time, we compared NOAA’s plans to 17 best practices associated with three main areas: identifying failure scenarios and impacts, developing contingency plans, and validating and implementing contingency plans. We reported that while NOAA identified failure scenarios, recovery priorities, and minimum levels of acceptable performance, the satellite contingency plan contained areas that fell short of best practices, such as working with the user community to account for potential reductions in capability under contingency operations. Furthermore, the agency did not identify alternative solutions or timelines for preventing a GOES-R launch delay. In February 2014, NOAA released a new satellite contingency plan in response to these recommendations. The new plan showed improvement against many, but not all, of the best practices. Specifically, the plan improved in 6 areas and stayed the same in 4 areas. Table 10 compares our assessment of the current satellite contingency plan with our September 2013 analysis for all best practices that were not fully met. 
Program officials stated that it is not feasible to include in the contingency plan strategies to prevent delays in the launch of the first GOES-R satellite because such strategies are not static. They explained that options for preventing a delay vary greatly over time based on issues that occur during the satellite’s development. They further stated that the program’s monthly presentations to NOAA management provide a summary of current threats to the launch date and strategies to mitigate those threats. For example, NOAA is considering whether to remove or delay selected ground system functions to post-launch, and the program provides monthly updates on this issue. While actively managing the program to avoid a delay is critical, it is also important that NOAA management and the GOES-R program consider and document feasible alternatives for avoiding or limiting such a launch delay. This will allow stakeholders throughout NOAA to be aware of, respond to, and plan for the potential implementation of each alternative, not only the small number of alternatives the program is actively considering in any given month. Until NOAA addresses the remaining shortfalls in its GOES-R gap mitigation plan, the agency cannot be assured that it is exploring all alternatives or that users are able to effectively prepare to receive GOES information in the event of a failure. After spending 10 years and just over $5 billion, the GOES-R program is nearing the launch of its first satellite. However, it continues to face challenges in maintaining its schedule and controlling its costs. The program continues to experience delays in remaining major milestones, which could result in further delays to the launch date. Costs are increasing faster than expected for key program components, and contractor data are often inconsistent from month to month. 
Until the agency ensures its contractor cost data are consistent, it will be more difficult for managers and program officials to make financial projections and assess reserve needs and usage. As the GOES-R program progresses through its testing and integration phase, it is essential that the program be able to appropriately handle defects that arise during testing. While NOAA and its contractors have implemented a defect management process that is successful in many areas, there are shortfalls in how the program defines defects, monitors trends, and reports on defects and defect metrics. In particular, NOAA has not established a standard set of metric information for all defects, including the dates on which defects were identified and resolved, or an indication of a defect’s severity. Also, NOAA has not clearly defined the type of issue that constitutes a defect. Furthermore, multiple defects remain open on some program components, including several that have remained open for more than 6 months. Until the program addresses these shortfalls and reduces the number of open defects, it may not have a complete picture of remaining issues and faces an increased risk of further delays to the GOES-R launch date. NOAA could experience a gap in satellite data coverage if GOES-R is delayed further and one of the two remaining operational satellites experiences a problem. NOAA has made improvements to its satellite contingency plan, but the plan still does not sufficiently address mitigation options for a launch delay, potential impacts, or minimum performance levels. Until such information is available, it will be difficult to integrate mitigation efforts or to coordinate with users in the event of a failure. To address risks in the GOES-R program development and to help ensure that the satellite is launched on time, we are making the following four recommendations to the Secretary of Commerce. 
Specifically, we recommend that the Secretary of Commerce direct the NOAA Administrator to: investigate and address inconsistencies totaling hundreds of thousands of dollars in monthly earned value data reporting for the GLM and ABI instruments; address shortfalls in defect management identified in this report, including the lack of clear guidance on defect definitions, what defect metrics should be collected and reported, and how to establish a defect’s priority or severity; and reduce the number of unresolved defects on the GOES ground system and spacecraft. In addition, because NOAA has not fully implemented our prior recommendation to improve its satellite gap mitigation plan, we recommend that the Secretary of Commerce direct the NOAA Administrator to: add information to the GOES satellite contingency plan on steps planned or underway to mitigate potential launch delays, the potential impact of failure scenarios in the plan, and the minimum performance levels expected under such scenarios. We sought comments on a draft of our report from the Department of Commerce and NASA. We received written comments from the Deputy Secretary of Commerce transmitting NOAA’s comments. NOAA concurred with all four of our recommendations and identified steps that it plans to take to implement them. It also provided technical comments, which we have incorporated into our report, as appropriate. NOAA’s comments are reprinted in appendix III. On November 14, 2014, an audit liaison for NASA provided an e-mail stating that the agency would provide any input it might have to NOAA for inclusion in NOAA’s comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of Commerce, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. 
The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) assess progress on the GOES-R program with respect to planned schedule, cost, and functionality; (2) assess efforts to identify and address issues discovered during integration and testing; and (3) evaluate the likelihood of a gap in satellite coverage and analyze the adequacy of contingency actions in place to prevent or mitigate such a gap. To assess NOAA’s progress on the GOES-R program with respect to planned schedule, cost, and functionality, and to identify risks that could lead to further schedule delays, we analyzed data from monthly program management meetings. We evaluated progress made in completing key program components and major program reviews, and compared planned and actual completion dates for key program milestones over the last two to three years to determine the degree to which these dates have changed. To ensure that the program’s schedule data were consistent and reliable, we compared milestone data over several months and contacted agency officials to corroborate events that occurred over the course of our engagement. We analyzed earned value management (EVM) data to compare levels of cost variance over time for key program components, and calculated earned value management metrics using program cost performance reports. To ensure that the program’s cost data were reliable, we compared formulas from the GAO Cost Guide to the program’s EVM approach. We also compared EVM data across a series of monthly program cost performance reports. 
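The EVM metrics mentioned above (cost and schedule variances and the related performance indices) follow standard earned value management formulas; a minimal sketch using hypothetical figures, not actual GOES-R contract data:

```python
# Standard earned value management (EVM) formulas; the inputs below are
# hypothetical illustration values, not actual GOES-R contract data.
def evm_metrics(pv: float, ev: float, ac: float) -> dict:
    """pv: planned value, ev: earned value, ac: actual cost (same units)."""
    return {
        "cost_variance": ev - ac,       # negative => over budget
        "schedule_variance": ev - pv,   # negative => behind schedule
        "cpi": ev / ac,                 # cost performance index (<1 => overrun)
        "spi": ev / pv,                 # schedule performance index (<1 => behind)
    }

m = evm_metrics(pv=120.0, ev=110.0, ac=125.0)
print(m["cost_variance"], round(m["cpi"], 2))  # -15.0 0.88
```

Comparing these metrics across a series of monthly cost performance reports, as the methodology describes, only works if each month's underlying PV, EV, and AC figures are reported consistently; inconsistent inputs distort the variances and indices.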
In doing so, we found inconsistencies in the monthly EVM data for selected components, and reported on those inconsistencies in this report. However, the data were sufficient for our purpose of assessing overruns because these inconsistencies were small in comparison to cost variances. We assessed recent and potential changes in functionality for key program components. We also interviewed program officials regarding changes in schedule milestones, cost performance and reserve funding, and functionality. To assess efforts to identify and address issues discovered during integration and testing, we identified best practices in defect management from leading industry and government organizations. We compared NOAA policy documents and defect management artifacts produced by the GOES program, its contractors, and an independent NOAA mission assurance group to the best practices. We selected three components that are critical to the program’s mission requirements and evaluated recent test results and defects. Specifically, we identified two recent tests for each component and analyzed artifacts associated with defects identified during those tests. We compared the defects against each best practice to determine whether NOAA and its contractors fully implemented, partially implemented, or did not implement the best practices. The agency and contractor had to meet all aspects of a best practice to achieve a fully implemented score, some aspects to achieve a partially implemented score, and no aspects to achieve a not implemented score. To identify the program’s recent performance in closing and managing defects, we analyzed basic defect trend information—such as the number of defects opened and closed, and defect severity—for each of several program components. We developed charts to demonstrate defect trends. 
To ensure the data were reliable, we compared defect data from individual defect reports to agency trend charts and sought corroboration from agency and contractor officials. We also interviewed agency and contractor officials to discuss their defect management processes and practices, and to confirm information we found while analyzing individual defect reports and trend charts. To evaluate the likelihood of a gap in satellite coverage, we reviewed monthly program management presentations and other review board documentation. To analyze the adequacy of contingency actions in place to prevent or mitigate such a gap, we compared NOAA’s latest satellite contingency plan to best practice criteria from industry and government. We focused specifically on the ten areas we identified as weak in our prior report. For each of the ten areas, we rated NOAA’s contingency plan as having partially or fully implemented the best practice criteria. In addition, we interviewed agency officials regarding the likely duration of a gap in on-orbit backup coverage of GOES satellites, the potential for a gap in operational coverage, and efforts to improve the GOES contingency plan. We conducted this performance audit from January 2014 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As noted earlier, the GOES-R program does not obtain or maintain defect trend data for the program as a whole; however, data on individual components and portions of components show that a large number of defects remain open, including several high-priority defects. 
The following sections provide more details on recent data trends for the GOES-R ground system and spacecraft as well as the ABI and GLM instruments. It is important to note that the sections do not provide consistent information among components because the program does not require consistent metrics and the contractors document different data. As of the end of September 2014, the ground system had 500 open defects, an increase from 342 open defects at the end of September 2013. Also, as of the end of September 2014, 96 defects that were identified prior to January 2014 remained open, and 36 high-priority defects (those rated as critical or moderate) remained open. Program and contractor officials provided rationale and insight into the mitigating circumstances surrounding these defects. Program officials stated that the number of open defects during the integration and test phase is as expected for a ground system of a magnitude similar to GOES. Contractor officials reported that none of the 96 longstanding defects were in either of the two highest severity categories. They also noted that most of the open high-priority defects had been opened during testing events conducted over the previous 3 months, which is a reasonable length of time to close a defect. Furthermore, according to contractor officials, many of the longstanding open defects were in categories such as documentation-related defects which are considered less severe. Officials also stated that defects in these categories are often kept open for specific reasons, such as to gain cost efficiencies or to wait for an already-planned test event rather than creating a new test event. However, as of September 2013, 193 of the 342 defects were not related to documentation and this number rose to 416 of the 500 open defects in September 2014. Figure 10 shows the increase in open defects in the period from September 2013 to August 2014 for the GOES-R ground system. 
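Open-defect counts like those above can be derived from per-defect opened/closed dates; a minimal sketch using hypothetical records (the program's actual reports use contractor-specific formats):

```python
from datetime import date

# Hypothetical defect records: (id, date opened, date closed or None if open)
defects = [
    ("D1", date(2013, 6, 1), date(2013, 12, 15)),
    ("D2", date(2013, 11, 3), None),
    ("D3", date(2014, 2, 20), None),
    ("D4", date(2014, 8, 5), date(2014, 9, 10)),
]

def open_as_of(records, as_of):
    """IDs of defects opened on or before as_of and not yet closed by then."""
    return [d_id for d_id, opened, closed in records
            if opened <= as_of and (closed is None or closed > as_of)]

snapshot = date(2014, 9, 30)
open_ids = open_as_of(defects, snapshot)

# "Longstanding" mirrors the report's cutoff: still open, identified before
# January 2014
opened_by_id = {d_id: opened for d_id, opened, _ in defects}
longstanding = [d for d in open_ids if opened_by_id[d] < date(2014, 1, 1)]
print(open_ids, longstanding)  # ['D2', 'D3'] ['D2']
```

Running this kind of snapshot query month by month yields the opened/closed trend lines shown in figure 10, and highlights why consistent opened/closed/severity fields across contractors matter for program-wide trend analysis.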
As allowed by NOAA’s guidance, contractors on the spacecraft track defects differently than the ground system contractors do. Thus, it is not possible to compare opened and closed defects over time as depicted above for the ground system. Instead, spacecraft contractors track hardware and software defects independently. They also track when new defects are identified and how long they stay open. For spacecraft hardware, the defect data show that the average time it took to resolve defects increased over time. Specifically, the average age for hardware defects increased from 99 days in May 2013 to a high of 167 days in April 2014, at which point 58 percent of all open defects had been open for at least 90 days. Program officials stated that they began continuous 24-hour a day, 7-day a week testing in October 2014, which means that they should be able to make progress in closing the remaining defects. While the backlog of hardware-related defects on the spacecraft remains high, NOAA has been effective in recent months in greatly reducing the number of software-related defects on the spacecraft. The total number of open software-related defects increased and remained high through February 2014, but then declined significantly in March and April 2014. Defect data for the GLM instrument showed a decline in the total number of defects for both hardware and software components. While there were 117 unresolved hardware defects in May 2014, only 13 hardware defects remained unresolved as of September 2014. Trend data also showed a decline in the number of unresolved software defects. Specifically, the total number of software-related defects remaining open never increased above 7 over the period from March 2013 to January 2014. For half the year, there was no more than one open defect. Metrics for the ABI instrument show that all hardware defects and all but one software defect have been closed, likely because the first unit of the instrument was completed. 
The number of hardware defects occurring each month declined to near zero during the period from February 2013 to April 2014. Fewer than 10 ABI software defects were open at any point from September 2012 onward, and only 11 defects were newly opened during that time. Figure 11 shows the number of opened and closed ABI hardware defects by month, and figure 12 shows opened and closed defects for ABI software. In addition to the contact named above, individuals making contributions to this report included Colleen Phillips (assistant director), Alexander Anderegg, Christopher Businsky, Shaun Byrnes, James MacAulay, and Karl Seifert.
NOAA, with the aid of the National Aeronautics and Space Administration (NASA), is procuring the next generation of geostationary weather satellites. The GOES-R series is to replace the current series of satellites, which will likely begin to reach the end of their useful lives in 2015. This new series is considered critical to the United States' ability to maintain the continuity of satellite data required for weather forecasting through 2036. GAO was asked to evaluate GOES-R. GAO's objectives were to (1) assess progress on program schedule, cost, and functionality; (2) assess efforts to identify and address issues discovered during integration and testing; and (3) evaluate the likelihood of a gap in satellite coverage and actions to prevent or mitigate such a gap. To do so, GAO analyzed program and contractor data, earned value management data, and defect reports; compared defect management policies and contingency plans to best practices of leading organizations; and interviewed officials at NOAA and NASA. The National Oceanic and Atmospheric Administration (NOAA)'s Geostationary Operational Environmental Satellite-R (GOES-R) program has made major progress in developing its first satellite, including completing testing of satellite instruments. However, the program continues to face challenges in the areas of schedule, cost, and functionality. Specifically, the program has continued to experience delays in major milestones and cost overruns on key components. Also, in order to meet the planned launch date, the program has deferred some planned functionality until after launch, and program officials acknowledge that they may defer more. NOAA and its contractors have implemented a defect management process as part of their overall testing approach that allows them to identify, assess, track, resolve, and report on defects. However, shortfalls remain in how defects are analyzed and reported. 
For example, contractors manage, track, and report defects differently due to a lack of guidance from NOAA. Without consistency among contractors, it is difficult for management to effectively prioritize and oversee defect handling. In addition, more than 800 defects in key program components remained unresolved. Until the program makes progress in addressing these defects, it may not have a complete picture of remaining issues and faces an increased risk of further delays to the GOES-R launch date. As the GOES-R program approaches its expected launch date of March 2016, it faces a potential gap of more than a year during which an on-orbit backup satellite would not be available. This means that if an operational satellite experiences a problem, there could be a gap in GOES coverage. NOAA has improved its plan to mitigate gaps in satellite coverage. However, the revised plan does not include steps for mitigating a delayed launch, or details on potential impacts and minimum performance levels should a gap occur. Until these shortfalls are addressed, NOAA management cannot fully assess all gap mitigation strategies, which in turn could hinder the ability of meteorologists to observe and report on severe weather conditions. GAO is recommending that NOAA address shortfalls in its defect management approach, reduce the number of open high-priority defects, and add information to its satellite contingency plan. NOAA concurred with GAO's recommendations and identified steps it plans to take to implement them.
As an arm of the legislative branch, GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. Today, GAO is a multidisciplinary professional services organization, composed of about 3,250 employees, that conducts a wide range of financial and performance audits, program evaluations, management reviews, investigations, and legal services spanning a broad range of government programs and functions. GAO’s work covers everything from the challenges of securing our homeland, to the demands of an information age, to emerging national security threats, to the complexities of globalization. We are committed to transforming how the federal government does business and to helping government agencies become organizations that are more results oriented and accountable to the public. We are also committed to leading by example in all major management areas. Given GAO’s role as a key provider of information and analyses to the Congress, maintaining the right mix of technical knowledge and subject matter expertise as well as general analytical skills is vital to achieving the agency’s mission. Carrying out GAO’s mission today is a multidisciplinary staff reflecting the diversity of knowledge and competencies needed to deliver a wide array of products and services to support the Congress. Our mission staff—at least 67 percent of whom have graduate degrees—hold degrees in a variety of academic disciplines, such as accounting, law, engineering, public administration, economics, and social and physical sciences. I am extremely proud of our GAO employees and the difference that they make for the Congress and the nation. They make GAO the world-class organization that it is, and I think it is fair to say that while they account for about 80 percent of our costs, they constitute 100 percent of our real assets. 
Because of our unique role as an independent overseer of federal expenditures, fact finder, and honest broker, GAO has evolved into an agency with hybrid systems. This is particularly evident in GAO’s personnel and performance management systems. Unlike many executive branch agencies, which have either recently received or are just requesting new broad-based human capital tools and flexibilities, GAO has had certain human capital tools and flexibilities for over two decades. As a result, we have been able to some extent to operate our personnel system with a degree of independence that most agencies in the executive branch do not have. For example, we are excepted from certain provisions of Title 5, which governs the competitive service, and we are not subject to Office of Personnel Management (OPM) oversight. Until 1980, our personnel system was indistinguishable from those of executive branch agencies—that is, GAO was subject to the same laws, regulations, and policies as they were. However, with the expansion of GAO’s role in congressional oversight of federal agencies and programs, concerns grew about the potential for conflicts of interest. Could GAO conduct independent and objective reviews of executive branch agencies, such as OPM, when these agencies had the authority to review GAO’s internal personnel activities? As a result, GAO worked with the Congress to pass the GAO Personnel Act of 1980, the principal goal of which was to avoid potential conflicts by making GAO’s personnel system more independent of the executive branch. Along with this independence, the act gave GAO greater flexibility in hiring and managing its workforce. 
Among other things, it granted the Comptroller General authority to appoint, promote, and assign employees without regard to Title 5 requirements in these areas; set employees’ pay without regard to the federal government’s General Schedule (GS) pay system’s classification standards and requirements; and establish a merit pay system for appropriate officers and employees. By excepting our agency from the above requirements, the GAO Personnel Act of 1980 allowed us to pursue some significant innovations in managing our people. One key innovation was the establishment of a “broad banding,” or “pay banding,” approach for classifying and paying our Analyst and Attorney workforce in 1989. This was coupled with the adoption of a pay for performance system for this portion of our workforce. Therefore, while other agencies are only now requesting the authority to establish broad banding and pay for performance systems, GAO has had almost 15 years of experience with such systems. Although GAO’s personnel and pay systems are not similar to those of many executive branch agencies, I must emphasize that in important ways, our human capital policies and programs are very much and will continue to remain similar to those of the larger federal community. GAO’s current human capital proposal will not change our continued support for certain national goals (e.g., commitment to federal merit principles, protection from prohibited personnel practices, employee due process through a specially created entity—the Personnel Appeals Board (PAB), and application of veterans’ preference consistent with its application in the executive branch for appointments and all appropriate reductions-in- force). Furthermore, our pay system is and will continue to be consistent with the statutory principle of equal pay for equal work while making pay distinctions on the basis of an individual’s responsibilities and performance. 
In addition, we are covered and will remain covered by Title VII of the Civil Rights Act, which forbids employment discrimination. At GAO, we also emphasize opportunity and inclusiveness for a diverse workforce and have zero tolerance for discrimination of any kind. We have taken and will continue to take disciplinary action when it “will promote the efficiency of the service”—which for us includes such things as GAO’s ability to do its work and accomplish its mission. Although we are not subject to OPM oversight, we are nevertheless subject to the oversight of the Congress, including our appropriations committees—the Senate Committee on Appropriations’ Subcommittee on the Legislative Branch and the House Committee on Appropriations’ Subcommittee on Legislative—and our oversight committees—the Senate Committee on Governmental Affairs and the House Committee on Government Reform. In addition, GAO’s management actions are subject to the review of an independent five-member board, the Personnel Appeals Board, which performs functions similar to those provided by the Merit Systems Protection Board (MSPB) for federal executive branch employees’ personnel grievances. The Congress authorized the establishment of the PAB specifically for GAO in order to protect GAO’s independence as an agency. As with other federal executive branch employees, our employees have the right to appeal certain kinds of management actions, including removal, suspension for more than 14 days, reductions in pay or grade, furloughs of not more than 30 days, a prohibited personnel practice, an action involving prohibited discrimination, a prohibited political activity, a within-grade increase denial, unfair labor practices, or other labor relations issues. However, they do so to the PAB rather than the MSPB. While we currently do not have any bargaining units at GAO, our employees are free to join employee organizations, including unions. 
In addition, we engage in a range of ongoing communication and coordination efforts to empower our employees while tapping their ideas. For example, we regularly discuss a range of issues of mutual interest and concern with our democratically elected Employee Advisory Council (EAC). Chris Keisling, who is a Band III field office representative of the EAC, is testifying with me today. In addition, I consult regularly with our managing directors on issues of mutual interest and concern. In that spirit, I will consult with the managing directors and the EAC before implementing the provisions related to our human capital proposal. As we did with the flexibilities granted to us under Public Law 106-303, the GAO Personnel Flexibilities Act, we will implement the authorities granted under this provision of our proposal only after issuing draft regulations and providing all employees notice and an opportunity for comment. Specifically, for the authorities granted to us under Public Law 106-303, we posted the draft regulations on our internal Web site and sent a notice to all GAO staff advising them of the draft regulations and seeking their comments. 
Specifically, we are requesting that the Congress provide us the following additional human capital tools and flexibilities: make permanent GAO’s 3-year authority to offer voluntary early retirement and voluntary separation payments; allow the Comptroller General to adjust the rates of basic pay of GAO employees on a separate basis from the annual adjustments authorized for employees of the executive branch; permit GAO to set the pay of an employee demoted as a result of workforce restructuring or reclassification at his or her current rate, with no automatic annual increase to basic pay until his or her salary is less than the maximum rate of his or her new position; provide authority in appropriate circumstances to reimburse employees for some relocation expenses when a transfer does not meet current legal requirements for entitlement to reimbursement but still benefits GAO; provide authority to put upper-level hires with less than 3 years of federal experience in the 6-hour leave category; authorize an executive exchange program with private sector organizations working in areas of mutual concern and involving areas in which GAO has a supply-demand imbalance; and change GAO’s legal name from the “General Accounting Office” to the “Government Accountability Office.” I will go into more detail later in my testimony on the rationale for each of these proposals. In developing our proposal, we used a phased approach that involved (1) developing a straw proposal, (2) vetting the straw proposal broadly both externally and internally, and (3) making appropriate adjustments based on comments and concerns raised during the vetting process. As we have previously testified, many of the management tools and flexibilities we needed to pursue modern human capital management approaches are already available to us, and we have used them. 
We have chosen to come to the Congress for legislation only where the tools and flexibilities we have were inadequate for addressing the challenges we faced. For example, the Congress enacted Public Law 106-303 to provide us with certain narrowly tailored flexibilities we needed to reshape our workforce and establish senior-level technical positions in critical areas. These flexibilities were needed to help GAO address the past decade’s dramatic downsizing (approximately 40 percent from 1992 through 1997) combined with a significant increase in the retirement-eligible workforce that jeopardized our ability to perform our mission in the years ahead. In developing our preliminary proposal, we gathered suggestions for addressing GAO’s human capital challenges as well as challenges faced by the rest of the federal government, discussed and debated them internally, and compiled a preliminary list of proposals. We received a number of viable proposals that we separated into two groups: (1) proposals that would be more applicable government-wide and (2) proposals GAO should undertake. I had our Office of General Counsel review the proposals GAO should undertake to determine whether we needed to seek legislative authority to implement them or whether I could implement them under the Comptroller General’s existing authority. Mindful of the need to keep the Congress appropriately informed, my staff and I began our outreach to GAO’s appropriations and oversight committees on the need for additional human capital flexibilities beginning late last year. In early spring of this year, we shared with these committees a confidential draft of a preliminary draft proposal. We also advised them that we planned to conduct a broad range of outreach and consultation on the proposal with our employees and other interested parties and that we would send them our revised legislative proposal at a later date. 
We conducted an extensive outreach and consultation effort with members of the Congress, including chairmen and ranking minority members of our appropriations and oversight committees and a number of local delegation members; congressional staff; the Director of OPM; the Deputy Director for Management of the Office of Management and Budget; public sector employee associations and unions; and various “good government” organizations. Within GAO, members of the Executive Committee (EC), which includes our Chief Operating Officer, our General Counsel, our Chief Mission Support Officer, and me, engaged in an extensive and unprecedented range of outreach and consultation with GAO employees. This outreach included numerous discussions with our managing directors, who manage most of GAO’s workforce, and members of the Employee Advisory Council (EAC). The EAC is an important source of input and a key communications link between executive management and the constituent groups its members represent. Comprising employees who represent a cross-section of the agency, the EAC meets at least quarterly with me and members of our senior executive team. The EAC’s participation is an important source of front-end input and feedback on our human capital and other major management initiatives. Specifically, EAC members convey the views and concerns of the groups they represent, while remaining sensitive to the collective best interest of all GAO employees; propose solutions to concerns raised by employees; provide input to and comment on GAO policies, procedures, plans, and practices; and help to communicate management’s issues and concerns to employees. I have also used my periodic “CG chats,” closed circuit televised broadcasts to all GAO employees, as a means of explaining our proposal and responding to staff concerns and questions. Specifically, I have held two televised chats to inform GAO staff about the proposal.
One of these chats was conducted in the form of a general listening session, open to all headquarters and field office staff, featuring questions from members of the EAC and field office employees. I have also discussed the proposal with the Band IIs (GS-13-14 equivalents) in sessions held in April 2003, and with our Senior Executive Service (SES) and Senior Level members at our May off-site meeting. In addition to my CG chats, I have personally held a number of listening sessions, including a session with members of our Office of General Counsel, two sessions with our administrative support staff, and sessions with staff in several field offices. Furthermore, the Chief Operating Officer represented me in a listening session with Band I field office personnel. Finally, I have also personally received and considered a number of E-mails, notes, and verbal comments on the human capital proposal. I would like to point out to others seeking human capital flexibilities that the outreach process, while necessary, is indeed time-consuming and requires real and persistent commitment on the part of an agency’s top management team. In order for the process to work effectively, it also requires an ongoing education and dialogue process that will, at times, involve candid, yet constructive, discussion between management and employees. This is, however, both necessary and appropriate as part of the overall change management process. To facilitate the education process on the proposal, we posted materials on GAO’s internal website, including Questions and Answers developed in response to employees’ questions and concerns, for all employees to review. Unfortunately, others who have sought and are seeking additional human capital flexibilities have not employed such an extensive outreach process. Based on feedback from GAO employees, there is little or no concern relating to most of the provisions in our proposal. 
There has been significant concern expressed over GAO’s proposal to decouple GAO’s pay system from that of the executive branch. Some concerns have also been expressed regarding the pay retention provision and the proposed name change. As addressed below, we do believe, however, that these employee concerns have been reduced considerably due to the clarifications, changes, and commitments resulting from our extensive outreach and consultation effort. Since pay is important to all employees, it is not surprising, based on various forms of GAO employee feedback, that the provision that has caused the most stir within GAO has been the pay adjustment provision. Fundamentally, some of our employees would prefer to remain with the executive branch’s GS system for various types of pay increases. There are others close to retirement who are concerned with their “high three” and how the modified pay system, when fully implemented, might affect permanent base pay, which is the key component of their retirement annuity computation. Overall, there is a great desire on the part of GAO employees to know specifically how this authority would be implemented. It is important to note that, even in the best of circumstances, it is difficult to garner broad-based employee support for any major pay system changes. While it is my impression, based on employee feedback, that we have made significant strides in allaying the significant initial concerns expressed by employees regarding the pay adjustment provision, I believe that some of these concerns will remain throughout implementation. In addition, some can never be resolved because they involve philosophical differences or personal interest considerations on the part of individual GAO employees. GAO’s history with pay banding certainly is illustrative of how difficult it is for an organization to allay employee fears even in the face of obvious benefits.
While history has proven that an overwhelming majority of GAO employees have benefited from GAO’s decision to migrate our Analysts and Attorneys into pay banding and pay for performance systems, there was significant opposition by GAO employees to the decision to move into these systems. The experience of the executive branch’s pay demonstration projects involving federal science and technology laboratories shows that employee support at the beginning of the pay demonstration projects ranged from 34 percent to 63 percent. In fact, OPM reports that it takes about 5 years to gain the support of two-thirds of employees, with managers generally supporting demonstrations at a higher rate than employees. Following the pay adjustment provision, but a distant second in terms of employee concern, has been the pay retention provision, which would allow GAO employees demoted as a result of workforce restructuring or reclassification to keep their basic pay rates; however, future pay increases would be set consistent with the new positions’ pay parameters. Currently, employees subject to a reduction-in-force or reclassification can be paid at a rate that exceeds the value of their duties for an extended period. A distant third in terms of employee concern is the proposed name change from the “General Accounting Office” to the “Government Accountability Office,” which would allow the agency’s title to more accurately reflect its mission, core values, and work. My sense is that some GAO employees who have been with GAO for many years have grown comfortable with the name and may prefer to keep it. At the same time, I believe that a significant majority of our employees support the proposed name change.
Importantly, all of our external advisory groups, including the Comptroller General’s Advisory Council, consisting of distinguished individuals from the public and private sectors; the Comptroller General’s Educators Advisory Council, consisting of distinguished individuals from the academic community; and a variety of “good government” groups, strongly support the proposed name change. The members of the EC and I took our employees’ feedback seriously and gave their concerns careful consideration. Key considerations in our decision making were our institutional responsibility as leaders and stewards of GAO and the overwhelming support expressed through anonymous balloting by our senior executives, who also serve as leaders and stewards for GAO, for proceeding with all of the provisions of our human capital proposal, including the pay adjustment provision. Specifically, in a recent confidential electronic balloting of our senior executives, support for each element of our proposal ranged from over 2 to 1 to unanimous, depending on the provision. Support for the proposed pay adjustment provision was over 3 to 1, and support for the proposed pay protection provision was over 4 to 1. Given this and other considerations, we ultimately decided to proceed with the proposal but adopted a number of the suggestions made by employees in these sessions, including several relating to the proposal to decouple GAO annual pay adjustments from those applicable to many executive branch agencies. One key suggestion adopted was a minimum 2-year transition period to ensure the smooth implementation of the pay provisions, which would also allow time for developing appropriate methodologies and issuing regulations for notice and comment by all employees.
Another key suggestion adopted was the commitment to guarantee annual across-the-board purchase power protection, and to address locality pay considerations, for all employees rated as performing at a satisfactory level or above (i.e., meeting expectations or above), absent extraordinary economic circumstances or severe budgetary constraints. We have chosen to implement this guarantee through a future GAO Order rather than through legislative language because prior “pay protection” guarantees relating to pay banding made by my predecessor, Comptroller General Charles A. Bowsher, used this means effectively to document and operationalize that guarantee. I have committed to our employees that I would include this guarantee in my statement here today so that it could be included as part of the legislative record. Additional safeguards relating to our pay proposal are set forth below. The following represents additional information regarding our specific proposal. Section 2 of our proposal would make permanent the authority of GAO under sections 1 and 2 of Public Law 106-303, the GAO Personnel Flexibilities Act of 2000, to offer voluntary early retirements (commonly termed “early outs”) and voluntary separation payments (commonly termed “buyouts”) to certain GAO employees when necessary to realign GAO’s workforce in order to meet budgetary or mission needs, correct skill imbalances, or reduce high-grade positions. We believe that we have behaved responsibly in exercising the flexibilities that the Congress granted us and deserve a permanent continuation of these authorities. In addition, the two flexibilities that we would like to be made permanent are narrowly drawn and voluntary in nature, since employees have the right to decide whether they are interested in being considered for the benefits. Further, the provisions also have built-in limits: no more than 10 percent of the workforce in any one year can be given early outs, and no more than 5 percent can be given buyouts.
GAO’s transformation effort is a work in progress, and for that reason, the agency is seeking legislation to make the voluntary early retirement provision in section 1 of the law permanent. While the overall number of employees electing early retirement has been relatively small, GAO believes that careful use of voluntary early retirement has been an important tool in incrementally improving the agency’s overall human capital profile. Each separation has freed resources for other uses, enabling GAO to fill an entry-level position or to fill a position that will reduce a skill gap or address other succession concerns. Similarly, we are seeking legislation to make section 2—authorizing the payment of voluntary separation incentives—permanent. Although GAO has not yet used its buyout authority and has no plans to do so in the foreseeable future, we are seeking to retain this flexibility. The continuation of this provision maximizes the options available to the agency to deal with future circumstances, which cannot be reasonably anticipated at this time. Importantly, this provision seems fully appropriate since the Homeland Security Act of 2002 provides most federal agencies with permanent early out and buyout authority. Public Law 106-303 required that GAO perform an assessment of the exercise of the authorities provided under that law, which included the authority for the Comptroller General to provide voluntary early retirement and voluntary separation incentive payments. With your permission, I would like to submit the assessment entitled Assessment of Public Law 106-303: The Role of Personnel Flexibilities in Strengthening GAO’s Human Capital, issued on June 27, 2003, for the record. I will now highlight for you our observations from that assessment on voluntary early retirement and buyouts. 
Voluntary Early Retirement

Public Law 106-303 also allows the Comptroller General to offer voluntary early retirement to up to 10 percent of the workforce when necessary or appropriate to realign the workforce to address budgetary or mission constraints; correct skill imbalances; or reduce high-grade, supervisory, or managerial positions. This flexibility represents a proactive use of early retirement to shape the workforce to prevent or ameliorate future problems. GAO Order 2931.1, Voluntary Early Retirement, containing the agency’s final regulations, was issued in April 2001. Under the regulations, each time the Comptroller General approves a voluntary early retirement opportunity, he establishes the categories of employees who are eligible to apply. These categories are based on the need to ensure that those employees who are eligible to request voluntary early retirement are those whose separations are consistent with one or more of the three reasons for which the Comptroller General may authorize early retirements. Pursuant to GAO’s regulations, these categories are defined in terms of one or more of the following criteria: organizational unit or subunits; grade or band level; skill or knowledge requirements; or other similar factors that the Comptroller General deems necessary and appropriate. Since it is essential that GAO retain employees with critical skills as well as its highest performers, certain categories of employees have been ineligible under the criteria. Some examples of ineligible categories are employees receiving retention allowances because of their unusually high or unique qualifications; economists, because of the difficulty that the agency has experienced in recruiting them; and staff in the information technology area. In addition, employees with performance appraisal averages above a specified level have not been eligible under the criteria.
To give the fullest consideration to all interested employees, however, any employee may apply for consideration when an early retirement opportunity is announced, even if he or she does not meet the stated criteria. Furthermore, under our order, the Comptroller General may authorize early retirements for these applicants on the basis of the facts and circumstances of each case. The Comptroller General or his EC designee considers each applicant and makes final decisions based on GAO’s institutional needs. Only employees whose release is consistent with the law and GAO’s objective in allowing early retirement are authorized to retire early. In some cases, this has meant that an employee’s request must be denied. GAO held its first voluntary early retirement opportunity in July 2001. Employees who were approved for early retirement were required to separate in the first quarter of fiscal 2002. As required by the act, information on the fiscal 2002 early retirements was reported in an appendix to our 2002 Performance and Accountability Report. Another voluntary early retirement opportunity was authorized in fiscal 2003, and employees were required to separate by March 14, 2003. In anticipation of the 3-year sunset on our authority to provide voluntary early retirements, I have recently announced a final voluntary early retirement opportunity under our current authority. Table 1 provides the data on the number of employees separated by voluntary early retirement as of May 30, 2003. As you can see from the table, of the 79 employees who separated from GAO through voluntary early retirement, 66, or 83.5 percent, were high-grade, supervisory, or managerial employees. High-grade, supervisory, or managerial employees are those who are GS-13s or above, if covered by GAO’s GS system; Band IIs or above, if covered by GAO’s banded systems for Analysts and Attorneys; or in any position in GAO’s SES or Senior-Level system.
In recommending that GAO’s voluntary early out authority be made permanent, I would like to point to our progress in changing the overall shape of the organization. The 1990s were a difficult period for ensuring that GAO’s workforce would remain appropriately sized, shaped, and skilled to meet client demands and agency needs. Severe downsizing of the workforce, including a suspension of most hiring from 1992 through 1997, and constrained investments in such areas as training, performance incentives, rewards, and enabling technology left GAO with a range of human capital and operational challenges to address. Over 3 years ago, when GAO sought additional human capital flexibilities, our workforce was sparse at the entry level and plentiful at the midlevel. We were concerned about our ability to support the Congress with experienced and knowledgeable staff over time, given the significant percentage of the agency’s senior managers and analysts reaching retirement eligibility and the small number of entry-level employees who were training to replace more senior staff. As illustrated in figure 1, by the end of fiscal year 2002, GAO had almost a 74 percent increase in the proportion of staff at the entry level (Band I) compared with fiscal year 1998. Also, the proportion of the agency’s workforce at the midlevel (Band II) decreased by 16 percent. In addition to authorizing voluntary early retirement for GAO employees, Public Law 106-303 permits the Comptroller General to offer voluntary separation incentive payments—buyouts—when necessary or appropriate to realign the workforce to meet budgetary constraints or mission needs; correct skill imbalances; or reduce high-grade, supervisory, or managerial positions. Under the act, up to 5 percent of employees could be offered such an incentive, subject to criteria established by the Comptroller General. The act requires GAO to deposit into the U.S. 
Treasury an amount equivalent to 45 percent of the final annual basic salary of each employee to whom a buyout is paid. The deposit is in addition to the actual buyout amount, which can be up to $25,000 for an approved individual. Given the many demands on agency resources, these costs present a strong financial disincentive to using the provision at all. GAO anticipates little, if any, use of this authority because of the associated costs. For this reason, as well as to avoid creating unrealistic employee expectations, GAO has not developed and issued agency regulations to implement this section of the act. Nevertheless, as stated earlier, it is prudent for us to seek the continuation of this provision because it maximizes the options available to the agency to deal with future circumstances. Since GAO is also eligible to request buyouts under the provisions of the Homeland Security Act, the agency will consider its options under this provision as well. However, under the Homeland Security Act, GAO would have to seek OPM approval of any buyouts, which raises serious independence concerns. Sections 3 and 4 of our proposal would provide GAO greater discretion in determining the annual across-the-board and locality pay increases for our employees. Under our proposal, GAO would have the discretion to set annual pay increases by taking into account alternative methodologies from those used by the executive branch and various other factors, such as extraordinary economic conditions or serious budgetary constraints. While the authority requested may initially appear to be broad-based, there are compelling reasons why GAO ought to be given such authority. First, as I discussed at the beginning of my testimony, GAO is an agency within the legislative branch and already has a hybrid pay system established under the authority the Congress granted over two decades ago. Therefore, our proposal represents a natural evolution in GAO’s pay for performance system.
Second, GAO’s proposal is not radical if viewed from the vantage point of the broad-based authority that has been granted the Department of Homeland Security (DHS) under the Homeland Security Act of 2002; the agencies that the Congress has already granted the authority to develop their own pay systems; the authorities granted to various demonstration projects over the past two decades; and the authority the Congress is currently contemplating providing the Department of Defense (DOD). Third, GAO already has a number of key safeguards and has plans to build additional safeguards into our modified pay system if granted this authority. Our proposal seeks to take a constructive step in addressing what have been widely recognized as fundamental flaws in the federal government’s approach to white-collar pay. These flaws and the need for reform have been addressed in more detail in OPM’s April 2002 white paper, A Fresh Start for Federal Pay: The Case for Modernization, and more recently in the National Commission on the Public Service’s January 2003 report on revitalizing the public service. The current federal pay and classification system was established over 60 years ago for a federal workforce that was made up largely of clerks performing routine tasks that were relatively simple to assess and measure. Today’s federal workforce is composed of much higher-graded and knowledge-based workers. Although there have been attempts over the years to refine the system by enacting such legislation as the Federal Employees Pay Comparability Act (FEPCA), which sought to address, among other things, the issue of pay comparability with the nonfederal sector, the system still contains certain fundamental flaws.
The current system emphasizes placing employees in a relative hierarchy of positions based on grade; is a “one size fits all” approach, since it does not recognize changes in local market rates for different occupations; and is performance insensitive, in that all employees are eligible for the automatic across-the-board pay increases regardless of their performance. Specifically, the annual across-the-board base pay increase, also commonly referred to as the cost of living adjustment (COLA) or the January pay increase, which the President recommends and the Congress approves, provides a time-driven annual raise keyed to the Employment Cost Index (ECI) to all employees regardless of performance. In certain geographic areas, employees receive a locality adjustment tied to the local labor markets. However, in calculating the locality adjustment, for example, it is my understanding that FEPCA requires the calculation of a single average, based on the dominant federal employer in an area, which does not sufficiently recognize the differences in pay rates for different occupations and skills. In view of the fact that today we are in a knowledge-based economy competing for the best knowledge workers in the job market, I believe that new approaches and methodologies are warranted. This is especially appropriate for GAO’s highly educated and skilled workforce. Our proposed pay adjustment provision and the other provisions of GAO’s human capital proposal are collectively designed to help GAO maintain a competitive advantage in attracting, motivating, retaining, and rewarding a high-performing and top-quality workforce both currently and in future years. First, under our proposal, GAO would no longer be required to provide automatic pay increases to employees who are rated as performing at a below satisfactory level.
Second, when the proposal is fully implemented, GAO would be able to allocate more of the funding currently devoted to automatic across-the-board pay adjustments for all employees to permanent base pay adjustments that would vary based on performance. In addition, our proposal would affect all GAO non-wage grade employees, including the SES and Senior Level staff. Ultimately, if GAO is granted this authority, all GAO employees who perform at a satisfactory level will receive an annual base pay adjustment composed of purchase power protection and locality-based pay increases, absent extraordinary economic circumstances or severe budgetary constraints. GAO will be able to develop and apply its own methodology for annual cost-of-living and locality pay adjustments. The locality pay increase would be based on compensation surveys conducted by GAO and tailored to the nature, skills, and composition of GAO’s workforce. The performance part of an employee’s annual raise would depend on the level of the employee’s performance and that employee’s pay band. We estimate that at least 95 percent of the workforce will qualify for an additional performance-based increase. However, under this provision, employees who perform below a satisfactory level will not receive an annual increase of either type. GAO’s major non-SES pay groups include (1) Analysts and Attorneys, which comprises the majority of our workforce and is our mission group; (2) the Professional Development Program (PDP) staff, which is our entry-level mission group; (3) the Administrative Professional Support Staff (APSS), which is for the most part our mission support group; and (4) Wage Grade employees, who primarily operate our print plant. Each of these groups currently operates in a different pay system.
Generally, our mission staff are all in pay bands whereby they currently receive the annual across-the-board base pay increase and locality pay increase similar to the GS pay system, along with performance-based annual increases that are based on merit. Generally, our mission support staff, with some exceptions, remain in a system similar to the GS pay system with its annual across-the-board pay increases, locality pay, quality step increases, and within grade increases. We are currently in the process of migrating the mission support staff into pay bands and a pay for performance system. Our Wage Grade staff will continue to be covered by the federal compensation system for trade, craft, and laboring employees. Because of the small number of employees and the nature of their work, we have no plans to apply the pay adjustment provision authority to this group. I would like to point out the tables in appendices I through IV, which succinctly describe how GAO plans to operationalize our authority under our proposed annual pay adjustment provision over time. GAO’s proposal for additional pay flexibility is reasonable in view of the authority the Congress has already granted DHS through the Homeland Security Act of 2002; the other agencies for whom the Congress has granted the authority to develop their own pay systems; the demonstration projects that OPM has authorized; and the authorities that other agencies in the executive branch are currently seeking (e.g., DOD). While we are aware that the passage of the Homeland Security Act of 2002 was not without its difficult moments, particularly with respect to the broad-based authorities granted the department, we are also aware that the process employed by DOD and certain of its human capital proposals are highly controversial.
It is important to point out that GAO’s proposal and proposed pay flexibilities pale in comparison with those granted to DHS and those requested by DOD in the Defense Transformation for the 21st Century Act of 2003. Collectively, these two agencies represent almost 45 percent of the non-postal federal civilian workforce. Specifically, in November 2002, the Congress passed the Homeland Security Act of 2002, which created DHS and provided the department with significant flexibilities to design a modern human capital management system, which could have the potential, if properly developed, for application governmentwide. DOD’s proposed National Security Personnel System (NSPS) would provide wide-ranging changes to its civilian personnel pay and performance management systems, collective bargaining, rightsizing, and a variety of other human capital areas. NSPS would enable DOD to develop and implement a consistent, DOD-wide civilian personnel system. In addition to DHS, there are a number of federal agencies with authority for their own pay systems. Some of these agencies are, for example, the Congressional Budget Office (CBO), which is one of our sister agencies in the legislative branch; the Federal Aviation Administration (FAA); the Securities and Exchange Commission (SEC); and the Office of the Comptroller of the Currency (OCC) within the Department of the Treasury. When the Congress created CBO in 1974, it granted that legislative branch agency significant flexibilities in the human capital area. For example, CBO has “at will” employment. In addition, CBO is not subject to the annual executive branch pay adjustments. Further, CBO has extensive flexibility regarding its pay system, subject only to certain statutory annual compensation limits. Furthermore, there are twelve executive branch demonstration projects involving pay for performance.
These projects have taken different approaches to the sources of funding for salary increases that are tied to performance and not provided as entitlements. Many of the demonstration projects reduce or deny the annual across-the-board base pay increase for employees with unacceptable ratings (e.g., the Department of the Navy’s China Lake demonstration, DOD’s Civilian Acquisition Workforce demonstration, the Department of the Air Force’s Research Laboratory demonstration, and the Department of the Navy’s Research Laboratory demonstration, among others). Others, including the National Institute of Standards and Technology and the Department of Commerce demonstration projects, deny both the annual across-the-board base pay increase and the locality pay adjustment for employees with unacceptable ratings. Currently, the Congress is considering a NASA human capital proposal. This proposal would provide NASA with further flexibilities and authorities for attracting, retaining, developing, and reshaping a skilled workforce. These include a scholarship-for-service program; a streamlined hiring authority for certain scientific positions; larger and more flexible recruitment, relocation, and retention bonuses; noncompetitive conversions of term employees to permanent status; a more flexible critical pay authority; a more flexible limited-term appointment authority for the SES; and greater flexibility in determining annual leave accrual rates for new hires. As we have testified, agencies should have modern, effective, credible, and, as appropriate, validated performance management systems in place with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure fairness and prevent politicization and abuse.
While GAO’s transformation is a work in progress, we believe that we are ahead of executive branch agencies in having the human capital infrastructure in place to provide such safeguards and implement a modified pay system that is more performance oriented. Specifically, for our Analyst pay group, we have gone through the first cycle of a validated performance management system that has adequate safeguards, including reasonable transparency and appropriate accountability mechanisms. We have learned from what has worked and what improvements can and should be made with respect to the first cycle. In fact, we have adopted many of the recommendations and suggestions of our managing directors and EAC and are now in the process of implementing these suggestions. The following is an initial list of possible safeguards, developed at the request of Congressman Danny Davis, for the Congress to consider to help ensure that any pay for performance systems in the government are fair, effective, and credible. GAO’s current human capital infrastructure has most of these safeguards built in, and the others are in the process of being incorporated.

Assure that the agency’s performance management systems (1) link to the agency’s strategic plan, related goals, and desired outcomes and (2) result in meaningful distinctions in individual employee performance. This should include consideration of critical competencies and achievement of concrete results.

Involve employees, their representatives, and other stakeholders in the design of the system, including having employees directly involved in validating any related competencies, as appropriate.
Ensure that certain predecisional internal safeguards exist to help achieve the consistency, equity, nondiscrimination, and nonpoliticization of the performance management process (e.g., independent reasonableness reviews by the human capital offices and/or the offices of opportunity and inclusiveness or its equivalent in establishing and implementing a performance appraisal system, as well as reviews of performance rating decisions, pay determinations, and promotion actions before they are finalized to ensure that they are merit-based; internal grievance processes to address employee complaints; and pay panels predominately made up of career officials who would consider the results of the performance appraisal process and other information in making final pay decisions). Assure reasonable transparency and appropriate accountability mechanisms in connection with the results of the performance management process (e.g., publish overall results of performance management and pay decisions while protecting individual confidentiality, and report periodically on internal assessments and employee survey results). We have provided a statutory period minimum to allow for a smooth implementation of the law as it applies to both our mission and mission support staff. Specifically, for our Analyst and Attorney communities, we plan to allow for at least a two-year period, during which they will continue to receive their annual across the board pay raise and their locality pay, if applicable, based on the amount set by the GS system. Once the proposal is fully implemented, the new across-the-board increase, which provides for inflation protection and locality pay where applicable, would be computed based on GAO compensation studies, and the performance- based merit pay would be provided based on an employee’s performance. 
For our APSS employees, the transition period of at least 2 years would allow for a smooth migration to the pay bands and the implementation of at least one performance cycle of a newly validated competency-based performance appraisal system for that component of GAO’s workforce. Our APSS employees are currently still in the GS system, but we are in the process of moving them into pay bands. We will allow time for the group to migrate to broad bands and to have at least one performance cycle under pay bands before moving it into the new pay system. Therefore, as with the analysts and attorneys, the administrative support staff will move into a hybrid pay system once they migrate to pay bands. Also, as with the analysts and attorneys, I have committed to providing them “pay protection.” This guarantee would continue even after GAO’s authority to adjust pay is fully implemented. We have a small Wage Grade community of fewer than 20 employees. As mentioned earlier, we do not contemplate having the pay adjustment provision apply to them.

“Pay Protection” Guarantee

My predecessor, Comptroller General Charles A. Bowsher, provided the analysts and attorneys a “pay protection” guarantee at the time of their conversion to broad bands. This guarantee, later spelled out in a GAO order, provided that analysts and attorneys rated as meeting expectations in all categories would fare at least as well under pay bands as under the GS system. This guarantee would not apply to employees who are promoted after conversion or demoted, or to new employees hired after the conversion. It is my understanding that this guarantee provided by my predecessor is unique to GAO and has generally not been applied by other agencies that have migrated their employees to pay bands. Currently, 535 GAO employees are still covered by this “pay protection” guarantee, while fewer than 10 employees annually have their pay readjusted after the merit pay process. 
I have committed to GAO employees that even if we receive the new pay adjustment authority, I would still honor my predecessor’s pay protection guarantee. In addition, our mission support staff will also receive this guarantee upon conversion to pay bands. This guarantee will continue through the implementation period for our new human capital authority. Section 5 of our proposal would allow GAO to withhold automatic increases in basic pay from an employee demoted as a result of workforce restructuring or reclassification, keeping him or her at his or her current rate until that rate is less than the maximum rate of the new position. Under current law, the grade and pay retention provisions allow employees to continue to be paid at a rate that exceeds the value of the duties they are performing for an extended period. Specifically, employees who are demoted (e.g., incur a loss of grade or band) due to, among other things, reduction-in-force procedures or reclassification receive full statutory pay increases for 2 years and then receive 50 percent of the statutory pay increases until the pay of their new positions falls within the range of pay for those positions. We believe that this antiquated system is inconsistent with the merit principle that there should be equal pay for work of equal value. This authority would allow us to immediately place employees in the band or grade commensurate with their roles and responsibilities. It is important to note that we have a key safeguard: employees whose basic pay exceeds the maximum rate of the grade or band in which the employee is placed will not have their basic pay reduced. These employees, who would still be eligible to increase their overall pay through certain types of performance-based awards (e.g., incentive awards), would retain this rate until their basic pay is less than the maximum for their grade or band. 
As with all the provisions in our proposal, we will not implement this pay retention provision until we have consulted with the EAC and managing directors and have provided all GAO employees an opportunity for notice and comment on any regulations. Section 6 would provide GAO the authority, in appropriate circumstances, to reimburse employees for some relocation expenses when transfers do not meet current legal requirements for entitlement to reimbursement but still benefit GAO. Under current law, employees who qualify for relocation benefits are entitled to full benefits; however, employees whose transfer may be of some benefit or value to the agency would not be eligible to receive any reimbursement. This provision would provide these employees some relief from the high cost of relocating while at the same time allowing GAO the flexibility to promulgate regulations in order to provide such relief. This authority has been previously granted to other agencies, including the FAA. Section 7 of the proposal provides GAO the authority to provide 160 hours (20 days) of annual leave to appropriate employees in high-grade, managerial, or supervisory positions who have less than 3 years of federal service. This is a narrowly tailored authority that would apply only to GAO and not to executive branch agencies. While it has been a long-standing tenet that all federal employees earn annual leave based on years of federal service, we believe that there is substantial merit in revisiting this in view of today’s human capital environment and challenges. We have found that, in recruiting experienced mid- and upper-level hires, the loss of leave they would incur upon moving from the private to the federal sector is a major disincentive. 
For example, an individual, regardless of the level at which he or she first enters the federal workforce, is eligible to earn 4 hours of annual leave for each pay period and, therefore, could accrue a total of 104 hours (13 days) annually, so long as he or she does not use any of that leave during the year. This accrual rate increases to 6 hours of annual leave per pay period after 3 years of federal service. By increasing the annual leave that certain newly hired officers and employees may earn, this provision is designed to help attract and retain highly skilled employees needed to best serve the Congress and the country. Section 8 would authorize GAO to establish an executive exchange program between GAO and private sector entities. Currently, GAO has the authority to conduct such an exchange with public entities and nonprofit organizations under the Intergovernmental Personnel Act; there is no such authority for private sector exchanges. Under this program, high-grade, managerial, or supervisory employees from GAO may work in the private sector, and private sector employees may work at GAO. While GAO will establish the details of this program in duly promulgated regulations, we have generally fashioned, with exceptions where appropriate, the legal framework for this program on the Information Technology Exchange Program authorized by Public Law 107-347, the E-Government Act of 2002, which the Congress enacted to address human capital challenges within the executive branch in the information technology area. While the Information Technology Exchange Program involves only information technology exchanges, GAO’s exchange program will cover not only those who work in information technology fields, but also accountants, economists, lawyers, actuaries, and other highly skilled professionals. This program will help us address certain skills imbalances in such areas as well as a range of succession planning challenges. 
Specifically, by fiscal year 2007, 52 percent of our senior executives, 37 percent of our management-level analysts, and 29 percent of our analysts and related staff will be eligible for retirement. Moreover, at a time when a significant percentage of our workforce is nearing retirement age, marketplace, demographic, economic, and technological changes indicate that competition for skilled employees will be greater in the future, making the challenge of attracting and retaining talent even more complex. One of the key concerns raised in the past regarding private sector exchange programs has been the issue of conflict of interest. We believe that in this regard GAO differs from executive branch agencies in that, as reviewers, we are not as subject to potential conflicts of interest. Nevertheless, it is important to note in requesting this authority that we have made clear that the private sector participants would be subject to the same laws and regulations regarding conflict of interest, financial disclosure, and standards of conduct applicable to all employees of GAO. Under the program, private sector participants would receive their salaries and benefits from their employers, and GAO need not contribute to these costs. We believe that this will also encourage private sector individuals to devote a portion of their careers to the public sector without incurring substantial financial sacrifice. Section 9 would change the name of our agency from the “General Accounting Office” to the “Government Accountability Office.” At the same time, the well-known acronym “GAO,” which has over 80 years of history behind it, will be maintained. We believe that the new name will better reflect the current mission of GAO as incorporated into its strategic plan, which was developed in consultation with the Congress. As stated in GAO’s strategic plan, our activities are designed to ensure the executive branch’s accountability to the American people. 
Indeed, the word accountability is one of GAO’s core values along with integrity and reliability. These core values are also incorporated in GAO’s strategic plan for serving the Congress. The GAO of today is a far cry from the GAO of 1921, the year that the Congress established it through the enactment of the Budget and Accounting Act. In 1921, GAO pre-audited agency vouchers for the legality, propriety, and accuracy of expenditures. In the 1950s, GAO’s statutory work shifted to the comprehensive auditing of government agencies. Later, beginning during the tenure of Comptroller General Elmer B. Staats, GAO’s work expanded to include program evaluation and policy analysis. Whereas GAO’s workforce consisted primarily of accounting clerks during the first three decades of its existence, today it is a multidisciplinary professional services organization with staff reflecting the diversity of knowledge and skills needed to deliver a wide range of services to the Congress. Although currently less than 15 percent of agency resources are devoted to traditional auditing and accounting activities, members of the public, the press, as well as the Congress often incorrectly assume that GAO is still solely a financial auditing organization. In addition, our name clearly confuses many potential applicants, who assume that GAO is only interested in hiring accountants. We believe that the new name will help attract applicants and address certain “expectation gaps” that exist outside of GAO. In conclusion, I believe that GAO’s human capital proposal merits prompt passage by this committee and, ultimately, the Congress. We have used the narrowly tailored flexibilities the Congress provided us previously in Public Law 106-303 responsibly, prudently, and strategically to help posture GAO to ensure the accountability of the federal government for the benefit of the Congress and the American people. 
Although some elements of our initial straw proposal were controversial, we have made a number of changes, clarifications, and commitments to address various comments and concerns raised by GAO employees. We recognize that the pay adjustment provision of this proposal remains of concern to some of our staff. However, we believe that it is vitally important to GAO’s future that we continue modernizing and updating our human capital policies and system in light of the changing environment and anticipated challenges ahead. We believe that the proposal as presented and envisioned is well reasoned and reasonable, with adequate safeguards for GAO employees. Given our human capital infrastructure and our unique role in leading by example in major management areas, including human capital management, the federal government could benefit from GAO’s experience with pay-for-performance systems. Overall, we believe that this proposal represents a logical incremental advancement in modernizing GAO’s human capital policies, and, with your support, we believe that it will make a big difference for the GAO of the future. Chairwoman Jo Ann Davis, Mr. Davis, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have. For further information regarding this testimony, please contact Sallyanne Harper, Chief Mission Support Officer, on (202) 512-5800 or at [email protected], or Jesse Hoskins, Chief Human Capital Officer, on (202) 512-5553 or at [email protected].

[The appendix charts comparing pay elements across GAO’s pay groups (Analysts and Attorneys in broad bands, APSS staff, and Wage Grade staff) could not be reproduced here; the recoverable notes from those charts follow.]

Absent extraordinary economic circumstances or serious budgetary constraints, staff will receive base pay and locality pay according to the same adjustment provided to executive branch employees. All such GAO staff will also be eligible for additional performance-based merit pay increases, performance bonuses (if pay capped)/dividends, and incentive awards. During the transition period, GAO will continue to raise the pay cap for its pay bands commensurate with executive branch pay cap increases absent extraordinary economic circumstances or serious budgetary constraints. The Executive Committee will determine on an annual basis which categories, if any, are eligible for bonuses and dividends. The percentage allocated to each type of pay increase varies annually, and the Executive Committee will determine on an annual basis which pay categories, if any, are eligible for PDP bonuses.

One chart applies only to APSS employees who are under the General Schedule (GS) system; APSS employees who are already in broad bands should see the chart for Analysts and Attorneys. The “pay protection” guarantee will not apply to staff who are promoted after conversion or demoted, or to new employees hired after the conversion. APSS staff will be eligible for performance-based merit increases, performance bonuses (if pay capped)/dividends, and incentive awards. During the transition period, GAO will continue to raise the pay cap for its pay bands commensurate with executive branch pay cap increases. The Executive Committee will determine on an annual basis which pay categories, if any, are eligible for bonuses and dividends. The Wage Grade chart listed the quality step increase (QSI) and within-grade increase (WIG).

This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The Subcommittee on Civil Service and Agency Organization, House Committee on Government Reform seeks GAO's views on its latest human capital proposal that is slated to be introduced as a bill entitled the GAO Human Capital Reform Act of 2003. As an arm of the legislative branch, GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the American people. Unlike many executive branch agencies, which have either recently received or are just requesting new broad-based human capital tools and flexibilities, GAO has had certain human capital tools and flexibilities for over two decades. GAO's latest proposal combines diverse initiatives that, collectively, should further GAO's ability to enhance its performance, assure its accountability, and help ensure that it can attract, retain, motivate, and reward a top-quality and high-performing workforce currently and in future years. Specifically, GAO is requesting that the Congress (1) make permanent GAO's 3-year authority to offer early outs and buyouts, (2) allow GAO to set its own annual pay adjustment system separate from the executive branch, (3) permit GAO to set the pay of an employee demoted as a result of workforce restructuring or reclassification to keep his/her basic pay but to set future increases consistent with the new position's pay parameters, (4) provide authority to reimburse employees for some relocation expenses when that transfer has some benefit to GAO but does not meet the legal requirements for reimbursement, (5) provide authority to place upper-level hires with fewer than 3 years of federal experience in the 6-hour leave category, (6) authorize an executive exchange program with the private sector, and (7) change GAO's legal name from the "General Accounting Office" to the "Government Accountability Office." 
GAO has used the narrowly tailored flexibilities granted by the Congress previously in Public Law 106-303, the GAO Personnel Flexibilities Act, responsibly, prudently, and strategically. GAO believes that it is vitally important to its future to continue modernizing and updating its human capital policies and system in light of the changing environment and anticipated challenges ahead. GAO's proposal represents a logical incremental advancement in modernizing GAO's human capital policies. Based on employee feedback, there is little or no concern relating to most of the proposal's provisions. Although some elements of GAO's initial straw proposal were controversial (e.g., GAO's pay adjustment provision), the Comptroller General has made a number of changes, clarifications, and commitments to address employee concerns. While some employees remain concerned about the pay adjustment provision, GAO believes that those concerns have been reduced considerably by the clarifications, changes, and commitments the Comptroller General has made. Given GAO's human capital infrastructure and unique role in leading by example in major management areas, the rest of the federal government can benefit from GAO's pay system experience.
|
The Rehabilitation Act, as amended, sets out a formula for distributing VR grants to states and territories. Through this formula, a portion of the funds appropriated for the VR program is distributed to states based upon the grant allotment they received for fiscal year 1978. States’ 1978 allotments served to ensure that no state experienced a funding decrease when the formula was revised through a 1978 amendment to the Rehabilitation Act. Of the remainder of the funds, one-half is distributed based upon states’ general population and a factor that compares their per capita income to the national per capita income, and the other one-half, according to their population and the square of the per capita income factor. The larger a state’s population, the more funds it will receive. Conversely, the higher a state’s per capita income compared to the national level, the lower its allotment will be. The squaring of per capita income increases its influence on a state’s allotment. However, the formula mitigates the effect of per capita income for states with very high or very low per capita income levels by setting upper and lower limits. Ultimately, the final allotment for a state cannot be less than 1/3 of 1 percent of the total amount appropriated, or $3 million, whichever is greater. In fiscal year 2008, the minimum allotment was $9.5 million, and 6 states were allotted this amount. See appendix II for further information on the funding formula. The Act requires states to share in funding the costs of the VR program. Specifically, the Act sets the federal share for the funding of a state’s VR program at 78.7 percent. As a result, in order to receive its full federal allotment, each state must contribute at least 21.3 percent of the funds for its VR program. In cases where states do not meet this matching requirement, the unmatched federal funds are redistributed to other states near the end of the fiscal year. 
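To make the formula’s moving parts concrete, they can be sketched in simplified form. This is an illustration only, not Education’s actual computation: the function names are ours, the exact form of the per capita income factor is an assumption, and the sketch omits the fiscal year 1978 base amounts, the upper and lower limits on the income factor, and the final scaling of state shares to the appropriation.

```python
# Simplified sketch of the VR allotment mechanics described above.
# Illustrative only; see the lead-in for what is omitted or assumed.

def pci_factor(state_pci, national_pci):
    """Per capita income factor: lower state income -> larger factor.
    The exact functional form here is an assumption for illustration."""
    return 1.0 - 0.5 * (state_pci / national_pci)

def allotment_weight(population, state_pci, national_pci):
    """Half of the remaining funds are weighted by population times the
    income factor, and half by population times the factor squared
    (squaring increases the influence of per capita income)."""
    f = pci_factor(state_pci, national_pci)
    return 0.5 * population * f + 0.5 * population * f ** 2

def federal_and_state_shares(program_cost):
    """The Act sets the federal share at 78.7 percent; a state must
    contribute the remaining 21.3 percent to draw its full allotment."""
    return 0.787 * program_cost, 0.213 * program_cost
```

As the sketch shows, a state with per capita income below the national average receives a larger weight per resident, and squaring the factor amplifies that effect.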
The Act also calls for annual funding increases to the VR program, overall, to be pegged, at a minimum, to the increase in the Consumer Price Index for All Urban Consumers (CPI-U). However, funding changes for an individual state may differ from the change to the CPI-U in any given year because state allocations are, ultimately, determined by the funding formula. In redistributing the funds, Education currently gives priority to those states that did not receive an inflation-adjusted increase over their prior year’s allotment. Under the VR program, state VR agencies are to provide vocational rehabilitation services for individuals with disabilities—consistent with the individual’s strengths, resources, priorities, abilities, interests, and informed choice—so that they may prepare for and engage in gainful employment. To do so, state agencies provide a variety of services to individuals such as job placement assistance, medical treatment, postsecondary education, occupational training, and assistive technologies. Individuals may be eligible for VR services if they have a physical or mental impairment that constitutes or results in a substantial impediment to employment, and if they need VR services to prepare for, secure, retain, or regain employment. According to the Rehabilitation Act, if state VR agencies determine that they will not have enough funding to serve all eligible individuals who apply for services, they may state the order in which they will select individuals for services. Agencies using such an “order of selection” process must develop criteria for ensuring that individuals with the most significant disabilities will be selected first for services. The current VR funding formula does not include factors for rewarding agency performance; however, pursuant to the Rehabilitation Act, Education evaluates state VR agencies’ performance using a set of performance indicators. 
These indicators are designed to assess how well the agencies are helping individuals obtain, maintain, or regain high- quality employment and, also, how well they are ensuring that individuals from minority backgrounds have equal access to VR services. The Rehabilitation Act gives Education the authority to reduce or suspend payments to a state agency whose performance falls below a certain level and fails to enter into a program improvement plan or to substantially comply with the terms and conditions of such a plan. The VR funding formula does not achieve equity for beneficiaries—the individuals likely to be served by the VR program—for two reasons. First, it does not recognize differences among states in the size of their populations potentially needing VR services. Second, it does not account for state differences in the costs of providing those services. By targeting funds based on a state’s general population, the formula assumes that the proportion of people needing services is largely the same from state to state. In fact, as shown in figure 1, the proportion of the general population that is working-aged and has a disability varies across states, from 5.6 percent (in New Jersey) to 12.8 percent (in West Virginia) in 2007. See appendix III for information on disability rates in each state. In effect, the formula treats alike any two states with similar population sizes, irrespective of the size of their working-aged disability population. For example, New Mexico has a slightly greater population than West Virginia (2.0 million and 1.8 million, respectively), and, therefore, would receive more funding under the current formula (all other things being equal) than West Virginia. However, working-aged people with disabilities comprise nearly 13 percent of West Virginia’s population, compared to 8.7 percent in New Mexico. 
By not factoring in state disability populations, the formula does not account for West Virginia having over 60,000 more working-aged people with disabilities than New Mexico. Education officials and one expert we spoke with speculated that the formula’s use of per capita income might serve to target funds to states with higher rates of disability, since people with disabilities have, on average, lower incomes. We found only some correlation between states’ disability rates and their per capita income; per capita income is thus, at best, an imprecise measure of states’ disability rates. The funding formula also fails to account for differences among states in the cost of providing VR services. Focusing on average wages and rents in each state, we estimated that the cost is 13 percent below the national average in Idaho, for example, while it is 13 percent above average in Massachusetts. This means that Massachusetts would need to pay $1.13 for the same set of services that Idaho could purchase for $0.87. By not taking into account cost differences, VR allocations purchase fewer services in states that have higher costs. See appendix IV for a table of our estimates of state cost differences. Also, see appendix I for information on how we estimated state service costs. Not accounting for state differences in both disability populations and cost of services results in a substantial variation in the amount of services that states are able to purchase per person with a disability, from a low of $83 in Connecticut, to a high of over three times as much—$277 in North Dakota. Figure 2 shows estimated state VR allotments, per working-aged person with a disability, based on fiscal year 2008 funding, adjusted for differences in costs of wages and rents between states. See appendix V for a state-by-state listing of VR grant allocations and cost-adjusted allotments per person with a disability. 
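The comparison in figure 2 amounts to deflating each state’s per-person allotment by a relative cost index (1.00 = national average). A minimal sketch; the function name is ours, and the 1.13 and 0.87 indexes simply echo the Massachusetts and Idaho examples above.

```python
def cost_adjusted_per_person(allotment, disability_population, cost_index):
    """Allotment per working-aged person with a disability, deflated by
    the state's relative cost of services (1.00 = national average)."""
    return (allotment / disability_population) / cost_index

# The same nominal dollars buy fewer services where costs are higher:
# a $100-per-person allotment is worth about $88.50 of services at a
# 1.13 cost index, versus about $114.94 at a 0.87 index.
```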
While proper measures of need and cost are important for both beneficiary and taxpayer equity, the VR funding formula lacks equity for state taxpayers, in particular, because its measure of a state’s ability to contribute to the VR program is limited to per capita income and does not include all potentially taxable resources. Per capita income is based on the personal income in a state, including income received by state residents in the form of wages, rents, and interest income. However, using only this measure excludes certain categories of corporate income that are not received as income by state residents. For example, the formula does not factor in corporate income that is retained by corporations for investment purposes, which could theoretically be subject to state taxation through corporate income taxes. The formula also excludes business income received by out-of-state residents, such as dividends, that is potentially taxable by the state. Although states may differ in their decisions about whether to tax these resources, the measure used in a funding formula to compare states’ ability to finance a program should capture all possible revenue resources and should not be affected by an individual state’s fiscal decisions. Treasury’s Total Taxable Resources provides more comprehensive data on the amount of resources that are potentially taxable in each state. Comparing states’ per capita income with their total taxable resources shows that, for most states, the two measures are similar. However, the formula’s use of per capita income particularly understates the taxable resources in certain states and overstates them in others (see figure 3). For example, the ratio of per capita income to total taxable resources per capita is 0.80 in Alaska, which suggests that the use of per capita income in the formula understates Alaska’s taxable resources by 20 percent. 
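The 20 percent figure follows directly from the ratio: if per capita income is 0.80 of total taxable resources per capita, then per capita income understates taxable resources by 1 − 0.80, or 20 percent. A trivial check (the function name is ours):

```python
def pci_understatement(pci_to_ttr_ratio):
    """Fraction by which per capita income understates a state's total
    taxable resources, given the ratio of PCI to TTR per capita."""
    return 1.0 - pci_to_ttr_ratio

# Alaska's reported ratio of 0.80 implies a 20 percent understatement.
```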
The formula’s use of per capita income especially understates the taxable resources in energy-exporting states, such as Alaska and Wyoming, and in states with numerous corporate headquarters, such as Delaware. The lack of precision in using per capita income is accentuated by the squaring of the per capita income factor in the formula. See appendix VI for a comparison of per capita income and total taxable resources in each state. Also, see appendix I for a more detailed explanation of our analyses of per capita income and total taxable resources data and appendix II for a detailed explanation of the current formula. In fiscal year 2008, 27 percent of federal VR funds were distributed to states based upon the amount of funds they received for fiscal year 1978. This provision of the formula served a purpose when the formula was last revised, in 1978, to ensure that no state experienced a funding decrease. However, most disability experts we spoke with considered this provision outdated and no longer an appropriate factor for distributing VR funds. The Congressional Research Service also reported that due to the 1978 allotment, VR funding allotments do not fully reflect population changes since the mid-1970s. Most state agencies that responded to our survey indicated satisfaction with the current formula. Of respondents, 62 percent (46 of 74) expressed the view that the current formula is appropriate, while 31 percent (23 of 74) viewed it as inappropriate. When asked about specific parts of the formula, opinions varied. For example, 86 percent (64 of 74) considered the use of general population to be appropriate, while only 27 percent (20 of 74) considered the 1978 allotment provision to be appropriate. See appendix IX for responses to our survey questions. However, in their comments to the survey and in interviews, some state agency officials asserted that the formula does not provide them with adequate funds. 
For instance, VR officials we spoke with in Massachusetts and Maryland said that due to current funding allotments, their agencies are on an “order of selection,” in which they give priority to individuals with significant disabilities and place other individuals on waiting lists. When we compared allotments per person with a disability against order of selection status, we found that states that receive less funding per person with a disability were somewhat more likely to report being under an order of selection than those states that receive relatively more funding. Specifically, we found in fiscal year 2008 that among states with lower than median allotments per person with a disability (adjusting for costs), 72 percent reported being under an order of selection, compared to 52 percent of states above the median. However, the data do not explain whether, or the extent to which, the VR funding formula is causing states to be under an order of selection. For example, many states above the median allotment are also under orders of selection. Further, in interviews state VR officials indicated that factors other than allotment levels could also influence a state’s decision to be under an order of selection, such as the level of state resources provided to the VR program, the effectiveness of the agency’s management of program costs, and the agency’s decisions about how to use existing funding. There are a number of ways to redesign the VR funding formula to achieve greater equity for beneficiaries or taxpayers, or to balance equity for both. We present three options, or prototypes, to illustrate the range of possibilities. See appendix VII for a more detailed description of each formula option. For each of these options, we have retained the minimum allotment that the current formula provides to ensure that each state would receive at least a certain level of funds for its VR program. 
A partial beneficiary equity formula: This option bases allocations solely on the size of a state’s population potentially needing VR services. To measure the need population, this option would use data on the states’ civilian working-aged disability populations from the Census Bureau’s ACS. A full beneficiary equity formula: This option likewise allocates funds based on states’ working-aged disability populations using Census data, but it adds estimates of the cost of providing VR services in each state. These cost estimates reflect differences among states with respect to two basic costs (i.e., wages and rents), which underlie the provision of many VR services. We developed estimates of state costs using data on wages from BLS and on rents from HUD. This option does not reflect differences in other types of basic costs for which reliable data may not be readily available. See appendix I for further information on the development of our cost estimates. A taxpayer equity formula: This option also distributes funds based on states’ working-aged disability populations and the cost of providing VR services, but it adds a third factor to the formula—a measure of each state’s ability to contribute to the VR program. More funds would be allocated to states with fewer taxable resources. To measure a state’s ability to finance the VR program, we utilized data from Treasury on a state’s total taxable resources, which includes per capita income as well as other sources of potentially taxable state income, such as corporate income produced within the state but not received by state residents. For the taxpayer equity option, an issue to consider is whether the matching requirement would be the same across all states, as is the case with the current formula, or would vary based upon a state’s ability to finance the VR program. To fully achieve taxpayer equity, the matching requirement would need to vary according to each state’s financing ability. 
If the matching requirement were the same for all states, those with fewer resources would receive more federal funds but would also need to provide more state funds for the match. This could result in poorer states having to contribute a greater share of their resources to the VR program than wealthier states. See appendix VII for an explanation of how a variable matching requirement could be incorporated into the taxpayer equity option. Table 1 shows the amount of funds redistributed among states, as well as the number of states gaining and losing funds, for each of the three formula options. For example, each of our three prototypes would redistribute approximately 4 to 6 percent of the VR funds, with about 20 states receiving more funding and at least 20 states receiving less funding than they do under the current formula. Between 5 and 11 states would experience a change in funding levels of 20 percent or more. See appendix VIII for a state-by-state table of allocations under our three formula options. In our survey of state VR agencies, many respondents expressed reservations about options for revising the current funding formula. Our survey presented state agencies with three general approaches to revising the formula, roughly based on our three formula options. Most respondents expressed reservations about options that were generally based on partial beneficiary and taxpayer equity, and they were divided on the option generally based on full beneficiary equity. Specifically, 45 percent of respondents expressed support for an approach that would distribute funds so that all states would receive funding to be able to provide the same level of services to each individual potentially eligible for VR services, taking into account certain differences in the cost of providing services, while 47 percent expressed disapproval of this approach, and the remainder expressed no opinion or preference. 
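The following sketch illustrates one simple way the taxpayer equity option’s three factors could be combined: need weighted by a cost index, then scaled inversely by taxable resources. It is not the report’s actual allocation methodology; the state figures, the multiplicative form, and the omission of the minimum allotment are all simplifying assumptions.

```python
# Simplified sketch of a taxpayer equity allocation; hypothetical data.
# A real version would use ACS disability counts, BLS/HUD cost estimates,
# and Treasury Total Taxable Resources data, plus a minimum allotment.

def taxpayer_equity_shares(states):
    """states: dict of name -> (need_pop, cost_index, ttr_per_capita).
    Need is weighted by cost, then scaled inversely by taxable resources,
    so states with fewer resources receive relatively more funds."""
    raw = {s: need * cost / ttr for s, (need, cost, ttr) in states.items()}
    total = sum(raw.values())
    return {s: v / total for s, v in raw.items()}

states = {
    "State A": (100_000, 1.00, 1.00),  # average cost, average resources
    "State B": (100_000, 1.10, 0.85),  # higher cost, fewer resources
}
shares = taxpayer_equity_shares(states)
# State B's share exceeds State A's despite identical need populations.
assert shares["State B"] > shares["State A"]
assert abs(sum(shares.values()) - 1.0) < 1e-9
```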
When a new federal formula is implemented, Congress often provides a transition period so that grant recipients have time to adjust, especially those recipients whose grants will be reduced. An abrupt reduction in funding level could disrupt a state agency’s ability to provide VR services. Transition periods allow for greater predictability and stability in state funding levels, which, in turn, help avoid major disruptions to existing state services and allow states to develop long-range plans and program commitments. One way to ease the change to a new formula is to phase it in gradually over a number of years. During the phase-in period, the state allocations would be a combination of the old and new formulas, with a gradual increase in the portion of funding distributed through the new formula, until the phase-in period is complete. By way of example, figure 4 depicts a 5-year transition period, under which the amount of money allocated under the old formula would be reduced by 20 percent each year, and the amount allocated under the new formula would be increased by 20 percent each year, until all of the allocations are made using the new formula. To further minimize the disruptive effects of a new formula, the phase-in period could be longer, although this would, of course, postpone full use of the new formula. Another approach to minimize disruption to state VR programs is to establish a hold harmless provision that limits the amount of funding that states could lose under a new formula. One example of this approach would be to hold states entirely harmless in the first year that the new formula is implemented, but to allow minimal decreases during the second and successive years, such as by 1 or 2 percent. Because state agencies could also have difficulties adjusting to large and sudden funding increases, limits could also be set on the increases that states would receive from one year to the next. 
This graduated approach would allow agencies to better plan for the additional funds and manage growth in their VR programs. It should be noted that use of a hold harmless provision would effectively reduce the amount of funds available for distribution through the new formula in the early years of a change because most of the funds would be allocated through the hold harmless provision. However, over time, as the total amount of funds appropriated for the VR program increases, more of the funds would be allocated through the new formula. Some research and experience suggest that providing financial incentive awards based on program performance has the potential to improve government programs, and a slight majority of VR agencies surveyed are open to using them in the VR program. Some federal programs currently provide incentive awards and officials we spoke with from three of these programs noted some benefits, such as motivating state and local agencies to improve performance. Of the state VR agency officials who responded to our survey, 59 percent were open to including some form of incentive awards in the VR program. Some state officials noted that doing so could reward high-performing agencies, improve VR client success, or motivate agencies to focus on continuous improvement. Nevertheless, there are challenges to incorporating incentive awards into the VR program, whether through its funding formula or outside it, due in part to the multiple and potentially competing facets of the VR program’s mission. According to the Rehabilitation Act, state VR programs should help clients achieve employment by providing individualized services, while also prioritizing service to those with the most significant disabilities when agencies cannot provide services to all eligible applicants. 
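The phase-in and hold harmless mechanics described above can be illustrated with a short sketch. The dollar amounts and the 2 percent decrease cap below are hypothetical.

```python
# Sketch of a 5-year phase-in (the old-formula share falls 20 points a year)
# with an optional hold harmless cap on year-over-year decreases.
# All dollar amounts are hypothetical.

def phased_allotment(old_amt, new_amt, year, phase_years=5):
    """Blend of old- and new-formula allotments in transition year 1..phase_years;
    by the final year the new formula fully applies."""
    new_weight = min(year / phase_years, 1.0)
    return (1 - new_weight) * old_amt + new_weight * new_amt

def hold_harmless(prev_amt, proposed_amt, max_decrease=0.02):
    """Limit a state's loss to max_decrease (e.g., 2 percent) per year."""
    floor = prev_amt * (1 - max_decrease)
    return max(proposed_amt, floor)

old, new = 10_000_000, 8_000_000  # a state slated to lose funds
# Year 1 of a 5-year phase-in: 80 percent old formula, 20 percent new.
assert round(phased_allotment(old, new, 1)) == 9_600_000
# Year 5: fully on the new formula.
assert round(phased_allotment(old, new, 5)) == 8_000_000
# A 2 percent hold harmless would keep year 1 from falling below 9,800,000.
assert round(hold_harmless(old, phased_allotment(old, new, 1))) == 9_800_000
```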
VR stakeholders we spoke with, including state agency officials, state advisory council officials, representatives from private sector VR companies, and disability researchers, identified three main challenges to incorporating incentive awards into the VR program. To some extent, these challenges are already present in the VR program’s current performance measurement system and could be accentuated by linking program performance to incentive awards. These challenges are: Challenge of balancing potentially competing program goals: Unless carefully designed, a financial performance incentive system could run the risk of encouraging state VR agencies to focus on achieving certain program goals, at the expense of others. For example, 89 percent of state VR officials who responded to our survey thought it likely that providing state agencies with additional funds based on performance would result in agencies focusing more heavily on clients who were expected to positively impact agency performance. In interviews, many VR stakeholders expressed particular concern that if incentive awards were focused on achieving employment outcomes for clients, they could induce state agencies to concentrate on serving those most likely to obtain employment, at the expense of those with greater barriers, such as those with the most significant disabilities. This would run counter to the VR program requirement that state agencies serve individuals with the most significant disabilities first when operating under an order of selection. Similar concerns have arisen in other employment programs that have used incentive awards. For example, we previously reported that local agency officials in the Department of Labor’s Workforce Investment Act (WIA) Title IB programs may be reluctant to provide services to job seekers less likely to find and maintain a job. 
Challenge of rewarding agencies for providing appropriate individualized services to clients: The VR program provides both long-term services, such as supporting youth with disabilities as they transition out of high school and pursue higher education, and short-term services, such as identifying job opportunities for people who already have skills and qualifications. Several VR stakeholders expressed concern that if incentive awards did not take into account some clients’ specialized needs for higher-cost or longer-term services, they could cause agencies to focus on providing short-term services. Officials from one state advisory council cautioned that incentive awards might encourage VR counselors to focus on providing short-term services, even if it resulted in low-paying jobs, instead of placing VR clients in higher education programs that could ultimately yield long-term, higher paying career positions. Challenge of basing incentive awards on an agency’s performance, without accounting for factors outside its control: A variety of factors outside an agency’s control may influence performance outcomes, such as the state’s economy and the characteristics and needs of the individuals who seek rehabilitation services. If an agency’s performance cannot be distinguished from these factors, the provision or withholding of incentive awards would not necessarily reflect agency actions. Of state VR agency officials who responded to our survey, 77 percent stated that isolating an agency’s performance from factors outside its control would be a great or very great challenge to appropriately distributing incentive awards. For example, officials in one VR agency said that some parts of their state have 20 percent unemployment, which decreases their ability to place clients in jobs. Another state agency official noted that his agency has had very high employment outcomes, in part because the state had one of the lowest unemployment rates in the country. 
He added that agencies in other states whose economies are weak have had poorer employment outcomes through no fault of their own. In addition, some state officials suggested that agencies operating under an order of selection would be at a competitive disadvantage compared to those that are not, because the caseload of the former would include a greater proportion of clients with the most significant disabilities and barriers to employment. Our research into the types of incentive awards used by other federal programs, as well as the views of VR stakeholders, revealed several ways to mitigate such challenges, but none are without potential pitfalls. Specifically, research on designing incentive award systems suggests the following options: Using multiple measures of success: Using a range of performance measures to determine incentive awards could help motivate state agencies to focus attention on all aspects of the VR program’s mission. For example, the performance of state VR agencies might be measured in terms of both the proportion of people with the most significant disabilities who find employment, and the proportion of all clients who achieve this outcome. Another option that could encourage agencies to provide long-term services, when appropriate, is to establish intermediate measures of client achievement, or of services provided, that increase a client’s prospects for employment. Such an intermediate measure could be the number of VR clients who successfully complete training programs or college degrees. Nevertheless, there are challenges to developing appropriate measures. For example, although Education already uses a measure for the VR program focused on people with significant disabilities, it may be difficult to develop a measure specifically on people with the most significant disabilities because the Rehabilitation Act allows states to individually define the term, and our past work found that state agencies’ definitions vary. 
Although Education uses multiple measures to evaluate state VR agencies’ performance, an issue to consider is whether or not the current measures would be appropriate to use for distributing incentive awards. Our prior work on the VR program found that the current measures do not consider agencies’ success in assisting individuals who have not yet exited the program and do not specifically track outcomes for youth transitioning out of high school. As a result, we recommended that Education reevaluate its performance measures to determine whether they reflect the agency’s goals and values. Adjusting performance standards to account for differences in local economies and program clients: The level of performance required of each state VR agency to receive an incentive award could be adjusted to account for the challenges they face. For instance, the performance standard required to receive an incentive award may be set lower for agencies in states with poor economies than for states with better economies. These adjustments could be made using a mathematical model, negotiations between federal and state agencies, or a combination of the two approaches. For example, the Job Training Partnership Act program (JTPA) used mathematical models to quantify the relative effect of participants and economies on agency outcomes. Some researchers of the JTPA program found that this approach was perceived to “level the playing field” for agencies and had lessened the perverse incentives to focus more heavily on the most promising clients. However, the research also suggests that it is difficult to identify and measure all the external factors that can undermine or lessen agency performance. If key factors are missing from mathematical models, the adjusted performance standards may lead to inaccurate estimates. Taking a different approach, WIA Title IB employment and training programs set performance standards by negotiating with state agencies. 
Although advocates of this approach say that it increases agency involvement and may better capture qualitative factors that affect agency performance, we and others have criticized the WIA negotiation approach as unsystematic and inconsistent across states. Specifically, we have suggested using an approach that combines negotiations with mathematical models. In a prior report, we recommended that Labor develop an adjustment model or other systematic method to account for different populations and local economies when negotiating performance levels. Labor officials generally agreed with our recommendation and told us recently that they are starting to provide states with adjustment models to inform the negotiation process. Finally, some researchers and public officials identified concerns about adjusting performance standards, regardless of the method. For example, some researchers have expressed concern that adjusting performance standards may be unfair to clients because it allows agencies to settle for less desirable outcomes for harder-to-serve populations. Beyond these risks, there are a number of considerations involved in deciding to incorporate incentive awards directly into the VR funding formula itself. First, it would be important to consider whether or not states should be required to match the additional funds allocated for performance, and if not, whether those funds could be treated as part of the state’s matching contribution. Another consideration is that rewarding high performance through the funding formula would, in effect, penalize other states insofar as it reduces the total funding available for distribution to all agencies based on other formula factors. Some state VR agency officials we spoke with suggested that incentive awards should not result in any decrease to base funding allocations. 
HUD’s Public Housing Capital Fund program, which provides incentive awards through its funding formula, includes provisions that minimize the impact of the awards on states’ funding. However, HUD officials we spoke with still expressed concern that this system penalizes housing authorities that may need additional funds if they are to improve their performance. Alternatively, incentive awards could be distributed independently of the VR funding formula to avoid these inherent penalties. Options for distributing funds outside the main VR funding formula include providing incentive awards as grants, as occurs in the WIA Title IB programs, or through a separate incentive award formula, as occurs in the Child Support Enforcement program. Of the state VR officials who responded to our survey, 51 percent supported providing incentive awards distributed independently of formula-determined funds, while only 22 percent supported providing the incentive awards through the formula. Finally, regardless of whether incentive awards are provided through the VR funding formula or independent of it, there are still other considerations involved in designing and carrying out an incentive award system. For instance, it is important that incentive awards are based on reliable data about agency performance. It is also important to consider the extent to which an incentive award system will allow for future modifications should there be changes in program priorities or available data, or if perverse and unanticipated results ensue. Our earlier work discusses these and other, related considerations. Although the measures currently used to allocate VR funds may have been the best available when the formula was last revised in 1978, better data are now available for factoring in both the potential need for and ability to support a program. 
Improved information offers policymakers the opportunity to update the formula to more closely align funding with the need for services, as well as with each state’s ability to contribute to the program. In deciding whether a revision to the formula is warranted, policymakers will likely want to consider how to strike a balance among all important factors—need, the cost of providing services, and the extent to which state resources are available. Certainly, revising the funding formula poses challenges because any formula change will result in funding decreases for some states, along with increases for others. However, there are ways to ease the transition to a new formula, so as to minimize disruption to VR programs and the people they serve. On the other hand, incorporating performance incentives into the formula might introduce more complexity and risk. While there are mechanisms to mitigate these challenges, the potential benefits for the VR program would need to be carefully weighed against the potential risks. We provided a draft of this report to Education for review and comment. Education provided technical comments and we modified the report, as appropriate, to address these comments. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about our report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Key contributors to this report are listed in appendix X. 
Our objectives were to: (1) assess the extent to which the funding formula meets generally accepted equity standards, (2) develop options for revising the formula to better meet these standards, and (3) identify issues to consider with incorporating performance incentives into the formula. We used two generally accepted formula design standards intended to achieve equity for beneficiaries and taxpayers. To meet both equity standards, a formula should use reliable and appropriate measures of the need population in each state and the cost of providing services in each state. A taxpayer equity formula additionally requires a reliable measure of a state’s ability to finance a program from its own resources. In the following sections, we describe how we measured the need population, cost of providing services, and financing capacity in each state, and how we analyzed the extent to which the current formula meets equity standards and developed various formula options. To address all three objectives, we also surveyed the 80 vocational rehabilitation (VR) agencies in the states, territories, and District of Columbia and conducted in-depth interviews with 11 VR agencies in 9 states. Finally, for the third objective, we reviewed literature on performance incentives and obtained the opinions of officials at 3 federal government programs that use incentive awards, which we describe in more detail in the last section. We conducted this performance audit from September 2008 to September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Data that directly measure the number of people in a state who potentially need VR services do not exist. Although state VR agencies have data on the size of their own caseloads, these data are not appropriate for use in a funding formula for two reasons. First, caseloads may be influenced by state funding levels. For example, an agency’s caseload may be relatively small because of limited funds, not because of limited demand for services. Second, data that can be controlled by state agency officials should not be used in a funding formula because they could introduce some “undesirable incentives” into the program. For instance, if a state’s allotment were determined by the size of its caseload, a state agency might be rewarded for taking inappropriate actions, such as enrolling individuals into the VR program who do not require VR services or, in the case of an agency under an order of selection, enrolling individuals who do not meet its criteria for receiving priority for services. There are, however, several national surveys that provide estimates of the number of people with disabilities by state. These surveys are conducted by statistical agencies such as the Census Bureau and Bureau of Labor Statistics (BLS). We reviewed several of these surveys: (1) the Census Bureau’s American Community Survey (ACS), (2) the Decennial Census, (3) the Current Population Survey’s Basic Monthly Survey, (4) the Current Population Survey’s Annual Social and Economic Supplement, and (5) the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System. We sought to identify data on the populations with all types of disabilities in each state. We ultimately selected the Census Bureau’s ACS to use as a measure of state populations potentially in need of VR services for several reasons. First, the ACS provides data on states’ disability populations on an annual basis. 
Second, the ACS has a large sample size (with about 3 million housing units surveyed across all 50 states, the District of Columbia, and Puerto Rico), which would allow for more accurate estimates of the need population in each state. Third, the ACS surveys people in more types of group quarters than any of the other surveys, such as those living in college dormitories, group homes, prisons, and nursing care facilities. This is significant, since about 10 percent of VR clients exiting the program in 2006 and 2007 lived in group quarters such as group homes or rehabilitation facilities when they applied to receive VR services, according to Education’s data. Fourth, the ACS asks six questions that are designed to capture a wide variety of disabilities (see table 2), and these questions are asked consistently across all states. One limitation of the ACS for purposes of allocating VR funds, however, is that the data are not available for U.S. territories, with the exception of Puerto Rico. We analyzed the data produced by only 5 of the 6 disability questions from the 2006 and 2007 ACS. We did not use data from the sixth question regarding the difficulty of working (question 6 in table 2) because the Census Bureau had removed this question in 2008 at the recommendation of an inter-agency task force. The 2008 ACS data are expected to be released in the fall of 2009 and were not available for use in this study. Since this question will not be included on future surveys, we sought to produce an analysis that would more closely reflect future available data. However, because changes were also made to each of the five other ACS questions for the 2008 survey, we cannot say whether our analysis of 2006 and 2007 data will be predictive of conditions in 2008 or thereafter. We measured each state’s disability population by counting the number of civilians of working age (16 to 64) who responded “yes” to any of the five disability questions. 
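The counting rule described above can be sketched as follows. The survey records are hypothetical, and the five boolean fields q1 through q5 stand in for the five retained ACS disability questions.

```python
# Illustrative sketch of the need-population count: a civilian of working
# age (16-64) counts once if he or she answers "yes" to any of the five
# retained disability questions. Records below are hypothetical.

def count_need_population(records):
    """records: list of dicts with 'age', 'civilian', and five boolean
    answers q1..q5 (the difficulty-working question is excluded)."""
    return sum(
        1 for r in records
        if r["civilian"] and 16 <= r["age"] <= 64
        and any(r[q] for q in ("q1", "q2", "q3", "q4", "q5"))
    )

records = [
    {"age": 30, "civilian": True,  "q1": True,  "q2": False, "q3": False, "q4": False, "q5": False},
    {"age": 70, "civilian": True,  "q1": True,  "q2": False, "q3": False, "q4": False, "q5": False},  # too old
    {"age": 40, "civilian": False, "q1": True,  "q2": False, "q3": False, "q4": False, "q5": False},  # not civilian
    {"age": 25, "civilian": True,  "q1": False, "q2": False, "q3": False, "q4": False, "q5": False},  # no disability
    {"age": 50, "civilian": True,  "q1": False, "q2": True,  "q3": True,  "q4": False, "q5": False},  # counted once
]
assert count_need_population(records) == 2
```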
By excluding the difficulty working question, our measure excluded 12.5 percent of the population who responded “yes” to the difficulty working question. The remaining 87.5 percent of those who responded “yes” to this question were people who also responded “yes” to one or more of the five other disability questions. As a result, they were included in our measure. Table 3 provides a breakdown of the total U.S. population into the different components of our measure of the need population for the VR program. As shown in the table, our measure—the civilian working-aged population with a disability—comprises 7.6 percent of the total U.S. population. We assessed the reliability and validity of ACS data by interviewing Census Bureau officials and disability experts, reviewing documentation and literature, and conducting comparisons with other disability data. Specifically, we compared the ACS data with data from the Social Security Administration (SSA) on recipients of two types of disability benefits, Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) benefits. For individuals to receive SSDI or SSI benefits, SSA or a state agency must first determine that they have a disability that prevents them from working. We compared the proportion of state populations with disabilities, according to ACS data, with the proportion of their populations receiving SSA disability benefits and found a high correlation—0.872 for SSDI and 0.788 for SSI. This indicates that ACS data and the SSA data showed similar trends; states with higher rates of disability according to ACS data also tended to have higher proportions of their population receiving SSA disability benefits. We had our work on identifying a measure of need population reviewed by three disability experts, and they concurred that ACS data provide a reasonable measure of the size of a state’s population potentially needing VR services. 
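As an illustration of the cross-check described above, the sketch below computes a Pearson correlation between two sets of state-level rates. The ten rates are hypothetical; the report’s actual correlations of ACS disability rates with SSA benefit rates were 0.872 for SSDI and 0.788 for SSI.

```python
# Illustrative sketch of correlating survey disability rates with benefit
# receipt rates; the state-level figures below are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

acs_rate  = [0.08, 0.10, 0.12, 0.09, 0.15, 0.07, 0.11, 0.13, 0.10, 0.14]
ssdi_rate = [0.03, 0.04, 0.05, 0.03, 0.06, 0.02, 0.04, 0.05, 0.04, 0.06]
r = pearson_r(acs_rate, ssdi_rate)
# A correlation near 1 indicates the two sources show similar state trends.
assert r > 0.9
```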
It is difficult to estimate differences among states in the costs of providing VR services. Although one approach for estimating cost differences is to estimate the costs for the same basket of goods in different states, attempting to do this for VR services would be extremely costly and labor-intensive because of the wide array of services that VR agencies provide, including assessment, counseling, higher education, occupational training, medical diagnosis and treatment, and transportation. Another challenge is finding a reliable data source on cost. State agencies’ data on expenditures, in part, reflect cost of services, but using these data in a formula runs the risk of allowing undesirable incentives to be introduced into the program. For example, a state agency that efficiently manages its program will be able to provide the same quality of services at a lower cost than an agency with less efficient management. In this case, if the formula provided higher allocations to states with higher reported costs, it could reward agencies that are more inefficient. In addition, expenditures data are of limited use in measuring state cost differences because they reflect many other factors besides costs. For example, the amount of funds that agencies spend per client reflects the level of funds they receive from state and federal sources, as well as the types of clients they serve and the types of services they provide. For example, some agencies may choose to provide more intensive, higher-cost services to a smaller number of clients, while other agencies may choose to serve a larger number of clients with lower-cost services. To address these challenges, we developed and used measures for the costs of resources—or inputs—that go into providing services, which are beyond the direct control of state agencies. Specifically, we focused on two basic inputs—labor and office space—that are needed to provide the different types of VR services. 
Where wages or rents are higher because the general cost of living is high, state agencies must pay more for workers or office space to provide services. In the following subsections, we describe in detail how we developed a cost index to reflect differences in costs for these two types of inputs, labor and office space. There are many other resources used to provide VR services, such as equipment and supplies, but data are not readily available on them. Obtaining such data would be time-intensive and costly, requiring detailed surveys of the specific services that each VR agency provides and the particular resources that go into each service. As a result, our cost index may not capture differences in the cost of some key inputs. For example, due to lack of data, our index would not capture transportation costs, which could be higher in states that have geographically dispersed populations. While our index has some limitations, we believe it is a reasonable proxy that can reflect general differences across states in the cost of providing VR services. Because any cost index will only be an approximation of true cost differences, the index we developed is based on what we believe are reasonable assumptions that avoid overstating cost differences among states. We believe our measure allows us to at least partially recognize real cost differences among the states, while avoiding inappropriate incentives. A similar cost of services index is used in the funding formulas for the Community Mental Health Services and the Substance Abuse Prevention and Treatment block grants.
In the following subsections, we describe: (1) our work to identify a data source for estimating wages in each state, (2) the data source we used to estimate rental costs in each state, (3) our methodology for estimating how much to weight wages and rents in the cost index, and (4) how we combined our weights with the data on wages and rents to develop a cost of services index for each state. Since the purpose of our cost index is to help distribute federal VR funds among states, we did not examine cost differences between agencies for the blind and general VR agencies. In states with two VR agencies, the state determines how to divide the federal grant allocation among the agencies. A measure of state labor costs should reflect the wages of all types of workers potentially involved in the VR program, including those directly employed by state VR agencies, as well as those employed by public or private-sector organizations that VR agencies have contracted with to provide VR services. To obtain an understanding of the types of workers who may be involved in state VR programs, we first reviewed data from Education’s Annual Vocational Rehabilitation Program/Cost Report (RSA-2), which provides information on state agencies’ expenditures on various types of services, both those provided by state agency employees and those provided through contracts or purchases from other organizations.
The data indicate the proportion of expenditures state agencies spent on the following types of services in fiscal year 2007:

- 7 percent on postsecondary education
- 14 percent on occupational and vocational, job readiness, and all other
- 13 percent on assessment, counseling, guidance, and placement
- 7 percent on diagnosis and treatment of physical and mental impairments
- 4 percent on rehabilitation technology
- 1 to 2 percent each on other types of services, such as income support, transportation, and personal assistance services

Once we obtained information on the types of services that the VR program provides, we reviewed sources of data from BLS on average annual wages in each state for various industries and occupations, as well as wage data from the Centers for Medicare and Medicaid Services (CMS) that are used to adjust payments to healthcare providers. Specifically, we examined BLS data from the Quarterly Census of Employment and Wages (QCEW) and Occupational Employment Statistics (OES). The QCEW data, which come from employer filings for unemployment insurance, cover nearly all civilian employment. The QCEW classifies wages and employment levels by industry, using the North American Industry Classification System (NAICS). We examined private sector wages, but also included public sector wages for state government employees. In addition, we also reviewed data on specific occupations related to the VR industry from the OES. The OES classifies occupations according to the Standard Occupational Classification (SOC) system. Finally, we examined wage indices used to allocate funds in the Medicare program for skilled nursing and inpatient rehabilitation facilities.
The specific data series we reviewed are listed below:

BLS, Quarterly Census of Employment and Wages
- Vocational rehabilitation services industry, private sector (NAICS 6243)
- Social assistance industries, private sector (NAICS 624)
- Healthcare and social assistance industries, private sector (NAICS 62)
- Education, healthcare, and social assistance industries, private sector (QCEW industry code 1025)
- Service-providing industries, private sector (QCEW industry code 102)
- State government sector (QCEW industry code 10, state government)

BLS, Occupational Employment Statistics
- Rehabilitation counselors (SOC 21-1015)
- Substance abuse and behavioral disorder counselors (SOC 21-1011)
- Educational, vocational, and school counselors (SOC 21-1012)
- Community and social services occupations (SOC 21)

Wage indices for Medicare’s Prospective Payment System

We determined that the data for the most narrowly defined industries and occupations—the QCEW data for the vocational rehabilitation services industry, and the OES data on rehabilitation counselors—were less suitable for use in a cost index after examining the data and speaking with officials from BLS. The QCEW data on the vocational rehabilitation industry showed some peculiar values. For example, the wages in Vermont were the highest in the nation in the vocational rehabilitation services industry, but they were not among the highest in other data series we examined. We contacted BLS officials to better understand these data. They informed us that Vermont’s data for the vocational rehabilitation services industry come from a small number of employers and, as a result, could be affected by two factors: possible differences in the types of work performed and in the number of hours worked per week. The average annual wage in the QCEW includes both full-time and part-time employees.
If the proportion of VR employees working part-time varies substantially across states, this could cause state annual wages to vary. At a broader industry level, this is less likely to be a problem because the data cover more employers and employees. With regard to the OES data on rehabilitation counselors, the data were incomplete; there were no published wages for Alaska and Utah. The remaining data series could serve as reasonable proxies for state wages in the VR program, but we selected the QCEW data for the education, healthcare, and social assistance industry sector as our proxy because it covers the wide array of services that the VR program provides. These include training, healthcare-related services, and social services. In addition, this industry had the smallest range of wage differences across states—the state with the lowest wage in this industry was 17 percent below the average, and the state with the highest wage was 25 percent above average. As a result, compared to the other industries, the education, healthcare, and social assistance industry would produce the most conservative results. Table 4 shows the median, minimum, and maximum values for each of the data series we examined for which data were available for all 50 states and the District of Columbia. To compare wages across states and across data series, we used wage indices. A value of 1 is equal to the national average. Values greater than 1 are above the national average, and values less than 1 are below the national average. Table 5 presents correlations of the various wage indices. It shows that the various wage data we reviewed are correlated with each other, which suggests that the different data series would generally produce similar results in funding allocations. The vocational rehabilitation services industry (NAICS 6243), in the first row, has the lowest level of correlation with the other indices.
Its highest correlation coefficient is 0.63 with the social assistance industry (NAICS 624), while no two other wage indices have a correlation coefficient less than 0.65. The distribution of values for the education, healthcare, and social assistance wage index is shown in figure 5. Values for more than half of the states lie between 0.9 and 1.1, which suggests that most states have similar wages in this industry sector. State-by-state data on the cost of office space are not available. As a result, we used residential rental rates as a proxy. The Department of Housing and Urban Development (HUD) annually collects the rental cost of housing for 530 metropolitan areas and 2,045 non-metropolitan counties across the nation. These Fair Market Rents (FMR) data are used by several programs to set housing subsidies. The FMR data are also used in Medicare’s physician fee schedule as a measure of office rents. Since the FMR provides data on a local level, we aggregated the data to statewide averages and used an index to compare rental costs across states. The distribution of rents is shown in figure 6. As in figure 5, a value of 1 is equal to the national average. Values greater than 1 are above the national average, and values less than 1 are below the national average. Unlike the wage index, the rental cost index shows that over half of the states have a rental index of 0.85 or less, while 11 states have a rental index of 1.15 or higher. This suggests that rental costs vary substantially among states. To determine how much to weight the wage and rental costs, we first surveyed state VR agencies to learn what proportion of their fiscal year 2007 and 2008 expenditures was spent on wages and rents. Because many agencies contract out or purchase services, our survey asked them first to report how much of their expenditures were for contractors or purchased services, and how much was spent in-house.
Then we asked them to report how much of their in-house expenditures went to wages and rents. We used these responses to compute average proportions, which we then applied to categories of expenditures using RSA-2 data. In doing so, we assumed that the proportions of expenditures on wages and rents that state agencies reported for in-house expenditures were the same as the proportions for contracts and purchased services. With regard to the RSA-2 data, we examined state VR agencies’ spending in three broad categories: administration, individual services, and group services. Table 6 shows the results of our analyses of the survey responses and the RSA-2 data, as well as how we combined these results to develop the weights for wages, rents, and other inputs in the cost index. In table 6, the first column shows the expenditure categories from the RSA-2 data. The second column shows the average proportion of expenditures that went to administration, individual, and group services, according to the RSA-2 data. The third through fifth columns show our survey results on the average proportion of expenditures that went to labor costs, rents, and other inputs. We assumed that for administration and individual services, expenditures went to labor, rents, and other inputs in the precise proportions that the survey results suggested. For example, we assumed that 69 percent of expenditures for administration and individual services were spent on wages, 5.6 percent on rents, and the remainder on “other” inputs, such as materials and supplies. However, group services can include activities such as construction of a community rehabilitation program, and it is not clear what these services comprise. As a result, their input costs were assigned entirely to the “other” category. Finally, the last three columns multiply the prior columns, and the sum of each column yields a preliminary weight. Once we obtained the results shown in table 6, we rounded the weights to the nearest 5 percent.
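The weighting computation described above can be sketched as follows. The input shares for administration and individual services (69 percent wages, 5.6 percent rents) come from the survey results quoted above, but the RSA-2 category shares used here are hypothetical stand-ins for the actual figures in table 6.

```python
# Hypothetical RSA-2 category shares (the actual figures appear in table 6).
share_admin_and_individual = 0.97
share_group = 0.03

# Survey-based input shares for administration and individual services.
wage_share, rent_share = 0.69, 0.056
other_share = 1.0 - wage_share - rent_share

# Group services are assigned entirely to "other" inputs.
w_wages = share_admin_and_individual * wage_share
w_rents = share_admin_and_individual * rent_share
w_other = share_admin_and_individual * other_share + share_group

def round_to_nearest_5pct(x):
    """Round a weight to the nearest 5 percent, as described in the text."""
    return round(x / 0.05) * 0.05

weights = tuple(round(round_to_nearest_5pct(w), 2) for w in (w_wages, w_rents, w_other))
```

Under these hypothetical category shares, the rounded weights come out to 0.65 for wages, 0.05 for rents, and 0.30 for other inputs, matching the weights reported in the text.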
Ultimately, we estimated the weights to be 0.65 for wages, 0.05 for rents, and 0.30 for all other inputs. We then constructed the index using the following formula:

Cost index = (0.65 × Wages) + (0.05 × Rent) + (0.30 × Other)

With this formula, we calculated a cost index for each state, using each state’s average wage from 2005 to 2007 in the education, healthcare, and social assistance sector, according to QCEW data, and its average rental cost from fiscal years 2007 to 2009, according to the FMR data. The cost for “other” inputs besides wages and rents was assigned a weight of 0.30 and assumed to be constant for all states. We made this assumption to simplify the construction of the cost index because we were unable to readily or reliably capture differences in the costs of these inputs among states. As noted earlier, estimating these costs would be difficult because identifying the various materials and supplies used to provide the wide variety of VR services would be costly and labor-intensive. In addition, it is unlikely that there would be nationally available data on the costs of any materials or supplies we could identify. The assumption that the cost of “other” inputs is the same across states may be reasonable because some materials and supplies are likely to be purchased by state VR agencies from a national market and, therefore, the geographical variation in these costs would be limited. Figure 7 shows the distribution of the cost index. If a state’s cost index is 1, its costs are estimated to be the same as the national average. If a state’s index is greater than 1, its costs are estimated to be above the national average. Finally, if a state’s index is less than 1, its costs are estimated to be below the national average. Values for 36 of the states lie between 0.9 and 1.1. See appendix IV for a listing of the cost index for each state. We had our work on the cost of services index reviewed by three external experts in the disability field.
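A minimal sketch of the index calculation, using the weights and formula above; the wage and rent index values for the illustrative states are hypothetical.

```python
def cost_index(wage_index, rent_index, other_index=1.0):
    """Weighted cost-of-services index; 1.0 equals the national average.
    "Other" inputs are assumed constant across states (index of 1.0)."""
    return 0.65 * wage_index + 0.05 * rent_index + 0.30 * other_index

# Hypothetical states: one high-cost, one at the national average.
high_cost = cost_index(wage_index=1.20, rent_index=1.15)  # 1.1375
average = cost_index(wage_index=1.00, rent_index=1.00)    # 1.0
```

Because wages carry 13 times the weight of rents, a state's wage index dominates its overall cost index under this weighting.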
They generally concurred with our methodology for developing the cost index using existing data. However, each noted that our index does not capture certain key inputs or costs underlying the provision of VR services, such as higher education, transportation, contracted services, and tax rates. We generally agree that it would be preferable to reflect all key cost differences that affect the provision of VR services; however, doing so requires reliable data from VR agencies (to determine average costs and develop weights for each input) and from independent sources (to estimate cost differences for those inputs among states). For each suggested input, one or both of these types of data were not readily available. For example, with respect to transportation costs, data were not readily available from VR agencies that would allow us to develop average costs and weights, and we are not aware of independent data on transportation cost differences among states. Similarly, data were not readily available from VR agencies to identify the specific services that agencies contracted for and to determine the inputs that are used to provide those services. Finally, one of the experts noted that high tax rates in certain states may result in higher wages and rents, and suggested we incorporate states’ tax rates into our cost index. We agree that tax rates may influence wages and rents; however, research suggests that many factors affect state differences in wages and rents, and determining the relative significance of state tax rates would be a highly complex analytical undertaking. In summary, although data were not readily available to reliably reflect cost differences for additional inputs suggested by our experts, these experts generally agreed that we appropriately accounted for cost differences in two basic inputs, i.e., wages and rents.
A taxpayer equity standard stipulates that funds are distributed so that states can provide individuals comparable services using both state and federal funds, while each state contributes about the same proportion of its resources to a given federal program. This equity standard requires a formula to include an indicator of each state’s ability to finance a given program from its own sources. In a funding formula, a good indicator of a state’s financing ability would measure all types of taxable resources and would not be affected by an individual state’s actual fiscal decisions. We used Total Taxable Resources (TTR), as reported by Treasury, to measure state resources. The Treasury, as required by federal law, provides annual estimates of TTR in order to estimate states’ financing ability. The estimates are used in formulas to allocate federal funds among states for the Community Mental Health Services and the Substance Abuse Prevention and Treatment block grants. TTR is a more comprehensive measure of financing ability than per capita income because it includes personal income received by state residents, types of corporate income and capital gains that per capita income excludes, as well as income produced within a state that is received by individuals who reside out-of-state. For our first objective, we reported state-level information on the proportion of the population of working age with a disability, allotments per person with a disability (adjusting for costs), and the ratio of per capita income to total taxable resources. In addition, we analyzed the extent to which state per capita income is correlated with state disability rates in order to determine whether the current use of per capita income in the funding formula adequately accounts for a state’s need population. Finally, we examined whether agencies in states with below-median levels of funding per person with a disability were more often on an order of selection, compared to states above the median.
For each of the above analyses, we examined only the 50 states and the District of Columbia. We did not analyze the U.S. territories because complete data were not available on the territories from the various data series we used. This section describes each of the analyses we conducted for our first objective. Proportion of states’ population that is civilian, of working age, and with a disability: To determine the proportion of each state’s general population that is civilian, of working age, and has a disability, we analyzed 2007 ACS data to obtain the number of civilians in each state from age 16 to 64 who responded “yes” to at least one of five disability questions. As discussed above, we did not include individuals who responded “yes” to the question on difficulty working because this question was eliminated from the ACS, starting in 2008. We then divided the working-aged disability population numbers by the total population (all ages) for each state, which we also obtained from the 2007 ACS. These proportions are presented in appendix III. Correlation between states’ disability populations and their per capita income: To test whether the formula’s use of per capita income is a reasonable proxy for states’ disability rates, we analyzed the correlation between their disability rates and their “allotment percentage,” which is the part of the formula that includes per capita income. As discussed above, we determined a state’s disability rate as the proportion of a state’s total population that is civilian, of working age, and has a disability. For the per capita income factor, we calculated each state’s “allotment percentage,” which, according to the formula, is one minus one-half of a state’s per capita income level divided by the national per capita income level (see appendix II for a further explanation of the allotment percentage). We then obtained the correlation coefficient between states’ disability rates and their allotment percentages.
Allotments per working-aged person with a disability (cost-adjusted): To determine for each state the allotment per working-aged person with a disability, as adjusted for the costs of providing services, we used Education data on the VR grant allotments that states received in fiscal year 2008, ACS data on state disability populations in 2007 (using the five disability questions, as described earlier), and the cost index, as described earlier. Specifically, we used the allotments that Education initially calculated for each state for fiscal year 2008. To calculate the allotments per working-aged person with a disability while adjusting for costs, we divided each state’s grant allotment by the product of the cost index and the state’s 2007 working-aged disability population. See appendix V for a state-by-state listing of allotments per working-aged person with a disability, as adjusted for costs of services. We tested the reliability of Education’s data on VR grant allotments by replicating Education’s formula calculations and interviewing Education officials knowledgeable about the data. Our replications of the formula calculations produced results that were virtually identical to Education’s. As a result, we determined that the data are sufficiently reliable for our purposes. Order of selection status: We examined whether agencies in those states with below-median allotments per working-aged person with a disability (adjusting for costs) more often reported being under an order of selection than those states whose allotments were above the median. To obtain information on states’ order of selection status, we used Education’s RSA-113 data, which are quarterly data that states submit on their caseloads. For states with two VR agencies, we considered a state to be under an order of selection if either of its agencies reported being under an order of selection. Table 7 shows the number of states that we considered under an order of selection, by type of VR agency.
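The cost-adjusted per-person calculation described above reduces to a single division; the sketch below uses hypothetical figures rather than actual fiscal year 2008 allotments or ACS counts.

```python
def allotment_per_person(grant_allotment, disability_population, cost_index):
    """Federal VR allotment per working-aged person with a disability,
    adjusted for the estimated cost of providing services in the state."""
    return grant_allotment / (cost_index * disability_population)

# Hypothetical state: $100 million allotment, 500,000 working-aged people
# with a disability, and costs 25 percent above the national average.
per_person = allotment_per_person(100_000_000, 500_000, 1.25)  # 160.0
```

Dividing by the cost index means two states with the same nominal allotment per person are credited differently: the higher-cost state can purchase fewer services with the same dollars.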
We assessed the reliability of RSA-113 data by interviewing Education officials knowledgeable about the data and conducting edit checks. Education officials informed us that when a state agency reports being under an order of selection, the Department verifies that the state agency has documented in its state plan its intention to provide services on an “order of selection” basis. However, Education officials also informed us that the RSA-113 data on a state’s order of selection status do not necessarily indicate whether state agencies are currently operating on this basis by actively limiting services to individuals. For example, they noted, and we subsequently confirmed through our own review of the data, that some states reported being under an order of selection, but reported having no individuals on a waiting list. As a result, we determined that the RSA-113 data on order of selection were sufficiently reliable to provide information on the number of states reporting they were on an order of selection, but we cannot say whether these states were actually operating under their order. Our analysis also did not allow us to conclude whether there is any causal link between states’ funding levels and their order of selection status. Comparison of per capita income and total taxable resources: We analyzed how per capita income compares with TTR in each state. To do this, we obtained data on per capita income from the Department of Commerce and TTR data from Treasury from 2004 to 2006, the latest years for which data from both sources were available. We took 3-year averages (2004, 2005, and 2006) of the per capita income and total taxable resources levels for each state in order to limit the effects of any year-to-year fluctuations. We then calculated the total taxable resources per capita for each state by dividing the average total taxable resources amount by the state’s average population from 2004 to 2006.
Next, we created indices by dividing each state’s per capita income and total taxable resources per capita by the U.S. averages of per capita income and total taxable resources per capita, respectively. To compare states’ per capita income with their total taxable resources, we divided each state’s per capita income index by its index of total taxable resources per capita to obtain ratios. See appendix VI for a state-by-state listing of indices of per capita income and total taxable resources per capita, as well as their ratios. For our second objective, we developed three formula options based upon equity standards commonly used to design and evaluate funding formulas. See appendix VII for detailed descriptions of the formula options. We also estimated the grant allotments that each state would receive under each formula option, using data on states’ disability populations from the ACS, our cost index, and TTR data from Treasury. See appendix VIII for a table of our estimates of the grant allotments. Specifically, for states’ need populations, we used the average of their 2006 and 2007 populations of people of working age with a disability in order to limit the effects of any year-to-year fluctuations. As described above, we used the five disability questions from the ACS. As a measure of cost of services, we used the cost index that we developed, also described above. As a measure of state resources, we used the average of states’ total taxable resources from 2004 to 2006. We conducted a Web-based survey of VR agencies in states, territories, and the District of Columbia to gather information on agency officials’ opinions regarding the current formula, potential modifications to the formula, and the incorporation of performance incentives into the formula. In addition, we used the survey to obtain data on agency expenditures that we needed to develop our cost index.
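The comparison of per capita income with total taxable resources, described at the start of this section, reduces to a ratio of two indices. The sketch below illustrates the calculation; the dollar figures are hypothetical.

```python
def index_ratio(state_pci, national_pci, state_ttr_pc, national_ttr_pc):
    """Ratio of a state's per capita income index to its TTR-per-capita index.
    A ratio below 1 means per capita income understates the state's
    taxable resources; a ratio above 1 means it overstates them."""
    pci_index = state_pci / national_pci
    ttr_index = state_ttr_pc / national_ttr_pc
    return pci_index / ttr_index

# Hypothetical state: income at 80 percent of the national level,
# but TTR per capita equal to the national level.
ratio = index_ratio(32_000, 40_000, 50_000, 50_000)  # 0.8
```

In this hypothetical case, per capita income understates the state's taxable resources by 20 percent, analogous to the Alaska example discussed in appendix VI.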
The Web-based survey was conducted using a self-administered electronic questionnaire posted on the Web. This Web-based survey was compatible with computer software that makes Web sites accessible to people with visual impairments. We received completed surveys from 74 of 80 VR agencies, for a response rate of 93 percent. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize nonsampling errors. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed can all introduce unwanted variability into survey results. To minimize such nonsampling errors, a social science survey specialist designed the initial questionnaire, in collaboration with GAO staff who had subject matter expertise. The draft questionnaire was pretested with officials from 5 state VR agencies to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, an independent analyst checked all answers using a statistical program. Since the survey was a Web-based survey, respondents entered their answers directly into the electronic questionnaire, thereby eliminating the need to have the data keyed into a database and avoiding data entry errors. See appendix IX for selected responses to our survey. To learn more about state agency officials’ opinions on all three of our research objectives, we spoke with VR agency officials from 3 states when we designed our methodology and we followed our survey work by interviewing officials from 8 agencies in 6 states. 
In selecting these 8 agencies, we identified states with a diversity of characteristics in terms of their: (1) disability population rates; (2) per capita income levels; (3) geographic dispersion; and (4) order of selection status. In addition, we sought to interview both state agencies that serve individuals with a wide variety of disabilities and agencies that primarily serve blind individuals. In 5 states, we also spoke with representatives from the state rehabilitation councils, which are advisory councils for state VR agencies. We also spoke with officials from: (1) Education’s Rehabilitation Services Administration to obtain relevant programmatic data and perspectives on the VR program; (2) SSA regarding data on the population receiving Social Security disability benefits; (3) the Census Bureau regarding ACS disability data; (4) BLS regarding data on wages; and (5) the Department of Transportation regarding the use of Census disability data in a formula to distribute funds for the Department’s New Freedom program. In addition, we conducted about a dozen interviews with researchers having expertise in disability data and the VR program and with advocacy groups. We also attended the fall 2008 conference of the Council of State Administrators of Vocational Rehabilitation (CSAVR) to learn more about matters of interest to VR stakeholders and to obtain the views of the council’s executive committee members. Also, we spoke with representatives from four private sector companies that provide vocational rehabilitation services, in order to learn about their experiences with performance incentives in the private sector. We chose these four companies based on the recommendations of a trade organization official familiar with the fields of disability insurance and private vocational rehabilitation.
To identify issues to consider when incorporating performance incentives into the VR formula, we reviewed literature produced by academic experts, think tanks, and government agencies such as the Congressional Research Service. We also reviewed prior GAO studies dealing with performance accountability in government programs. We identified relevant literature by reviewing research databases, such as EconLit and the Education Resources Information Center (ERIC). We were also referred to literature through citations in other literature and by the recommendations of GAO staff and the external experts we interviewed. In conducting our search and review, we endeavored to collect a diverse body of literature that offered different views about the use of incentive awards. Aside from conducting a general review of literature on performance incentives, we also identified and reviewed literature specific to four federal programs in order to understand their experiences and identify issues related to providing incentive awards. These programs are the Workforce Investment Act (WIA), Child Support Enforcement (CSE), Public Housing Capital Fund, and the Job Training Partnership Act (JTPA) programs. The latter was discontinued and replaced by WIA Title IB programs, authorized in 1998. We also obtained the views of federal officials in the Departments of Labor, Health and Human Services, and HUD, which are responsible, respectively, for the WIA, CSE, and Capital Fund programs. The current VR funding formula allocates federal funds to states annually, based on three factors: (1) the amount of federal funds they received for their VR program for fiscal year 1978, (2) their population size, and (3) their per capita income level, as compared with the national per capita income. States’ fiscal year 1978 allotments became part of the formula when it was revised through a 1978 amendment to the Rehabilitation Act. 
This provision ensured that no state experienced a funding decrease with the formula change. As currently constructed, the formula first provides states with the amount of federal funds that they were allotted for their VR program in fiscal year 1978. Funds above that level are then distributed among states on the basis of population, weighted by each state’s allotment percentage. The allotment percentage is designed to be higher for poorer states. For example, a state that has a per capita income level equal to the national level will have an allotment percentage of 0.50. If a state’s per capita income is lower than the national level, its allotment percentage will be above 0.50. If a state’s per capita income is higher than the national level, its allotment percentage will be lower than 0.50. However, to mitigate the influence of per capita income for states with very high or very low per capita income levels, the Rehabilitation Act sets both a floor and a ceiling on the allotment percentage—it cannot be less than 33 1/3 percent or greater than 75 percent. Further, the allotment percentage is set at 75 percent for U.S. territories and the District of Columbia. Federal law requires the Department of Education (Education) to calculate the allotment percentages in even-numbered years, using the average of the three most recent years of available data on per capita income. Education obtains the data from the Department of Commerce’s Bureau of Economic Analysis. The formula also guarantees each state a minimum allotment. If a state’s allotment is calculated to fall below this minimum, its allotment is increased to that level, and the allotments of other states are decreased proportionately. Table 8 below shows the proportion of each state’s total population that is civilian, of working age (16 to 64), and with a disability. See appendix I for further information on our data source and methodology for determining these proportions. Table 9 provides a cost index for each state, designed to estimate the differences among states in the cost of providing VR services.
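The allotment percentage calculation described above, including the statutory floor and ceiling, can be sketched as follows; the per capita income figures are hypothetical.

```python
def allotment_percentage(state_pci, national_pci):
    """Allotment percentage: 1 - 0.5 * (state PCI / national PCI),
    bounded by the statutory floor (33 1/3 percent) and ceiling (75 percent)."""
    raw = 1.0 - 0.5 * (state_pci / national_pci)
    return min(max(raw, 1.0 / 3.0), 0.75)

# Hypothetical per capita income levels against a $40,000 national level.
at_average = allotment_percentage(40_000, 40_000)  # exactly 0.50
very_rich = allotment_percentage(80_000, 40_000)   # hits the 33 1/3 percent floor
very_poor = allotment_percentage(10_000, 40_000)   # capped at the 75 percent ceiling
```

The floor and ceiling mean that, beyond a point, further differences in per capita income stop affecting a state's allotment percentage.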
The index is a weighted average of the costs of two primary resources needed to provide VR services, labor and office space. See appendix I for further information on the development of our cost index. The average cost nationally is represented by an index of 1.00. A state with an index above 1.00 is estimated to have costs greater than average, while a state with an index below 1.00 is estimated to have costs less than average. For example, with a cost index of 0.95, Alabama is estimated to have costs 5 percent below the national average. Table 10 shows the amount of federal funds each state received for fiscal year 2008, and the amount of services each state would be able to purchase per working-aged person with a disability with those funds. The per person allotments were adjusted to take into account differences among states in the cost of wages and rents, using the cost index shown in appendix IV. See appendix I for more information about our data sources and methodologies. Table 11 illustrates the difference between each state’s financial resources as measured by per capita income and TTR. The second column shows, for each state, its per capita income indexed to national per capita income, and the third column shows each state’s TTR per capita indexed to national TTR per capita. These indices are based on averages of three years of data from 2004 to 2006. They were created by dividing each state’s three-year average by the national average. States with income or TTR per capita levels that are higher than the national averages have indices that are greater than 1, and states with levels that are below the national averages have indices that are less than 1. For example, the TTR per capita index for Alabama is 0.79, meaning that the state’s TTR per capita is 21 percent below the national average. The final column in the table shows the ratio of each state’s per capita income index to its TTR per capita index. 
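The index construction just described—three-year averages divided by the national average, then the ratio of the income index to the TTR index—can be sketched in a few lines. The figures below are hypothetical placeholders, not data from our tables; only the method comes from this report.

```python
# Hedged sketch of the index computations described for table 11.
# The state and national figures are illustrative placeholders.

def three_year_index(state_values, national_values):
    """Average the 2004-2006 values, then divide the state average by the national average."""
    state_avg = sum(state_values) / len(state_values)
    national_avg = sum(national_values) / len(national_values)
    return state_avg / national_avg

# Hypothetical per capita income (PCI) and Total Taxable Resources (TTR)
# per capita figures for one state vs. the nation, 2004-2006.
state_pci, national_pci = [30_000, 31_000, 32_000], [34_000, 35_000, 36_000]
state_ttr, national_ttr = [40_000, 41_000, 42_000], [52_000, 53_000, 54_000]

pci_index = three_year_index(state_pci, national_pci)
ttr_index = three_year_index(state_ttr, national_ttr)

# A ratio below 1 means the formula's use of per capita income
# understates the state's potentially taxable resources.
ratio = pci_index / ttr_index
print(round(pci_index, 2), round(ttr_index, 2), round(ratio, 2))
```

In this hypothetical case the ratio exceeds 1, so per capita income would overstate the state's taxable resources relative to TTR.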
States in which the formula's use of per capita income understates its potentially taxable resources have ratios that are less than 1. States in which the use of per capita income overstates its potentially taxable resources have ratios above 1. For example, the ratio is 0.80 for Alaska, meaning that the formula's use of per capita income understates Alaska's taxable resources by 20 percent. See appendix I for further information regarding our analysis of per capita income and TTR. This appendix provides detailed information on three formula options or prototypes for revising the VR funding formula: (1) a partial beneficiary equity formula that distributes funds based only on the size of a state's population potentially needing services, (2) a full beneficiary equity formula with the addition of a cost of services factor, and (3) a taxpayer equity formula with the addition of a measure of state resources.

Allotment = Total federal funds x (Need population / ∑ Need population)

where ∑ Need population = sum of the need population across states, or the total need population nationally. This formula would allocate funds based on each state's share of the total need population nationally. It would only partially achieve beneficiary equity because it does not account for differences among states in the cost of providing services.

Allotment = Total federal funds x (Cost adjusted need population / ∑ Cost adjusted need population)

where Cost adjusted need population = Need population x Cost index, and ∑ Cost adjusted need population = sum of the cost adjusted need population across states. This formula would achieve full beneficiary equity because it accounts for both states' need populations and costs of providing services. The cost index in the formula estimates each state's cost of providing services.
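The two beneficiary equity options described above can be illustrated with a short computation. This is a hedged sketch: the state names, need populations, and cost indices below are hypothetical, the $2.8 billion total is the approximate fiscal year 2008 appropriation cited in this report, and the minimum-allotment and other adjustments are omitted.

```python
# A minimal sketch of the two beneficiary equity options.
# State names and figures are illustrative placeholders, not report data.

def allocate(total_funds, need_population, cost_index=None):
    """Distribute funds by each state's share of (cost-adjusted) need population."""
    if cost_index is None:
        # Option 1 (partial beneficiary equity): no cost-of-services factor.
        cost_index = {state: 1.0 for state in need_population}
    weights = {s: need_population[s] * cost_index[s] for s in need_population}
    total_weight = sum(weights.values())
    return {s: total_funds * w / total_weight for s, w in weights.items()}

need = {"State A": 100_000, "State B": 300_000}
costs = {"State A": 0.95, "State B": 1.10}  # hypothetical cost indices

partial = allocate(2_800_000_000, need)         # option 1: need population only
full = allocate(2_800_000_000, need, costs)     # option 2: adds cost of services
```

Because the shares always sum to 1, both options exhaust the appropriation; the cost index only shifts funds toward higher-cost states.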
Allotment = Total federal funds x (Cost adjusted need population x Allotment percentage) / ∑ (Cost adjusted need population x Allotment percentage)

where Cost adjusted need population = Need population x Cost index, and

Allotment percentage = 1 - 0.20 x (State TTR per capita / National TTR per capita)

This formula would achieve taxpayer equity by basing allotments on a state's need population, adjusted for the cost of providing services, and its ability to fund program services. In this option, the formula includes an "allotment percentage" to account for a state's ability to contribute funding to the VR program. A state with fewer taxable resources compared to other states would have a larger allotment percentage and, therefore, a larger final allotment (all else being equal). "TTR" is used in the formula as the measure of a state's financing ability, since we regard Treasury's Total Taxable Resources (TTR) data to be a comprehensive measure of a state's taxable resources. The 0.20 in the allotment percentage equation indicates that, nationally, states' required contribution to the VR program is approximately 20 percent. If the matching requirement were to vary for each state, then an individual state's matching rate would simply be determined by its allotment percentage. Table 12 shows the allocations for each state and the percentage change from their fiscal year 2008 allocations under the three formula options (which are described in detail in appendix VII). For each of these options, we retained the minimum allotment that the current formula provides, 1/3 of 1 percent of the total federal funds appropriated to the VR program, or $9,463,837 for fiscal year 2008. As part of our study, we distributed a Web-based survey to all 80 VR agencies in the states, territories, and District of Columbia to obtain agency officials' views regarding the current formula, potential modifications to the formula, and the possibility of incorporating performance incentives into the formula.
In addition, we used the survey to obtain data on agency expenditures that were needed to develop our cost index. We received completed surveys from 74 of 80 VR agencies, for a response rate of 93 percent. The following figures show responses to all closed-ended questions, except for those questions concerning agency expenditures, which are discussed in appendix I. For more information about our methodology for designing and distributing the survey, see appendix I. Daniel Bertoni, Director, (202) 512-7215 or [email protected]. Michele Grgich, Assistant Director; Yunsian Tai, Analyst-in-Charge; Gregory Dybalski; Thanh Lu; Karine McClosky; and Barbara Steel-Lowney made significant contributions to all aspects of this report. Susan Bernstein provided writing assistance. Other advisers included Joanna Chan, Robert Dinkelmeyer, Ronald Fecso, DuEwa Kamara, Stuart Kaufman, Jacqueline Nowicki, Patricia Owens, Max Sawicky, and Roger Thomas.
|
State vocational rehabilitation (VR) agencies play a crucial role in helping individuals with disabilities obtain employment. In fiscal year 2008, the Department of Education (Education) distributed over $2.8 billion in grants to state agencies, using a funding formula that was last revised in 1978. Questions have been raised about whether this formula is outdated, allocates funds equitably, and adequately accounts for state agencies' performance. GAO was asked to: (1) examine the extent to which the current formula meets generally accepted equity standards, (2) present options for revising the formula, and (3) identify issues to consider with incorporating performance incentives into the formula. To address these objectives, GAO relied upon two equity standards commonly used to design and evaluate funding formulas: beneficiary equity, which stipulates that funds should be distributed so that each state can provide the same level of services to each person in need; and taxpayer equity, which stipulates that states should contribute about the same proportion of their resources to a given program. GAO analyzed data from Education, Department of the Treasury, Census Bureau, and other agencies; surveyed state VR agencies; interviewed agency officials and disability experts; and reviewed literature on performance incentives. The VR funding formula falls short of meeting equity standards because it uses imprecise measures of state needs and resources. The formula does not account for differences among states in the proportion of people with a disability or the costs of providing services. As a result, the amount of services that states can purchase per person with a disability varies, from $83 to $277 (see figure). In addition, the formula uses only per capita income to measure a state's ability to contribute to the program, excluding other taxable resources. 
GAO presents three options for revising the formula to illustrate a range of possibilities: the first distributes funds based on states' disability populations, the second also accounts for costs of providing services, and the third further accounts for state resources beyond per capita income. Because any formula change would redistribute funds among states, potentially disrupting services to individuals, GAO also presents options for establishing a transition period. Including performance incentives in the funding formula has potential for improving performance but can also pose challenges. These include: effectively balancing the VR program's multiple goals, rewarding agencies for meeting individuals' specific needs, and basing awards on an agency's performance rather than influences outside its control. GAO identified ways to mitigate these risks, such as using multiple performance measures to address different goals, and adjusting the performance level required for an agency to receive an incentive award. However, these approaches would still require careful consideration of several issues, such as how to account for clients' varying disability levels and needs and provide appropriate incentives for achieving desired outcomes.
|
BLM is responsible for issuing leases for oil and gas resources on and underneath BLM land, underneath other federal agencies’ land, and underneath private land where the federal government owns the mineral rights—amounting to roughly 700 million subsurface acres. Approximately 44.5 million of these acres are leased for oil and gas operations, of which about 11.7 million acres have active oil and gas production. In addition, BLM manages about 250 million federal surface acres, of which 472,000 acres have surface disturbance related to oil and gas production. To manage its responsibilities, BLM administers its programs through its headquarters office in Washington, D.C.; 12 state offices; 38 district offices; and 127 field offices. BLM headquarters develops guidance and regulations for the agency, while the state, district, and field offices manage and implement the agency’s programs. Because BLM has few acres of land in the eastern half of the United States, the Eastern States State Office, in Springfield, Virginia, is responsible for managing land in 31 states, while the remaining state offices generally conform to the boundaries of one or more states. Figure 1 shows the boundaries of the 12 BLM state offices. BLM has 48 field offices with an oil and gas program managed by 33 of its field offices that fall under the jurisdiction of 10 BLM state offices. The Idaho and Oregon BLM state offices do not contain a field office with an oil and gas program. Table 1 shows the number of wells managed by the 33 field offices and their associated BLM state office, as of May 26, 2010. The Federal Land Policy and Management Act of 1976 requires BLM to develop resource management plans, known as land use plans, which identify parcels of land that will be available for oil and gas development. BLM then offers for lease parcels of land nominated by industry and the public as well as some the agency identifies. 
The number of acres covered by a lease varies: the maximum number covered is 2,560 acres for leases in the lower 48 states and 5,760 acres for leases in Alaska. Similarly, the number of wells on a lease can also vary from 1 to more than 1,000, and well depths can range from a few hundred feet to more than 26,000 feet. Operators who have obtained a lease must submit an application for a permit to drill (APD) to BLM before beginning to prepare land or drilling any new oil or gas wells. The complete permit application package is a lengthy and detailed set of forms and documents, which, among other things, must include proof of bond coverage and a surface use plan. This surface use plan must include a reclamation plan that details the steps operators propose to take to reclaim the site, including redistribution of topsoil, configuring the reshaped topography, and seeding or other steps to re-establish vegetation. However, operators generally do not have to submit cost estimates for completing the reclamation. In reviewing the APD, BLM (1) evaluates the operator's proposal to ensure that the proposed drilling plan conforms to the land use plan and applicable laws and regulations and (2) inspects the proposed drilling site to determine if additional site-specific conditions must be addressed before the operator can begin drilling. After BLM approves a drilling permit, the operator can drill the well and begin production. The Mineral Leasing Act of 1920, as amended, requires that federal regulations ensure that an adequate bond is established before operators begin to prepare land for drilling to ensure complete and timely reclamation. Accordingly, BLM regulations require the operator to submit a bond in order to ensure compliance with all of the terms and conditions of the lease, including, but not limited to, paying royalties, plugging wells, and reclaiming disturbed land.
To ensure operators meet legal requirements, including reclamation, BLM regulations require them to have one of the following types of bond coverage: individual lease bonds, which cover all the wells an operator drills under one lease; statewide bonds, which cover all of an operator's leases in one state; nationwide bonds, which cover all of an operator's leases in the United States; or other bonds, which include both unit operator bonds that cover all operations conducted on leases within a specific unit agreement, and bonds for leases in the National Petroleum Reserve in Alaska. BLM accepts two types of bonds: surety bonds and personal bonds. A surety bond is a third-party guarantee that an operator purchases from a private insurance company approved by the Department of Treasury. The operator must pay a premium to the surety company to maintain the bond. These premiums can vary depending on various factors, including the amount of the bond and the assets and financial resources of the operator, among other factors. If the operator fails to reclaim the land they disturb, the surety company either pays the amount of the bond to BLM to help offset reclamation costs, or in some circumstances, BLM may allow the surety company to perform the required reclamation. A personal bond must be accompanied by one of the following financial instruments: certificates of deposit issued by a financial institution whose deposits are federally insured, granting the Secretary of the Interior authority to redeem it in case of default in the performance of the terms and conditions of the lease; cashier's checks; negotiable Treasury securities, including U.S.
Treasury notes or bonds, with conveyance to the Secretary of the Interior to sell the security in case of default in the performance of the lease's terms and conditions; or irrevocable letters of credit that are issued for a specific term by a financial institution whose deposits are federally insured and meet certain conditions. Unit agreements refer to multiple lessees who unite to adopt and operate under a single plan for the development of any oil or gas pool, field, or like area. The amount of a unit operator bond is determined on a case-by-case basis by BLM officials, and the minimum amount of a National Petroleum Reserve in Alaska bond is set in regulation—not less than $100,000 for a single lease or not less than $300,000 for a reservewide bond (submitted separately or as a rider to an already existing nationwide bond). If the operator fails to reclaim the land they disturb, BLM redeems the certificate of deposit, cashes the check, sells the security, or makes a demand on the letter of credit in order to pay the reclamation costs. The regulations establish a minimum bond amount in order to ensure compliance with all legal requirements. As we reported in 2010, these minimum bond amounts were set in the 1950s and 1960s and have not been updated. Specifically, the bond minimum of $10,000 for individual bonds was last set in 1960, and the bond minimums for statewide bonds ($25,000) and for nationwide bonds ($150,000) were last set in 1951. BLM regulations require BLM to increase the bond amount when an operator who previously failed to plug a well or reclaim land in a timely manner, resulting in BLM having to make a demand on a bond in the prior 5 years, applies for a new drilling permit. For such an operator, BLM must require a bond in an amount equal to its cost estimate for plugging the well and reclaiming the disturbed area if BLM's cost estimate is higher than the regulatory minimum amount.
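The bonding floor just described can be expressed as a simple rule. This is a sketch using the regulatory minimums cited in this report; the function and its parameters are illustrative, not BLM's actual procedure.

```python
# Hedged sketch of the minimum-bond rule described in the report.
# Regulatory minimums are those cited in the report; the logic below
# is an illustration, not BLM's implementation.

REGULATORY_MINIMUMS = {
    "individual": 10_000,   # last set in 1960
    "statewide": 25_000,    # last set in 1951
    "nationwide": 150_000,  # last set in 1951
}

def required_bond(bond_type, reclamation_cost_estimate, prior_bond_demand):
    """Return the bond amount BLM must require for a new drilling permit.

    If BLM had to make a demand on the operator's bond in the prior 5 years,
    the bond must equal BLM's cost estimate when that exceeds the minimum.
    """
    minimum = REGULATORY_MINIMUMS[bond_type]
    if prior_bond_demand:
        return max(minimum, reclamation_cost_estimate)
    return minimum

print(required_bond("individual", 45_000, prior_bond_demand=True))   # 45000
print(required_bond("individual", 45_000, prior_bond_demand=False))  # 10000
```

Note that for an operator with a clean record, the minimum applies regardless of how far BLM's cost estimate exceeds it, which is the gap the report's discussion of bond adequacy reviews addresses.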
BLM regulations state that BLM officials may require an increase in the amount of any bond when the operator poses a risk because of factors that include, but are not limited to, a history of previous violations, a notice from ONRR of uncollected royalties due, or a total cost of plugging existing wells and reclaiming land that, according to BLM's estimates, exceeds the present bond amount. When a BLM field office determines that an increase in the bond amount is warranted, it forwards its recommendation to the BLM state office, which decides whether and how much to increase the bond amount. After production has ceased, the operator may delay performing reclamation and instead allow the well to remain idle for various reasons. For example, expected higher oil and gas prices may once again make the well economically viable to operate in the future, or the operator may decide to use the well for enhanced recovery operations. Enhanced recovery operations involve, for example, using the well to inject water into the oil reservoir and push any remaining oil to operating wells. Idle wells include: Temporarily abandoned wells. These are wells that are physically or mechanically incapable of producing oil or gas of sufficient value to exceed direct operating costs but may have value for a future use. Operators must receive BLM approval prior to placing a well in temporarily abandoned status for more than 30 days. This approval, which lasts for up to 12 months, can be renewed annually at BLM's discretion. All temporarily abandoned wells must have current approval after the initial 30 days. Shut-in wells. These wells are physically and mechanically capable of producing oil or gas in quantities that are economically viable but that have not produced for 30 days. According to BLM officials, operators do not have to obtain BLM approval to place wells in shut-in status.
Wells become orphaned if an operator does not perform the required reclamation, the bond is not sufficient to cover well plugging and surface reclamation, and there are no other responsible or liable parties to do so. This situation may occur, for example, when an operator has declared bankruptcy. For orphan wells, BLM uses the bond and appropriated funds as necessary to complete the reclamation. As we reported in 2010, according to BLM data, the agency spent a total of about $3.8 million to reclaim 295 orphan wells in 10 states from fiscal years 1988 through 2009. BLM also estimated that there were an additional 144 orphan wells in seven states that needed to be reclaimed, with an estimated cost of approximately $1.7 million for 102 of these wells. According to BLM officials, idle wells have the potential to create environmental, safety, and public health hazards if they fall into disrepair, and they are at greater risk than other wells for becoming orphan wells. Therefore, these officials told us that it is important to manage idle wells so that they do not become orphan wells. The Energy Policy Act of 2005 (EPAct 2005) directs the Secretary of the Interior, in cooperation with the Secretary of Agriculture, to establish a program to remediate, reclaim, and close idle or orphan oil and gas wells located on federal land. For the purposes of this requirement, the act defines idle wells to be those wells that have been nonoperational for 7 years or longer and for which there is no anticipated beneficial use for the well.
Specifically, the program must, among other things, (1) include a means of ranking idle or orphan well sites for priority in remediation, reclamation, and closure, based on public health and safety, potential environmental harm, and other land use priorities; (2) provide for the identification of the costs of remediation, reclamation, and closure; and (3) provide for the recovery of those costs from the operators or entities providing a bond or other financial assurance required under state or federal law for an idle or orphan oil or gas well, or from their sureties or guarantors. BLM has developed two policies—one for bond adequacy and one for idle and orphan wells—to manage the potential liabilities on federal land. First, the bond adequacy policy directs BLM offices to review bonds and increase amounts as necessary to ensure, among other things, that the bond amount reflects the risk posed by the operator. Second, BLM's idle and orphan well policy, which implements EPAct 2005, directs field offices to review these wells and ensure they are either plugged and reclaimed or returned to production. BLM has established a bond adequacy policy that directs its field and state offices to periodically review bonds and increase the bond amounts as necessary. This policy is documented in three instruction memorandums (IM) sent to the BLM state offices administering an oil and gas program. The first of these IMs—IM 2006-206, issued in August 2006—directs each BLM state office administering an oil and gas program to establish an action plan. The goal of these plans is to develop a process to ensure the review of operations on federal oil and gas leases, including steps to increase bond amounts when necessary. Two subsequent IMs—IM 2008-122, issued in May 2008, and IM 2010-161, issued in July 2010—continue and build on the bond adequacy policy established in IM 2006-206.
Table 2 provides an overview of BLM's oil and gas bond adequacy policy based on these three IMs. In summary, according to BLM policy, when the specified activities occur—for example, when record title or operating rights are transferred or when operators change—BLM field office staff must perform a review to determine whether the existing bond amount is adequate. When determining bond adequacy, BLM field staff are to take into account a number of factors, including, but not limited to, the following: Liabilities. Liabilities may include ponds containing excess water and other materials produced from the well, wells with significant actual or potential liabilities, surface production facilities, or other surface uses with significant reclamation liabilities. The policy states that it is important that idle wells be reviewed to identify potential problems and liabilities and to assess adequacy of existing bond amounts. A history of previous violations. Previous violations may include failing to comply with the lease terms and notices or orders issued by BLM, particularly with regard to the proper plugging and abandonment of wells or reclamation of the disturbed surface area. Unique or unusual conditions. Unique or unusual conditions may occur either in the planned drilling operations or in the surrounding environment that will make the operations potentially more hazardous or create the potential for significant environmental damage resulting from an accident. Unpaid royalties. BLM receives a notice from ONRR that royalties are due. Costs higher than bond amount. As estimated by BLM field staff, the total cost of plugging existing wells and reclaiming land exceeds the bond amount. Taking these conditions into account, the policy gives broad discretion to BLM field office staff to determine if a bond is adequate or should be increased.
For example, if, while performing a bond adequacy review, BLM field staff determine that the operator poses a risk because the cost of well plugging and reclamation exceeds the bond amount, BLM can require an increase to the existing bond to cover the potential liabilities. The policy also allows BLM field staff to reduce the amount of the bond if the potential federal liability is reduced, but not to a level below the regulatory minimums. When it has been determined that a bond amount is inadequate, BLM policy states that the bond may be increased to any amount specified by BLM staff. While the policy does not specify how the exact bond increase amount is determined, it stipulates that the bond amount should not be increased based solely on the number of wells on the lease. Moreover, the bond amount is not to, in any circumstances, exceed the total of the estimated cost of plugging and reclamation, the amount of uncollected royalties due, and the amount of monies owed to the federal government due to outstanding violations. BLM policy stresses that the judgment and experience of its staff are paramount in deciding whether a bond needs to be increased or is adequate. When BLM field staff determine that a bond increase is warranted, BLM state office officials review the proposed increase and process or deny it. According to BLM's 2006 and 2008 bond adequacy policies, BLM is "mindful" of the need to maintain an acceptable risk level, yet not to place an undue financial burden on operators. Industry officials told us that increasing bond amounts for small operators can be burdensome in that surety companies may be unwilling to provide small operators a surety bond without a financial audit of their business, which in some circumstances can cost the operator between $25,000 and $30,000.
As a result, these officials told us that small operators frequently rely on personal bonds to meet BLM's bonding requirements, which in some circumstances can further tie up their already limited financial resources and impair their ability to perform the required reclamation. Further, BLM officials told us that in recognition of the potential burden on small operators, they may work directly with a small operator to develop and implement a plan for having the operator reduce their risk instead of requesting a bond increase. By issuing IM 2007-192 in September 2007, BLM established a program to rank all idle and orphan oil and gas wells on federal land, as required by EPAct 2005. According to BLM officials, the review process established by this policy is used to manage potential liabilities. The policy directs the field offices to rank idle and orphan wells and makes the field offices responsible for using the priority rankings to take action to have the idle wells either plugged and reclaimed or returned to production or service and to have the orphan wells properly plugged and the surface reclaimed. Specifically, under this policy the field offices are to develop an inventory of the idle and orphan wells under their jurisdiction and then rank them for priority in remediation, reclamation, and closure based on factors such as public health and safety, sensitive environmental resources, and other land use priorities. In order to aid the field offices with this work, IM 2007-192 provides guidance on procedures for determining priorities for plugging wells and reclaiming land.
The policy contains factors for the field offices to use when determining the idle wells' priority ranking: (1) the percentage of idle to active wells; (2) the number of years the well has remained idle; (3) environmental, safety, and public health concerns; and (4) sensitive environmental resource and other land use priorities, such as the intensity of recreational use of the land and whether the well is located in a significant wildlife area. These factors are each assigned a score and the total sum is used to determine whether a well has a high-, medium-, or low-priority ranking. BLM field office officials are then expected to work with well operators to either plug these wells or return them to production, or they may use this information to help support an increase in the bond amount. For orphan wells, field offices are to evaluate 14 factors to determine the well's priority ranking. The following 13 individual factors have specific point scores associated with them; field office staff are to review each factor, assign a point score, and then total them. Well leaking at the surface. Well not leaking at surface—possible pressure. Well bore configuration. Age of the well. Presence of surface contamination. Presence of vessels containing fluid (e.g., storage tanks). Hydrogen sulfide (H2S) concentration. Proximity to surface water. Proximity to water wells. Contaminated water wells. Proximity to residences or public buildings. Sensitive environmental resources and other land use priorities. Other environmental and safety concerns. The 14th factor used to determine an orphan well's priority ranking is the cost for plugging and reclaiming the oil or gas well. The field office may provide an estimate of this cost or use the standard estimate of $5 per foot to determine the estimated cost. The policy then directs each field office to send to the BLM Washington Office its priority ranking of orphan wells.
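The orphan well ranking procedure described above—total the 13 factor scores, then attach a plugging and reclamation cost estimate using the $5-per-foot default—can be sketched as follows. The individual point values are hypothetical; IM 2007-192 assigns specific scores that are not reproduced here.

```python
# Hedged sketch of the orphan well priority scoring described in IM 2007-192.
# Factor point values are illustrative placeholders; only the structure
# (sum of 13 factor scores plus a $5-per-foot default cost estimate)
# comes from the report.

DEFAULT_COST_PER_FOOT = 5  # standard estimate when no site-specific figure exists

def orphan_well_priority(factor_scores, depth_feet, cost_estimate=None):
    """Total the factor scores and attach a plugging/reclamation cost estimate."""
    total_score = sum(factor_scores.values())
    if cost_estimate is None:
        cost_estimate = depth_feet * DEFAULT_COST_PER_FOOT
    return total_score, cost_estimate

# Hypothetical well: three of the 13 factors scored, 8,000-foot well bore.
scores = {
    "leaking_at_surface": 10,
    "proximity_to_surface_water": 5,
    "age_of_well": 3,
}
total, cost = orphan_well_priority(scores, depth_feet=8_000)
print(total, cost)  # 18 points, $40,000 estimated plugging/reclamation cost
```

The total score drives the high-, medium-, or low-priority ranking, while the cost estimate informs how available plugging funds are allocated.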
According to IM 2007-192, the Washington Office staff reviews the information, develops a nationwide priority listing for all orphan wells on federal land, and allocates available funds to properly plug and reclaim the surface of these wells. BLM's idle and orphan well policy also directs each field office to develop an action plan for having these wells plugged or returned to production. The action plan is to include a timeline for when the field office expects to have its current inventory of idle wells properly plugged or returned to production. In addition, the action plan must discuss the field office's plan to manage wells that become idle in the future. BLM field and state offices have not consistently or completely implemented BLM's policies for managing the oil and gas wells on federal land to reduce the likelihood that BLM will need to pay for or perform reclamation. According to our analysis of 33 survey responses from 48 field offices, these offices did not (1) consistently conduct bond adequacy reviews in accordance with BLM policy, and their reliance on professional judgment has resulted in varying interpretations of the policy's criteria for increasing bond amounts; (2) consistently review all their idle and orphan wells or reduce their inventory of idle wells; or (3) develop consistent or complete plans for bond adequacy and idle and orphan well reviews. According to our analysis of the number of bond reviews reported by 33 survey respondents, the field offices do not always regularly review bonds and increase bond amounts as necessary. BLM policy calls for field offices to conduct a bond adequacy review when certain events occur. However, 13 of the 33 survey respondents reported that they either did not conduct any reviews or did not know the number of reviews they conducted for fiscal years 2005 through 2009. For example, the Vernal Field Office did not conduct any bond adequacy reviews during this period.
According to officials we interviewed in the Vernal Field Office, they did not have sufficient staff resources to conduct any bond adequacy reviews, in part because of a backlog in processing APDs. Table 3 shows the number of bond adequacy reviews conducted by BLM field offices from fiscal years 2005 through 2009. In addition, the total number of bond adequacy reviews conducted by the field offices varied substantially each year, from a low of 114 in fiscal year 2005 to a high of 693 in fiscal year 2008. BLM officials said that the fluctuation in the number of reviews is typically the result of a higher priority placed on other BLM activities such as completing APDs. As table 3 also shows, the bulk of the bond adequacy reviews was conducted by just a few field offices. In particular, Bakersfield and Miles City conducted 781, or nearly 45 percent, of the total reported 1,760 bond adequacy reviews conducted by the field offices included in our survey from fiscal years 2005 through 2009. Lack of resources was a recurring theme at many of the 16 field offices where we interviewed field office staff. For example, officials at 14 field offices reported that they face resource limitations for conducting bond adequacy reviews. Officials in seven field offices told us that their offices had not reviewed bonds in all instances called for in the policy, in part because of a lack of staff resources. Similarly, at 13 field offices, officials told us that they found it difficult to conduct bond adequacy reviews because BLM headquarters and state offices give priority to other work activities through annual work plans. We found that while many field offices use similar methodologies for evaluating bond adequacy and deciding whether to increase bonds, some apply other approaches. Some field office officials told us that they generally rely on their professional judgment to evaluate bond adequacy, as allowed by BLM policy. 
These officials said that they typically gather and evaluate a variety of information, such as the location and depth of the wells, the ratio of idle wells to active wells, and whether any money is owed by the operator for previous violations, as well as any other pertinent or unusual facts regarding the operator or the wells. They then use their professional judgment to decide whether to (1) keep the current amount of the bond; (2) work with the operator to reduce the risk the operator poses, for example, by developing a plan and a schedule to plug wells or bring wells back into production; or (3) increase the bond. Officials in two field offices stated that their offices took the following additional steps when making bond adequacy determinations: The Carlsbad Field Office evaluated the remaining oil and gas reserves available and the estimated remaining lifespan of the well—known as the production decline curve—to determine when the well might become idle. The Farmington Field Office developed an electronic spreadsheet for evaluating potential liability and automating the analysis of the data. The spreadsheet uses formulas to uniformly assess the number, depth, and production of wells; the bonding amount; the inflation rate; and operator compliance history. It then identifies whether there is a need to increase the bond and the amount of the bond increase. Officials in the Farmington Field Office told us that the goal is to use this spreadsheet to ensure that all operators are reviewed for bond adequacy at least once every 2 years. According to our analysis, BLM state offices, like the field offices, have varied in how they interpret BLM's policy for deciding whether to approve an increase to a bond amount requested by a field office. 
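As a rough illustration of the kind of formula-driven check the Farmington spreadsheet automates, the sketch below scores an operator's wells against the current bond. All field names, cost figures, weights, and thresholds here are illustrative assumptions, not BLM's actual criteria.

```python
# Hypothetical bond adequacy check: estimate liability from well depth and
# flag a bond increase when the estimate exceeds the bond and too many of the
# operator's wells are idle. Cost-per-foot and the idle-ratio threshold are
# assumed values, not BLM policy.

def assess_bond_adequacy(wells, bond_amount, cost_per_ft=10.0,
                         idle_ratio_threshold=0.25):
    """Estimate potential liability for an operator's wells and flag
    whether the current bond appears inadequate."""
    total_depth = sum(w["depth_ft"] for w in wells)
    idle_count = sum(1 for w in wells if w["idle"])
    idle_ratio = idle_count / len(wells)
    est_liability = total_depth * cost_per_ft  # plugging-cost proxy
    needs_increase = (est_liability > bond_amount
                      and idle_ratio > idle_ratio_threshold)
    recommended = max(bond_amount, round(est_liability)) if needs_increase else bond_amount
    return {"estimated_liability": est_liability,
            "idle_ratio": idle_ratio,
            "needs_increase": needs_increase,
            "recommended_bond": recommended}

wells = [{"depth_ft": 3000, "idle": True},
         {"depth_ft": 5000, "idle": False},
         {"depth_ft": 4000, "idle": True}]
result = assess_bond_adequacy(wells, bond_amount=25_000)
print(result)
```

Automating a rule like this is what makes it feasible to review every operator on a fixed cycle, such as the 2-year goal the Farmington officials described.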
As mentioned earlier, BLM’s 2006 and 2008 bond adequacy policies stated that “BLM is mindful of the need to maintain an acceptable risk level, yet not to place an undue burden on industry.” However, the current policy does not include this statement and none of the policies define the term “acceptable risk” or offer guidance on when to increase a bond. Instead, the policy instructs BLM to rely on the judgment of BLM state office officials in deciding when to increase a bond, and we found that these officials have varied in their approaches to increasing bond amounts. For example, BLM state office officials in Colorado, Montana, Utah, and Wyoming told us that they generally interpreted BLM policy as only allowing bond increases when the operator is not in compliance, among other things. In contrast, officials in BLM’s California State Office told us that in 2002 they broadly interpreted BLM regulations and policy as allowing them to increase bond amounts for all wells identified as a potential risk to the government. In coordination with the Bakersfield Field Office, they therefore raised the bond amounts on all but 40 of their more than 300 leases from regulatory minimum bond amounts to $20,000 for an individual bond and $75,000 for a statewide bond. In addition, in situations where an operator only has a nationwide bond, they require the operator to post a rider to the bond to cover their wells in California. This is because they were concerned that nationwide bonds could cover other wells and that other BLM state and field offices could draw upon them to reclaim orphan wells, leaving the wells in California without adequate bond coverage. BLM officials in the California state office were the only state level officials we interviewed who interpreted the policy in this manner. 
According to BLM headquarters officials responsible for overseeing the oil and gas program, the policy was intended to allow states some flexibility in making these decisions, and they have not done a comprehensive review of the implementation of the policy. According to our survey of BLM field offices, not all field offices have conducted reviews to identify wells that were idle or orphan on an annual basis as specified in BLM policy. In particular, 11 of the 33 survey responses indicated that the responding field offices had not conducted reviews for at least one of the years from fiscal year 2005 through fiscal year 2009, and 16 responses indicated that the respondents did not know how many wells they had reviewed for at least one of the years during this period. Furthermore, three field offices—Bakersfield, Farmington, and Worland—had reviewed 76 percent of the 15,660 wells reviewed by the field offices included in our survey. Table 4 shows the number of wells each field office reviewed from fiscal years 2005 through 2009. BLM field office officials cited a number of reasons why field offices have not conducted reviews under the idle and orphan well policy. In addition to the shortage of resources and other higher priority work mentioned earlier, some officials told us that they do not have access to complete and accurate well data, which we believe can affect their ability to conduct well reviews. For example, officials in the Rock Springs Field Office in Wyoming told us that they rely on BLM's AFMSS data to determine which wells are idle. However, BLM officials have told us that well status data in AFMSS are not routinely updated on the basis of production data, which makes it difficult to identify idle wells from these data. In contrast, field offices that rely on well data compiled and updated by the state in which they are located can more easily identify idle well status. 
For example, the Farmington and Carlsbad field offices in New Mexico have access to data compiled by the New Mexico Oil Conservation District, a New Mexico state agency that regulates oil and gas wells. According to officials we interviewed, these state data are more complete, accurate, and user-friendly than BLM well data. Finally, field offices we surveyed reported mixed results in getting wells plugged or returned to production. For example, while three field offices—Buffalo, Carlsbad, and Casper—had plugged or returned to production more than 100 wells since fiscal year 2007, several offices we surveyed had not reduced their number of idle or orphan wells. In addition, officials in the Newcastle Field Office reported that due to staffing issues they did not know how many wells had been plugged or returned to production since fiscal year 2007. Table 5 shows the actions taken by the field offices we surveyed to either plug or return wells to production since fiscal year 2007. Some of the field offices that had taken steps to have wells plugged or returned to production were assisted by state programs that create funds for plugging and reclaiming idle and orphan wells. For example, New Mexico has created an oil and gas well reclamation fund that is paid for by production taxes on oil and gas operations. Using this fund, the BLM Carlsbad Field Office has plugged 26 orphan wells from August 1995 to December 2008. Two of BLM’s 10 state offices (Alaska and Arizona) have not developed an action plan to help ensure that bonds are regularly reviewed for adequacy, as directed by BLM’s policy in IM 2008-122. In addition, two of the other eight state offices’ action plans—California and Wyoming—do not contain an element specified in the IM (i.e., indicates the steps to take when a bond increase is considered necessary). 
Some of the eight plans also vary in the elements they contain that would be useful to field offices in meeting the IM's goals, according to our analysis and comments provided by officials at BLM field offices. For example, none of the state bond adequacy action plans we reviewed contained guidance for field offices on how to determine the amount by which a bond should be increased, an element that some field office officials we interviewed said would be helpful. Table 6 shows the elements that we identified as contained in or missing from the BLM state offices' bond adequacy action plans. At the field office level, 22 of the 33 survey respondents reported that they did not have an idle well action plan as directed by BLM's IM 2007-192. While 11 field offices reported that they had developed idle well action plans, our review of these plans indicated that the plans lacked elements that the IM specifies should be included, such as a timeline for having the office's current inventory of idle wells properly plugged or returned to production. Table 7 shows the elements contained in or missing from the BLM field offices' idle well action plans for the 11 field offices that had developed such a plan. Most field office officials we interviewed told us that they face challenges in two interrelated areas for managing potential liability on federal land. First, BLM's bonding system—including the minimum bond amounts and inconsistent interpretation of policy for increasing bond amounts—impairs BLM's ability to manage potential liability. Second, limitations with BLM data restrict the agency's ability to evaluate potential liability and measure agency performance in managing potential liabilities on federal land. At 15 of the 16 field offices, BLM officials we interviewed noted challenges associated with the agency's outdated minimum bonding amounts. 
Specifically, the bond minimum of $10,000 for individual bonds was last set in 1960, and the bond minimums for statewide bonds and for nationwide bonds—$25,000 and $150,000 respectively—were last set in 1951. As we reported in 2010, if adjusted to 2009 dollars, these minimum amounts would be $59,360 for an individual bond, $176,727 for a statewide bond, and $1,060,364 for a nationwide bond. Figure 2 shows the current amounts set in 1951 and 1960 and these amounts adjusted to 2009 dollars. According to BLM officials we interviewed at 12 of the 16 field offices, these minimum bond amounts are inadequate for managing potential liability. This is because these minimum amounts may not be sufficient to serve as an incentive to encourage operators to comply with plugging and reclamation requirements and the cost to plug and reclaim a well site may far outweigh the value of the bond. For example, these officials told us that the cost to plug a well ranges from approximately $2.50 to $20 per foot of well depth and that wells can range from being just a few hundred feet deep to more than 26,000 feet deep. In addition, the reclamation costs can range from $200 to $15,000 per acre. In other situations, these officials noted, plugging and reclaiming a single well may cost more than $100,000, and it can cost more than $10,000 simply to get a work crew to the well site. Consequently, in most circumstances, a $10,000 individual lease bond is insufficient to cover the plugging and reclamation costs for one well, according to the officials we interviewed. Given these factors, a few field office officials we surveyed noted that some operators, particularly small, independent operators with wells producing only small amounts of oil and gas, may be inclined to declare bankruptcy and default on the bond rather than pay to properly plug and reclaim the well site. 
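The cost ranges the officials cited make it easy to see why a $10,000 bond falls short. A quick, hypothetical calculation, using figures chosen from within those ranges:

```python
# Illustrative only: per-foot plugging cost and per-acre reclamation cost are
# picked from the $2.50-$20/foot and $200-$15,000/acre ranges BLM officials
# cited; the specific well depth and acreage are made up.

def plug_and_reclaim_cost(depth_ft, cost_per_ft, acres, cost_per_acre):
    """Estimate the total plugging and surface reclamation cost for one well."""
    return depth_ft * cost_per_ft + acres * cost_per_acre

# A mid-range example: a 5,000-foot well plugged at $10/foot, with 2 disturbed
# acres reclaimed at $5,000/acre.
cost = plug_and_reclaim_cost(5_000, 10.0, 2, 5_000)
bond = 10_000  # regulatory minimum individual lease bond, set in 1960
shortfall = cost - bond
print(f"Estimated cost: ${cost:,.0f}; bond: ${bond:,}; shortfall: ${shortfall:,.0f}")
```

Even this middling scenario leaves the government exposed for several times the bond's value, which is the dynamic behind the bankruptcy incentive described above.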
These officials also told us that raising bond amounts for these small operators may in some situations make matters worse because the operators may not be able to provide the higher bond amount, and BLM’s effort to seek a bond increase may only hasten their decision to declare bankruptcy. To avoid this situation, BLM officials told us that they must devote significant time and resources to supervising these small operators and persuading them to properly plug and reclaim their wells, which places a drain on staff resources. Officials at 12 of the 16 field offices reported that they had too few resources to effectively manage potential liability. Officials in 15 of the 16 field offices we interviewed said that they face challenges conducting bond adequacy reviews. They noted the following examples: BLM staff have a limited amount of time available for conducting bond adequacy reviews because a significant amount of staff resources are needed to complete the process—potentially months of work—and BLM places a higher priority on other activities, such as processing APDs. The criteria for increases outlined in IM 2008-122 are vague, creating ambiguity about whether a request for an increase should be submitted and whether it will be approved. The BLM state office process for reviewing requests for a bond increase can, in some circumstances, be time consuming—sometimes taking a year or longer before a decision is made, by which time the conditions supporting the request may have changed or worsened. The Bakersfield Field Office in California was the only field office that did not report facing challenges conducting bond adequacy reviews. This is because this office is the only field office that raised bond amounts above the regulatory minimums for the majority of its bonds. 
Our analysis of data reported by the officials we surveyed indicates that about one-third of the field offices requested bond increases and many of the increases requested by field offices have not been approved by state offices. According to our survey of field offices, 13 of the 33 survey respondents reported that they did not request any bond increases, and 3 survey respondents reported that they did not know the number of bond increases they had requested. The 17 survey respondents that did request bond increases from May 2008 through November 2010 sought 93 bond increases worth about $19 million, and BLM state offices approved 59 (63 percent) of these increases, worth about $7 million, or 38 percent of the total value of increases sought, as shown in table 8. We identified three limitations in BLM’s data systems that restrict the ability of BLM field and state offices to evaluate potential liability and measure performance of BLM field offices’ implementation of the agency’s policies: (1) incomplete bond information in AFMSS, (2) unreliable field office counts of the number of idle wells, and (3) incomplete AFMSS data on the number of reviews for bond adequacy and idle wells. According to our analysis of AFMSS data, of the approximately 93,000 individual wells recorded in the database, more than 22,500 (about 24 percent) lack an associated bond number. Consequently, BLM field office officials told us it is difficult to accurately determine the oil and gas wells covered by a particular bond. To address this problem, BLM has developed a process to link well data from AFMSS to bond data in the LR2000 Bond & Surety System—which contains data on the amount and value of bonds held by BLM—to provide field office officials conducting bond adequacy reviews with the number of wells covered by a particular bond. For this system to work, however, BLM field office staff must manually enter bond data into AFMSS, yet they are not required to do so. 
As a result, the data necessary to make this process useful for evaluating potential liability across field offices are incomplete. The link between AFMSS and the LR2000 Bond & Surety System could address this problem for future bond entries if BLM required field office staff to enter and maintain bond number data into AFMSS. In addition, BLM field office officials told us that it is difficult to evaluate which bonds are associated with which potential liabilities when the wells are covered by a statewide or nationwide bond because BLM field offices generally cannot access AFMSS data for wells in areas managed by other field offices. Therefore, the data available to them have been insufficient to fully evaluate all of the potential liabilities that are covered by a particular statewide or nationwide bond. The link between AFMSS and the LR2000 Bond & Surety System will not fully address this problem since the link only shows the number of wells associated with a bond, and does not show further details about the wells, such as their production status, age, or condition. The 33 BLM field offices that responded to our survey reported a total of about 2,300 idle wells that had been inactive for 7 years or more as of fiscal year 2009. However, according to our analysis of ONRR’s Oil and Gas Operations Report (OGOR) data system, into which operators input monthly oil and gas production quantities, the number of idle wells on federal land under the jurisdiction of these offices is nearly double what was reported to us by BLM field offices—about 4,600 wells. Table 9 shows the number of wells we identified as inactive for 7 years or more using OGOR data and the number of idle wells reported by the 33 BLM field offices. BLM field offices did not question the accuracy of the OGOR data, but they noted three challenges associated with identifying idle wells that are likely the cause of the discrepancy. 
First, information on idle well status may be incomplete because BLM policy allows operators to keep wells in a shut-in status without alerting the field office that the well is not producing. According to our analysis of AFMSS data, about 1,600, or 32 percent, of the idle wells we identified from OGOR data are in shut-in status. However, this number may actually be higher because operators do not have to notify BLM when they place their well in shut-in status. As a result, BLM officials cannot solely rely on AFMSS data to accurately identify all wells that have been idle for 7 years or more, as directed by EPAct 2005 and BLM policy. Second, until October 1, 2010, a process BLM had developed in 2007 to link BLM’s well data in AFMSS with oil and gas production data from ONRR’s OGOR reports had not worked properly, according to BLM officials. OGOR data are based on information directly entered into an ONRR database by oil and gas operators. ONRR staff conduct data reliability checks and make corrections as appropriate. BLM officials told us that the problems with the linkage were addressed on October 1, 2010. However, even with this link between OGOR and AFMSS, it is still difficult to reconcile the data from the two systems. Reconciliation requires staff to manually compare well status data in AFMSS with production data from OGOR. Officials in BLM field offices told us that this process can be time consuming, depending on the total number of wells that must be reviewed, and can involve months of work looking through significant quantities of data. Third, according to some BLM field office officials, while some field offices have access to other more reliable sources of production data gathered by state conservation commissions and the global energy information provider IHS, these data sources also have limitations. For example, they generally do not distinguish between wells on federal, state, and private land. 
To use these data, field offices typically have to reconcile the wells from the state and IHS data with federal well numbers contained in AFMSS to identify those wells overseen by BLM. In addition, OGOR data show that BLM has a significant number of long-term idle wells—which BLM officials told us pose the greatest risk for causing environmental degradation. Our analysis of OGOR data as of July 7, 2010, shows that of the approximately 5,100 wells idle for 7 years or longer, roughly 45 percent, or about 2,300 wells, have not produced oil or gas for more than 25 years. Figure 3 shows the total number of idle wells calculated using OGOR data, by the number of years they have been idle (among wells idle for less than 7 years, the figure shows 1,706 wells idle for 1 year, 1,380 for 2 years, 962 for 3 years, 694 for 4 years, 490 for 5 years, and 450 for 6 years). Field offices vary in how they use AFMSS to record bond adequacy reviews; as a result, AFMSS data on the number of reviews for bond adequacy are inconsistently entered and therefore incomplete. This makes it difficult for BLM officials to track the efforts of BLM field offices in managing potential liability. Although BLM field office staff manually enter bond review data into AFMSS, the bond adequacy review policy did not instruct staff to do so until July 2010. In addition, as mentioned earlier, some field offices did not know the number of bond adequacy reviews they conducted in some fiscal years from 2005 through 2009. Consequently, the data that have been entered into AFMSS do not reflect the actual number of reviews that a field office may have conducted. For example, 8 of the 33 field office survey respondents reported to us that they had conducted more bond adequacy reviews from fiscal years 2005 through 2009 than the total number recorded in AFMSS covering the period from May 1, 1990 through March 17, 2010. Table 10 shows the number of bond adequacy reviews reported to us by the BLM field offices we surveyed and the number of bond adequacy reviews recorded in AFMSS. 
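The comparison behind table 10 amounts to a simple check of self-reported review counts against AFMSS records. A minimal sketch, with made-up office names and counts:

```python
# Flag field offices whose self-reported bond adequacy reviews exceed the
# number recorded in AFMSS, i.e., offices where AFMSS understates review
# activity. Office names and counts are fabricated for illustration.

surveyed = {"Office A": 120, "Office B": 45, "Office C": 0}
recorded_in_afmss = {"Office A": 80, "Office B": 60, "Office C": 0}

underrecorded = {office: surveyed[office] - recorded_in_afmss.get(office, 0)
                 for office in surveyed
                 if surveyed[office] > recorded_in_afmss.get(office, 0)}
print(underrecorded)  # offices with more reviews reported than recorded
```

When the dictionary is nonempty, the AFMSS totals cannot be trusted as a performance measure, which is the problem the report documents.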
A similar discrepancy occurs with data for idle and orphan well reviews in part because BLM’s policy does not direct field office staff to record idle and orphan well reviews in AFMSS. Consequently, the data that have been entered into AFMSS do not reflect the actual number of idle and orphan well reviews that a field office may have conducted. For example, 11 of the 33 field office survey respondents reported to us that they had conducted more idle and orphan well reviews from fiscal years 2005 through 2009 than the total number recorded in AFMSS covering the period from April 25, 2002 through March 12, 2010. In addition, of the almost 10,000 idle and orphan well reviews in AFMSS for all field offices, more than 68 percent of the records have blank date fields, making it impossible for BLM staff to know when an idle and orphan well review occurred by looking at data in AFMSS. Table 11 shows the number of idle and orphan well reviews reported to us by the BLM field offices we surveyed and the total number of idle and orphan well reviews recorded in AFMSS. To ensure that federal taxpayers do not have to cover the costs of reclamation for each of the approximately 93,000 oil and gas wells in BLM’s inventory, BLM has established a number of policies. However, most BLM offices have not fully implemented these policies because they have not always conducted bond adequacy reviews, consistently interpreted the policy for increasing bonds, identified all idle or orphan wells on federal land, or made progress in reducing their inventory of these wells. These deficiencies have the potential to increase the federal government’s exposure to paying for reclamation costs for idle and orphan wells on federal land. BLM’s ability to effectively manage potential liabilities is further impaired by a number of interrelated challenges. 
Because minimum bond amounts have not been updated or adjusted for inflation in more than 50 years, they may not be sufficient to serve as an incentive to encourage operators to comply with plugging and reclamation requirements, and the cost to plug and reclaim a well site may far outweigh the value of the bond. As a result, BLM officials must devote additional time and resources to manage the potential liability, which is difficult given their limited resources and other agency priorities. This situation is further exacerbated by BLM's vague policy for increasing bond amounts, which field offices have interpreted differently and which has not led to consistent and regular bond increases. BLM staff are also challenged by a lack of complete, consistent, and reliable data that can help them readily evaluate potential liabilities, make informed decisions about them, and evaluate agency performance for conducting bond adequacy and idle and orphan well reviews. We believe that the challenges BLM faces in managing potential liabilities are interdependent and cannot be solved in a piecemeal fashion. Instead, we believe that BLM needs a comprehensive approach to address these challenges in a holistic fashion that will ensure that the agency has (1) a complete understanding of the extent of the potential liability, (2) adequate bond amounts to ensure that operators and not taxpayers pay for reclamation, and (3) appropriate processes to ensure that the agency is able to effectively manage and reduce this liability. 
To better manage potential liability on federal land, we recommend that the Secretary of the Interior direct the Director of BLM to develop a comprehensive strategy to include four actions: (1) increasing regulatory minimum bonding amounts over time to strengthen bonding as a tool for ensuring operators' compliance; (2) revising the bond adequacy review policy to more clearly define terms and the conditions that warrant a bond increase; (3) implementing an approach for ensuring complete and consistent well records in AFMSS so that BLM field and state offices can better evaluate potential liability and improve decisionmaking; and (4) implementing an approach for better monitoring agency performance in conducting reviews for bond adequacy and idle wells. GAO provided Interior with a draft of this report for its review and comment. Interior concurred with all four of our recommendations and noted that, among other things, it has already started to take steps to improve the data in AFMSS to ensure the completeness and accuracy of these data. Interior's written comments are presented in appendix II of this report. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, the Director of BLM, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix details the methods we used to examine the Department of the Interior's (Interior) Bureau of Land Management's (BLM) policies and efforts to ensure that the bonds for oil and gas wells are adequate to cover the cost of reclaiming land disturbed by oil and gas operations. 
Specifically, we were asked to (1) identify BLM's policies for managing potential federal oil and gas liabilities, (2) determine the extent to which BLM has implemented these policies, and (3) describe the challenges, if any, BLM faces in managing potential oil and gas well liability. To identify BLM's policies for managing potential liabilities, we first interviewed officials in BLM's Washington, D.C., headquarters office to identify the BLM policies intended to manage these potential liabilities. These officials identified policies in two areas: bond adequacy and idle and orphan wells. The bond adequacy policy, which implements the increased bond amount regulation, included instructional memorandums (IM) IM 2006-206, Oil and Gas Bond Adequacy Reviews; IM 2008-122, Oil and Gas Bond Adequacy Reviews; and IM 2010-161, Federal Oil and Gas Bonds. The idle and orphan well policy included IM 2007-192, Priority Ranking of Orphaned and Idled Wells, which implements Section 349(b) of the Energy Policy Act of 2005 (EPAct 2005). We analyzed these policies to summarize the actions they outlined for BLM state and field offices. To determine the extent to which BLM has implemented its policies for managing potential liabilities, we reviewed laws and federal regulations related to onshore oil and gas bonding and idle wells on federal land and interviewed officials at BLM's Washington, D.C., headquarters office. We developed a Web-based survey, which we sent to all 48 BLM field offices with an oil and gas program and received responses from all these offices. Because some field offices work together to implement these policies by sharing staff resources, 15 of the 48 field offices we surveyed combined their responses, resulting in a total of 33 survey responses. 
We also interviewed officials in the 16 BLM field offices that collectively manage more than 85 percent of the oil and gas wells on federal land, as well as officials in the corresponding six state offices in which the 16 field offices are located. We also visited 12 BLM field and state offices. Table 12 shows the 10 BLM state offices and 48 field offices and notes which field offices combined their responses. Among other things, we asked officials in the 48 BLM field offices we surveyed to provide information on the number of bond adequacy and idle well reviews conducted each fiscal year from 2005 to 2009. We also asked these officials whether their field office had created an idle well action plan and completed the ranking procedures outlined in the idle and orphan well instruction memorandum. We also asked the field office officials to provide information on the most current inventory of idle wells their office manages. To evaluate the accuracy and completeness of these data, we compared the information the field offices provided with the oil and gas well production data, referred to as Oil and Gas Operations Report (OGOR) data, from Interior's Office of Natural Resource Revenue (ONRR)—formerly a component of the Minerals Management Service—on all wells that had been idle for 1 year or longer. The OGOR data were extracted by ONRR on July 7, 2010. We present these data on idle wells on federal land, broken out by the number of years idle, in figure 3. To compare the OGOR data to the data provided by BLM field offices, we limited our set of OGOR data to cover a period through fiscal year 2009—the most current year with complete BLM data. This comparison is presented in table 9. To assess the reliability of the OGOR data provided by ONRR, among other things, we electronically tested all elements related to our analysis and met with agency officials who administer the systems. We found that these data were sufficiently reliable for the purpose of this report. 
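The idle-well identification at the heart of this analysis can be sketched as follows, assuming a simplified mapping from well identifier to the last month with reported production; real OGOR records are operator-reported monthly production volumes per well, so the last-production date would first have to be derived from them.

```python
from datetime import date

# Sketch: a well is "idle" if it has had no reported production for
# min_idle_years or more as of a given extraction date. Well ids and dates
# below are fabricated for illustration.

def idle_wells(last_production, as_of, min_idle_years=7):
    """Return well ids with no reported production for min_idle_years or more."""
    cutoff = date(as_of.year - min_idle_years, as_of.month, 1)
    return sorted(w for w, last in last_production.items() if last < cutoff)

last_production = {
    "well-001": date(1998, 6, 1),   # idle about 12 years
    "well-002": date(2008, 3, 1),   # idle about 2 years
    "well-003": date(2001, 11, 1),  # idle about 8 years
}
print(idle_wells(last_production, as_of=date(2010, 7, 7)))
```

Running this kind of query against production data, rather than against AFMSS status codes, is what surfaced the roughly 4,600 idle wells that the field offices' own counts missed.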
We analyzed the production data to determine which wells on federal land are idle and the length of time since they last produced oil or gas. In our Web-based survey, we also requested the BLM field offices to submit detailed information on requests for increases in bond amounts from May 1, 2008 to December 16, 2010. We asked the field offices for information on the amount of the requested increases and the disposition of the requests. We used this information to determine the total number of bond increases requested, the value of the requested bond increases, the percentage of requests that were approved, and the total value of approved bond increases. We also asked the field offices to provide information on the wells they have plugged or returned to production since September 1, 2007. We used this information to determine the progress BLM field offices have made in reducing their inventory of idle wells. We also analyzed the information officials in the 16 BLM field offices provided during our interviews on BLM policies and steps these officials had taken to implement these policies. If the officials had not fully implemented the policies, we asked them why they had not done so. The practical difficulties of developing and administering a survey may introduce errors—from how a particular question is interpreted, for example, or from differences in the sources of information available to respondents when answering a question. Therefore, we included steps in developing and administering the survey to minimize such errors. We obtained comments on a draft of the survey from officials in BLM’s Washington, D.C., headquarters office. We also pretested the survey in person at three BLM field offices—Bakersfield, California, and Carlsbad and Farmington, New Mexico. 
We conducted these pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the information could feasibly be obtained, and (4) the survey was comprehensive and unbiased. We made changes to the content and format of the Web-based survey after these pretests, based on the feedback we received. To identify the challenges BLM faces in managing potential liabilities, we analyzed the information officials in the 16 BLM field offices and the corresponding six state offices provided during our interviews on what challenges they face, if any, in implementing BLM policies for managing the potential liability on federal land; whether they had sufficient tools and resources to implement the policies; and what their views were on BLM's bonding system and minimum bonding amounts. To assess challenges related to the electronic data available to BLM officials when evaluating and managing potential oil and gas liabilities, we requested electronic data, including OGORs from ONRR and data on bond adequacy reviews and idle well reviews from BLM's Automated Fluid Minerals Support System (AFMSS). The OGOR data were extracted by ONRR on July 7, 2010. The AFMSS data were extracted by BLM on March 17, 2010; May 26, 2010; and October 1, 2010. First, we analyzed the OGOR and AFMSS data to count the number of idle wells (i.e., wells that have not produced for 7 years or more) as of fiscal year 2009 for each field office, and we compared these data with idle well data provided by the 33 field offices we surveyed. We also grouped the idle wells we identified by the number of years they have remained idle. Second, we counted the number of bond adequacy reviews and idle well reviews in AFMSS and compared them with the number of reviews reported by the BLM field offices in our Web-based survey. 
Third, to determine the number of well records without bond numbers, we selected current well records in AFMSS without any information on their associated bond numbers and counted them by their unique 10-digit American Petroleum Institute number to identify individual wells that could not be easily associated with a bond. We present data from AFMSS regarding the number of wells managed by each of the 33 field offices. We believe AFMSS data are sufficiently reliable for this purpose, although our audit work identified reliability issues for other AFMSS data used in other contexts. To better understand the perspectives of operators regarding BLM bonding requirements, we interviewed industry officials with the Independent Petroleum Association of New Mexico and the Interstate Oil and Gas Compact Commission. We conducted this performance audit from January 2010 to February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Andrea Wamstad Brown (Assistant Director), Jeffrey B. Barron, Casey L. Brown, Ying Long, Kevin Remondini, Jerome Sandau, JulieMarie Shepherd, Carol Herrnstadt Shulman, Jeanette Soares, and Walter Vance made key contributions to this report.
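The well-record analyses described in this appendix (identifying idle wells, grouping them by years idle, and flagging records that lack bond numbers) could be sketched in a few lines of Python. This is only a rough illustration: the record layout, field names, and sample values below are assumptions for the sketch, not the actual OGOR or AFMSS schemas.

```python
from collections import Counter
from datetime import date

# Hypothetical well records; real OGOR/AFMSS extracts use different
# schemas -- every field name and value here is illustrative only.
wells = [
    {"api": "3004512345", "office": "Carlsbad",    "last_production": date(2001, 6, 30), "bond": "NMB000123"},
    {"api": "3004567890", "office": "Carlsbad",    "last_production": date(2009, 1, 31), "bond": None},
    {"api": "0502911111", "office": "Bakersfield", "last_production": date(1998, 3, 31), "bond": None},
]

AS_OF = date(2009, 9, 30)      # end of fiscal year 2009
IDLE_THRESHOLD_YEARS = 7       # BLM's definition of an idle well

def years_idle(well, as_of=AS_OF):
    """Whole years since the well last produced oil or gas."""
    return (as_of - well["last_production"]).days // 365

# Count idle wells (no production for 7 years or more) per field office.
idle = [w for w in wells if years_idle(w) >= IDLE_THRESHOLD_YEARS]
idle_by_office = Counter(w["office"] for w in idle)

# Group the idle wells by the number of years they have remained idle.
idle_by_years = Counter(years_idle(w) for w in idle)

# Flag well records that cannot be associated with a bond, keyed by the
# unique 10-digit American Petroleum Institute (API) number.
unbonded_apis = {w["api"] for w in wells if w["bond"] is None}
```

Counts produced this way from the agency extracts could then be compared with the survey responses, as the appendix describes.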
The number of oil and gas wells on leased federal land has increased dramatically. To help manage the environmental impacts of these wells, the Department of the Interior's (Interior) Bureau of Land Management (BLM) requires oil and gas operators to reclaim disturbed land in a manner it prescribes. To help ensure operators reclaim leased land, BLM requires them to provide a bond before beginning drilling operations. BLM refers to oil and gas wells and leased land that will require reclamation as potential liabilities because BLM may have to pay for reclamation if the operators fail to do so. GAO was asked to determine (1) BLM's policies for managing potential federal oil and gas well liability, (2) the extent to which BLM has implemented these policies, and (3) the challenges, if any, BLM faces in managing potential oil and gas well liability. GAO analyzed agency data on bonding and wells and interviewed BLM officials. GAO surveyed all 48 BLM field offices with an oil and gas program and received responses from 33 of them. To manage potential liability on federal land, BLM has developed policies for reviewing bond adequacy and for managing idle wells (wells that have not produced for at least 7 years) and orphan wells (wells that generally have no responsible or liable parties). The bond adequacy policy is intended to ensure that bonds are regularly reviewed by BLM field offices when certain events occur, or periodically, and increased as necessary to ensure that they reflect the level of risk posed by the operator. BLM's idle and orphan well policy is intended to ensure that nonproducing wells are either plugged or returned to production; this policy directs BLM field offices to develop an inventory of such wells and rank and prioritize them for reclamation based on potential environmental harm, among other things. BLM has not consistently implemented its policies for managing potential liabilities. 
Specifically, for fiscal years 2005 through 2009, GAO found that 13 of the 33 field office survey respondents reported that they either did not conduct any reviews or did not know the number of reviews conducted. Most field office officials told GAO that a lack of resources and higher priorities were the primary reasons for not conducting these reviews. In addition, BLM state offices did not consistently interpret BLM policy on when to increase bond amounts. For example, officials in three state offices told GAO that they generally require evidence of operator noncompliance before raising a bond amount, while another state office increased bond amounts for most operators because it viewed them as a potential risk to the government. With regard to reviews of idle or orphan wells, 11 of the 33 field office survey respondents reported that they had not conducted any reviews in one or more fiscal years during the 5-year period GAO examined. Officials identified a shortage of resources as the primary reason these reviews were not conducted. In addition, 2 BLM state offices and 22 field offices have not created action plans for reviewing bond adequacy and idle and orphan wells, as BLM policies call for. BLM faces two challenges in managing potential liability, according to field office officials. First, its bonding system impairs BLM's ability to manage potential liability. Specifically, the minimum bond amounts, which have not been updated in more than 50 years, may not be sufficient to encourage all operators to comply with reclamation requirements. These officials also stated that the criteria in the policy for deciding when to increase a bond are vague, creating ambiguity about whether a request for an increase should be submitted and whether it will be approved. Second, limitations with the data system BLM uses to track oil and gas information on public land restrict the agency's ability to evaluate potential liability and monitor agency performance. 
For example, the BLM field offices GAO surveyed reported a total of about 2,300 idle wells that had been inactive for 7 years or more as of fiscal year 2009. However, other Interior data indicate that the number of idle wells on federal land is nearly double the amount reported by the BLM field offices. GAO recommends that BLM develop a comprehensive strategy to, among other things, increase minimum bond amounts over time and improve its data system to better evaluate potential liability and agency performance. In commenting on a draft of this report, BLM agreed with GAO's recommendations and noted that it has already taken steps to improve the completeness and accuracy of its oil and gas data.
Medicare covers almost 49 million beneficiaries. Individuals who are eligible for Medicare automatically receive Part A benefits, which help pay for inpatient hospital, skilled nursing facility, hospice, and certain home health services. A beneficiary generally pays no premium for this coverage unless the beneficiary or spouse has worked fewer than 40 quarters in his or her lifetime, but the beneficiary is responsible for required deductibles, coinsurance, and copayment amounts. Medicare-eligible beneficiaries may elect to purchase Part B, which helps pay for certain physician, outpatient hospital, laboratory, and other services. Beneficiaries must pay a premium for Part B coverage, which generally was $99.90 per month in 2012. Beneficiaries are also responsible for Part B deductibles, coinsurance, and copayments. Beneficiaries electing to obtain coverage for Medicare services from private health plans under Part C are responsible for paying monthly Part B premiums and, depending on their chosen plan, may be responsible for a monthly premium to the Medicare plan, copayments, coinsurance, and deductibles. Finally, under Medicare Part D, beneficiaries may elect to purchase coverage of outpatient prescription drugs from private companies. Beneficiaries who enroll in a Part D plan are responsible for a monthly premium, which varies by the individual plan selected, as well as copayments or coinsurance. Table 1 summarizes the benefits covered and cost-sharing requirements for Medicare Part A and Part B, referred to together as Medicare fee-for-service. Many low-income Medicare beneficiaries receive assistance from Medicaid to pay Medicare's cost-sharing requirements. For Medicare beneficiaries qualifying for full Medicaid benefits, state Medicaid programs pay for Medicare's Part A (if applicable) and Part B premiums and cost-sharing requirements up to the Medicaid payment rate as well as for services that are not generally covered by Medicare. 
To qualify for full Medicaid benefits, beneficiaries must meet their state’s eligibility criteria, which include income and asset requirements that vary by state. In most states, beneficiaries that qualify for Supplemental Security Income (SSI) automatically qualify for full Medicaid benefits. Other beneficiaries may qualify for full Medicaid benefits through one of several eligibility categories that states have the option but are not required to cover, such as the medically needy category, which includes individuals with high medical costs. Congress created several MSPs—QMB, SLMB, QI, and QDWI—and, more recently, the LIS program to further assist low-income Medicare beneficiaries with their premium and cost-sharing obligations. Each program has different benefits, and beneficiaries qualify for different levels of benefits depending on their income. (See table 2.) Beneficiaries must also have limited assets to qualify for MSPs or LIS. MIPPA amended the asset limits for the QMB, SLMB, and QI programs to more closely align with the LIS limits beginning January 1, 2010. This raised the MSP asset limits for the first time since 1989 and ensured that those limits would be adjusted for inflation in the future. As with other Medicaid benefits, states have the flexibility to extend eligibility for MSP benefits to a larger population than federal law requires to be covered by implementing less restrictive income and asset requirements, for example by eliminating asset limits or not counting certain types of income. Therefore, eligibility requirements for MSPs vary across states, while requirements for LIS, which is administered by SSA, are uniform nationwide. MIPPA included several new requirements aimed at eliminating barriers to MSP enrollment. 
Specifically, MIPPA required SSA to, beginning January 1, 2010, transfer data from LIS applications, at the option of applicants, to state Medicaid agencies, and it required state Medicaid agencies to use the transferred information to initiate an MSP application. SSA was also required to make information on MSPs available to those potentially eligible for LIS, coordinate outreach for LIS and MSPs, and train staff on MSPs. In addition to the above requirements, MIPPA included a number of other provisions related to MSPs. As mentioned earlier, MIPPA amended the asset limits for QMB, SLMB, and QI to more closely align with the limits for LIS. It also required CMS to translate a previously developed model MSP application into 10 languages other than English. In addition, MIPPA included funding for states and other organizations to perform outreach for LIS and MSPs. Beginning January 2010, MIPPA also exempted certain types of income and assets from being counted when SSA makes a determination of LIS eligibility. For example, the law required that SSA not count the value of a life insurance policy as an asset. The law did not extend these changes to MSPs, but states have the option to make comparable changes to their programs. The treatment of the value of life insurance is one example of a potential difference in how LIS and MSPs count income and assets in determining program eligibility. In addition to the application transfers required under MIPPA, there are a number of other pathways to enrollment in MSPs. First, when individuals apply for Medicaid, states may screen them for eligibility for MSPs. Second, some states offer a streamlined application to apply specifically for enrollment in MSPs. Finally, more than half of states automatically enroll beneficiaries whom SSA has determined to be eligible to receive SSI benefits. Once enrolled in MSPs, states periodically determine whether beneficiaries remain eligible for the program and either renew or cancel enrollment. 
States have different processes for doing this, some of which require more steps by the enrollee than others. In implementing the MIPPA requirements, SSA reported transferring over 1.9 million applications to states, made information available on MSPs to potentially eligible individuals, conducted outreach, and provided training to staff on MSPs. SSA spent about $12 million in the first 3 years in implementing the MIPPA requirements, and officials reported that these efforts did not significantly affect its workload. As required by MIPPA, SSA began transferring applications in January 2010, and SSA reported transferring over 1.9 million applications to states between January 4, 2010, and May 31, 2012. SSA officials told us that all states were able to receive LIS data when the transfers began in January 2010 and that applications are transferred to states each business day. Through the application transfer, SSA provides states with the following information: (1) all of the information reported by the applicant or modified by SSA, including information on household composition, income, and assets; (2) whether SSA approved or denied LIS enrollment and the reasons for denials; and (3) the date that the LIS application was submitted, as eligibility for SLMB, QI, and QDWI is retroactive to that date. SSA decided that transfers would occur after SSA determined eligibility for LIS, which generally occurs within 30 days. As a result, a number of elements of the application information transferred to states have been verified by SSA. However, the timing of the transfer can affect benefits for certain applicants. Specifically, for those individuals enrolling in the QMB program, where benefits do not begin until the month after the state's determination of eligibility, waiting for the SSA data transfer may result in the loss of a month or more of benefits. 
SSA coordinated with CMS officials and state Medicaid agency officials about how to structure the exchange of application data. For example, SSA developed a standard data transfer agreement and signed an agreement with each state. In the months prior to implementation, SSA tested the data exchange with states in order to identify and resolve any concerns states had in receiving and using the transferred data. Finally, SSA programmed its data systems to transfer the applications as agreed with the states. SSA officials also told us that the agency designed the process to eliminate duplicate applications, applications with insufficient address data, and applications where the individual has opted out of the data transfer. In response to concerns raised by states once the transfers began, SSA also decided to delay the transfer of applications from individuals not yet eligible for Medicare until the applicant is within 1 month of eligibility. For 2011, SSA reported transferring 66 percent of all LIS applications where SSA determined eligibility to states to initiate an application for MSPs; an additional 13 percent of applications were not transferred because the applicants opted out of the transfer. SSA officials told us the remaining 21 percent were not transferred for various other reasons beyond the applicant opting out of the transfer, such as the applicant was not yet eligible for Medicare or the applicant submitted a duplicate application. To implement the requirements to make information available to potentially eligible individuals and coordinate outreach, SSA took several steps. SSA made information, such as the model MSP application developed by CMS, available through its website and in local offices. SSA conducted an outreach campaign in 2009 to provide information on LIS and MSPs, including the changes that MIPPA made to the eligibility requirements for both programs. 
As part of the campaign, SSA held events and issued new promotional materials, which the agency provided to local Social Security offices, community organizations, and health providers’ offices. SSA also sent about 2 million outreach letters in 2009 to individuals previously denied LIS benefits alerting them that eligibility requirements for LIS and MSP would be changing in January 2010 and they could now be eligible for LIS as well as MSPs. Since January 2010, SSA has sent letters describing LIS and MSPs to several categories of potentially eligible individuals. (See table 3.) To train staff, SSA developed two video trainings for its employees on the MIPPA changes to LIS and MSPs and made the video trainings available on-line. SSA required those staff that would be interacting with individuals potentially eligible for LIS or MSPs to view the video trainings prior to January 2010. SSA also updated its policies and procedures manual to include instructions for employees in handling individuals’ questions about MSPs during routine contacts. For example, SSA’s policies and procedures manual instructs employees to tell individuals about the availability of MSPs and that in applying for LIS the individuals can initiate an MSP application with their state Medicaid agency unless they opt out. SSA’s manual also instructs employees not to help complete MSP applications but to refer individuals with MSP questions to either their local Medicaid office or to State Health Insurance Assistance Programs, which help individuals complete applications for Medicare and Medicaid benefits. In fiscal years 2009 through 2011, SSA spent about $12 million to implement the MIPPA requirements. Of the $24.1 million appropriated by MIPPA for the initial costs of implementing the requirements, SSA spent $9.2 million combined for fiscal years 2009 and 2010 ($4.5 million and $4.7 million respectively). 
The remaining $14.9 million in unspent funds remains available to SSA for future costs in meeting the requirements. In fiscal year 2011, SSA spent about $2.5 million of the $3 million appropriated under MIPPA for the ongoing administrative costs of carrying out the requirements. These costs were financed through its first annual agreement with CMS. For fiscal year 2012, CMS agreed to fund $2.8 million. SSA officials told us that, based on data available as of July 2012, they expected SSA's workload, and therefore costs, to remain constant for fiscal year 2012. SSA officials told us that implementing the MIPPA requirements has not affected SSA's overall workload significantly as measured by the staff time committed to implementation and to handling inquiries and calls about MSPs. For example, SSA officials reported that implementation required 17 full-time equivalents (FTE) in 2009, 32 in 2010, and 8 in 2011 and indicated that the ongoing cost in staff time of meeting the requirements is relatively small. SSA officials told us that a larger amount of staff time was used in 2009 and 2010 because that was when SSA conducted its outreach campaign and designed and launched the application transfers, the latter of which required programming data systems, developing new procedures, and training staff. According to officials, some of the ongoing staff time will be dedicated to responding to inquiries and calls about MSPs. While SSA data indicated that the volume of field office inquiries and calls to its toll-free line related to MSPs increased since the requirements took effect, the volume was relatively small compared to the overall volume of inquiries and calls SSA received. For example, in fiscal year 2011, SSA received about 53,000 calls related to MSPs out of a total of 76.8 million fielded through the toll-free line. SSA officials also reported that, for fiscal year 2011, SSA was under a hiring freeze. 
As a result, SSA officials noted that FTEs that had been devoted to MSP work have been diverted from some of SSA’s more traditional workloads, such as processing claims for Social Security benefits or issuing Social Security numbers. However, the funding appropriated under MIPPA supported the relatively small number of FTEs used to implement the requirements and will continue to do so through the funding agreements with CMS. MIPPA prohibits SSA from using its own administrative funding to carry out the MSP requirements, and, therefore, SSA intends to continue to rely on funding provided under the CMS funding agreements for these activities. Using CMS data, we estimated that MSP enrollment increased each year from 2007 through 2011. The largest increases in MSP enrollment occurred in 2010 and 2011 (5.2 percent and 5.1 percent respectively), the first 2 years that the MIPPA requirements were in effect. (See table 4.) During this period, Medicare enrollment also grew by approximately 2 to 3 percent each year, from about 44.4 million people in 2007 to about 48.7 million people in 2011. A number of factors may have contributed to the higher levels of growth in MSP enrollment in 2010 and 2011, including SSA application transfers and outreach, other MIPPA provisions and related changes to state policies, and the economic downturn. SSA application transfers. In response to our survey of state Medicaid officials about the effects of the application transfers on MSP enrollment, officials from 28 states reported that MSP enrollment has increased as a result of the application transfers. In contrast, officials from 12 states reported that the application transfers did not have an effect on MSP enrollment, and officials from the remaining 10 states reported they did not know the effect of the transfers. 
While there are no nationwide data that demonstrate the effects of the SSA application transfers on MSP enrollment, 3 of the 6 states we contacted to supplement our survey tracked some information on the outcomes of applications transferred by SSA. As a result of the application transfers from SSA in 2011, Arizona reported enrolling about 800 of 16,000 applicants; Louisiana reported enrolling about 3,300 of about 21,800 applicants; and Pennsylvania officials reported enrolling about 16,000 of 37,500 applicants. It is not clear, however, if these beneficiaries would have enrolled in MSPs through other means if the application transfers had not been in place. For example, these enrollees may have instead enrolled by applying directly through the state. SSA outreach. As previously mentioned, SSA completed an outreach campaign in 2009 and has sent letters with information about MSPs to millions of potentially eligible individuals. Our prior work indicates that letters sent by SSA to potentially eligible individuals in 2002 resulted in more beneficiaries enrolling in MSPs than would have likely enrolled without receiving an SSA letter. Other MIPPA provisions. The MIPPA provision that more closely aligned asset limits for MSPs with the limits for LIS expanded the number of beneficiaries eligible for MSPs in 2010. Specifically, the requirement effectively expanded eligibility in 41 states by increasing the asset limits. In addition, MIPPA-funded outreach conducted by states and other organizations that began in 2009 may have increased the likelihood that applications resulted in enrollment. According to data from the National Council on Aging (NCOA), the national resource center funded to track the outreach, grantees assisted about 200,000 individuals from January 2010 through December 2011 in submitting a complete MSP application. 
NCOA reported that grantees in most states are able to access the applications transferred by SSA to identify those beneficiaries who potentially need assistance completing the MSP application. Economic downturn. It is unclear how the economy affects the population potentially eligible for MSPs. In 2011, we reported that during the economic downturn, from 2007 through 2010, unemployment among those aged 65 and older doubled and food insecurity increased. In addition, awards of SSA disability benefits to those ages 50 to 64 increased. However, our past work also found that the percentage of adults 65 and older with incomes below 200 percent of the federal poverty level did not increase. Officials from four of the six states we contacted to supplement our survey reported making changes to Medicaid eligibility systems, specifically, changes to both information systems and business processes, to receive and act upon the applications transferred by SSA. For example, officials from Arizona reported modifying the state’s information system to accept the data and automatically create records for the individuals in the eligibility system and generate notification letters asking the applicants for additional information in order to complete the application. Officials also said that the state established new business rules for processing applications received through the transfers. Officials from Colorado, one of the two states that did not report making changes, told us that the state plans to make changes to its system pending the availability of funding to implement the changes. Because the state did not have the funds to make the necessary system changes, officials said that they had to develop an interim process, under which transferred applicants receive an assessment of MSP eligibility only if the applicant completes the state’s request for additional information. 
Officials from the final state, Pennsylvania, told us that the state did not make changes to its information system as a result of the application transfers but did establish business processes for sorting the applications and forwarding them to county assistance offices for processing. These officials said that they could not determine the effect of the application transfers on the state's workload, including the effect on the volume of applications received. Officials from one state that reported an increased workload said that it is difficult to determine the effect of the application transfers but that for some applications the transfers had reduced the time needed for processing. States identified several reasons why processing the applications transferred from SSA had increased their workload, including that the transfers include applications for those who are clearly ineligible for MSPs, applications have inaccurate information, and applicants do not understand that their application for LIS is triggering an application for MSPs. The increased workload may have resulted from SSA transferring applications for individuals who are ineligible for MSPs because their income or assets exceed the federal MSP eligibility limits or they are not yet eligible for Medicare. In response to our survey, officials from one state reported that over 70 percent of the applications received from SSA are ineligible for the state's MSPs but that the state is still required to process the application. The officials noted that processing these applications is not a productive use of limited state resources. Officials from Pennsylvania, one of the six states we contacted to supplement our survey, reported that, of the approximately 37,500 applications transferred by SSA in 2011, about 14,600 had been denied LIS enrollment. Those rejected applicants represented a significant majority of the 21,600 rejected by the state for MSP enrollment. 
Officials told us that they have adjusted their process to automatically deny enrollment in MSPs for those individuals that were rejected by SSA for LIS because, for example, the person did not have Medicare or had income that exceeded the eligibility limits. In response to our survey and during interviews, officials from several states reported inaccuracies in the SSA data that may have made the applications more difficult for states to process. For example, Louisiana officials told us that the city of an applicant is sometimes misspelled in the SSA data. This triggers an error in the state’s system, which must be reviewed and corrected by the state. In response to our survey, officials from several states also indicated that the state spends time requesting information from applicants who do not provide it because they do not understand that they have applied for MSPs. For example, officials from Virginia commented that individuals do not realize that their application for LIS is triggering an application for MSP and do not end up providing the additional information needed for the state to make a determination of MSP eligibility. Arizona officials stated similar concerns and provided data indicating that 63 percent of all of the applications transferred by SSA and processed by the state in 2011 were denied because the applicant did not respond to the state’s request for additional information. The extent to which the SSA application transfers required system changes or affected workload may have depended on whether the state treated the transferred information as verified. Though CMS policy allows states to treat the information in the transferred applications as verified, in response to our survey, officials from 35 states reported requiring applicants to reverify some or all of the information before the state would determine eligibility for MSPs. 
States most frequently reported requiring applicants to reverify income, both earned and unearned, and assets. (See table 5.) Nine states reported requiring applicants to reverify all of the data elements transferred by SSA, including household size and identity. In the six states we contacted, we found some evidence to suggest that the application transfers had less of an effect on workload in states that treated the transferred information as verified. Specifically, of the three states that we contacted that accepted SSA's verification of the application information, two states reported being able to enroll some of the transferred applicants with little to no work required of caseworkers. Louisiana officials said that the transfers have allowed the state to autoenroll some applicants (where the eligibility system enrolls the applicant using the data transferred by SSA with no need for a caseworker to enter data or contact the applicant). For example, from March 2010 through January 2012, Louisiana autoenrolled about 14 percent of applicants transferred by SSA (5,937 of 43,414). Officials said that the transfers have reduced the workload for these applications. Similarly, officials from Pennsylvania said that the number of applications received from SSA where caseworkers need to contact applicants for more information was small, because, in addition to treating the information as verified, the state has access to 12 different data sources that can be used to address any discrepancies in the SSA data and provide asset information that is not included in the SSA data. In contrast, in the three states we contacted that required applicants to reverify some of the information (Arizona, Colorado, and Florida), the verification process included applicants reporting and documenting income and reporting and attesting to the accuracy of other information, such as assets and citizenship. This verification process included multiple steps by states and applicants. 
Differences in how SSA and states count income and assets for LIS versus MSPs may have driven states’ choices to require further verification of information in the transferred applications. For example, several states noted that the LIS application combines income for a couple, whereas the state needs to know the income for each spouse separately to determine eligibility for MSPs. Officials from Arizona, one of our selected states that requires applicants to reverify income, explained that the state needs to know the income of each spouse as well as any dependent children living in the household to determine eligibility for MSPs. In a February 2010 letter to state Medicaid directors with guidance on implementation of the MIPPA requirements, CMS noted that SSA has a more expansive definition of a household in determining eligibility for LIS than what most states use to determine MSP eligibility. The guidance reminded states that they have the option to align their definition with SSA’s, and noted that doing so would expand eligibility for MSPs to more people and reduce states’ administrative burden in processing the applications transferred by SSA. Some states also count certain types of income and assets that SSA does not. For example, SSA does not count the value of life insurance policies against the asset limit, but states count it unless the state has amended its Medicaid plan to disregard it. States must verify whether applicants have life insurance policies either by contacting the applicant or through another data source. Historically, MSPs have had low enrollment rates, with the Congressional Budget Office estimating in 2004 that only a third of eligible individuals were enrolled in the QMB program and an even smaller percentage in the SLMB program. 
Our estimates show that enrollment has grown each year for the last 5 years, with the largest increases occurring in 2010 and 2011 (5.2 percent and 5.1 percent), the first 2 years the MIPPA requirements were in effect. The differences between how income and assets are counted for LIS and MSPs make it difficult for some states to act on the applications transferred by SSA without requiring additional information from applicants, a step that requires additional work by the state and can present a hurdle to applicants. Aligning the methods for determining income and assets for MSPs with those of LIS is an option currently available to states, and some states have used that flexibility. More states may not have opted to do so because aligning these methods would likely expand the number of individuals who are eligible only for MSP, and not for other Medicaid, benefits. Because providing MSP benefits to such individuals is likely to increase costs to the state, states have no immediate financial incentive to provide MSP benefits to these individuals. Further, while aligning these methods may allow states to more easily act upon the applications transferred by SSA, it would create a method for counting income and assets for MSPs that may differ from how states assess eligibility for Medicaid, making it more complicated for states to assess MSP eligibility as part of assessing eligibility for Medicaid. We provided a draft of this report to HHS and SSA to review. HHS did not provide comments. SSA stated, in an e-mail, that the report accurately describes its implementation of the requirements. SSA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Administrator of CMS, the Commissioner of SSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To describe the change in Medicare Savings Program (MSP) enrollment from 2007 through 2011, we used data from the Centers for Medicare & Medicaid Services (CMS) to estimate annual enrollment and the change in annual enrollment over that period. The data, reported by states to CMS, included state-level information on the number of Medicare beneficiaries for whom states will pay the Medicare Part B premium. For our estimates we used data that represented the number of beneficiaries for whom states financed the Part B premium in December of each year. The data do not reflect enrollment for Qualified Disabled and Working Individuals, which CMS officials estimated numbered less than 300 people nationally as of March 2012. In addition, the data include some Medicare beneficiaries who are not eligible for MSPs but for whom states finance the Part B premium. We excluded some but not all of these beneficiaries from our analysis. Specifically, we excluded those beneficiaries categorized as “medical assistance only” as those beneficiaries are not eligible for MSPs per CMS’s policy manual. We were not able to exclude those categorized as “medically needy”—beneficiaries who may or may not also meet the eligibility requirements for an MSP— because CMS does not have data on this population for each of the years in our analysis. It is also likely that for a small percentage of beneficiaries, states did not specify the basis of eligibility, and therefore it is unclear whether they were eligible for MSPs or not. 
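The estimation approach described above, summing December enrollment each year after excluding beneficiaries who are not MSP-eligible and then computing the year-over-year change, can be sketched roughly as follows. The records and figures below are hypothetical illustrations, not CMS data:

```python
# Illustrative sketch of the enrollment estimate: count December enrollees
# per year, excluding "medical assistance only" beneficiaries (not
# MSP-eligible per CMS policy), then compute year-over-year percent change.
records = [
    # (year, eligibility_category, enrollees) -- hypothetical figures
    (2009, "msp", 100_000), (2009, "medical assistance only", 5_000),
    (2010, "msp", 105_200), (2010, "medical assistance only", 5_100),
    (2011, "msp", 110_600), (2011, "medical assistance only", 5_200),
]

def annual_enrollment(records):
    totals = {}
    for year, category, enrollees in records:
        if category == "medical assistance only":
            continue  # excluded from the MSP enrollment estimate
        totals[year] = totals.get(year, 0) + enrollees
    return totals

def yoy_percent_change(totals):
    years = sorted(totals)
    return {y: round((totals[y] - totals[y - 1]) / totals[y - 1] * 100, 1)
            for y in years[1:]}

totals = annual_enrollment(records)
print(yoy_percent_change(totals))  # {2010: 5.2, 2011: 5.1}
```

Because the "medically needy" and unspecified categories could not be excluded, an estimate built this way may overstate enrollment levels, though the year-over-year changes remain informative.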
While CMS does not have data for each of the years in our analysis on the number of beneficiaries categorized as medically needy or with an unspecified eligibility category, 4 percent were medically needy and 4 percent did not have an eligibility category specified as of May 8, 2012. Though our estimates of enrollment may be overstated, we believe that our estimates of the change in enrollment over the 5-year period are valid. To assess the reliability of CMS’s data on MSP enrollment, we interviewed CMS officials about their efforts to ensure the quality of the data and reviewed the CMS policy manual outlining the requirements states must follow in reporting the data. We also asked officials about the limitations of the data and reviewed any statements about data limitations in published reports. Finally, we reviewed data for each month of 2007 through 2011 to identify any anomalies in the data. We determined the data to be sufficiently reliable for the purposes of estimating the changes in MSP enrollment nationally over time; where relevant we stated the limitations of the data in the findings. In addition to the contact named above, Kristi Peterson, Assistant Director; Jeremy Cox, Assistant Director; Susan Barnidge; Krister Friday; Sandra George; Kristin Helfer Koester; Lisa Rogers; and Paul Wright made key contributions to this report.
|
Congress established four MSPs and the LIS program to help low-income beneficiaries pay for some or all of Medicare's cost-sharing requirements. Historically, low enrollment in MSPs has been attributed to lack of awareness about the programs and cumbersome enrollment processes through state Medicaid programs. MIPPA included requirements for SSA and state Medicaid agencies aimed at eliminating barriers to MSP enrollment. Most notably, MIPPA created a new pathway to MSP enrollment by requiring SSA, beginning January 1, 2010, to transfer the information from a LIS application to the relevant state Medicaid agency, which must then initiate an application for MSP enrollment. MIPPA also required GAO to study the effect of these requirements. This report describes (1) SSA's implementation of the requirements; (2) how MSP enrollment levels have changed from 2007 through 2011 and the factors that may have contributed to those changes; and (3) the effects of the MIPPA requirements on states' administration of MSPs. GAO reviewed documents and data on SSA's efforts to transfer applications and implement other MIPPA requirements, analyzed MSP enrollment data from CMS, surveyed Medicaid officials from the 50 states and the District of Columbia, and contacted officials from 6 states selected, in part, because they accounted for over 20 percent of MSP enrollment. The Social Security Administration (SSA) took a number of steps to implement the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) requirements aimed at eliminating barriers to Medicare Savings Program (MSP) enrollment and spent about $12 million in fiscal years 2009 through 2011 to do so. SSA reported transferring over 1.9 million Low-Income Subsidy (LIS) program applications to state Medicaid agencies between January 4, 2010, and May 31, 2012. SSA also took steps to make information available to potentially eligible individuals, conduct outreach, and train SSA staff on MSPs. 
In fiscal years 2009 and 2010, SSA spent $9.2 million of the $24.1 million appropriated by MIPPA for initial implementation costs, and in fiscal year 2011, SSA spent about $2.5 million of the $3 million appropriated by MIPPA for ongoing administrative costs. SSA officials told GAO that implementing the MIPPA requirements has not significantly affected its overall workload and that SSA expects funding provided under the law to be sufficient to carry out the requirements. Using data from the Centers for Medicare & Medicaid Services (CMS), GAO estimates that MSP enrollment increased each year from 2007 through 2011. The largest increases occurred in 2010 and 2011 (5.2 percent and 5.1 percent respectively), the first 2 years that the MIPPA requirements were in effect. Several factors may have contributed to the higher levels of growth in MSP enrollment during these 2 years, including SSA application transfers and outreach, other MIPPA provisions related to MSPs, and the economic downturn. For example, while there are no nationwide data demonstrating the effects of the SSA application transfers, officials from 28 states reported that MSP enrollment had increased as a result of the transfers. Officials from most of the six states GAO contacted to supplement its survey reported that the SSA application transfers led to changes in eligibility systems and had increased the state's workload, that is, the time spent processing MSP applications. The extent to which the application transfers resulted in system or workload changes may have depended on whether states accepted SSA's verification of the information transferred, as allowed under CMS policy. In response to GAO's survey, officials from 35 states reported that the state required the applicant to reverify at least some of the information. GAO found from interviews with officials from selected states that requiring reverification from applicants included multiple steps by the state and applicant. 
In contrast, officials from two states that accepted SSA's verification of the information told GAO that the state was able to enroll some of the applicants transferred by SSA with little to no work required by caseworkers. Differences in how SSA and states count income and assets when determining eligibility for LIS versus MSPs may have driven states' decisions to require verification from applicants. States have the flexibility under federal law to align methods for counting income and assets for MSPs with those for LIS and doing so may reduce the administrative burden of processing the transferred applications. However, doing so would likely increase enrollment and, therefore, increase state Medicaid costs. SSA, in an e-mail, agreed with GAO's description of its implementation of MIPPA requirements.
|
In January 2003, we designated federal real property as a high-risk area because of long-standing problems with excess and underutilized property, deteriorating facilities, unreliable real property data, and overreliance on costly leasing. Real property is generally defined as land and anything constructed on, growing on, or attached to land. In updates to our high-risk report, we acknowledged that the administration and real-property-holding agencies had made progress toward strategically managing federal real property and addressing some long-standing problems. Real-property-holding agencies had, among other things, designated senior real property officers, established asset management plans, standardized real property data reporting, and adopted various performance measures to track progress. The administration also established a Federal Real Property Council (FRPC) that supports reform efforts. FRPC has created the FRPP to be the inventory system for the federal real property portfolio. FRPP, which is overseen by OMB, includes 25 data elements that agencies are mandated to report annually, including performance measures on asset utilization, condition, mission dependency, and operating cost. Although progress has been made, in 2007, we also reported that the problems that led us to designate real property as a high-risk area still largely persisted, such as repair and maintenance backlogs, and the underlying obstacles remained. We also reported on the condition of facilities at the Smithsonian Institution, where we found that the deterioration of facilities had threatened collections and increased the cost of restoring historic items. We recommended that the Smithsonian make improvements to its cost estimates for facilities projects. According to data reported in the 2007 FRPP, the federal government owns about 1,115,000 real property assets worldwide with a replacement cost of over $1.5 trillion. 
The six agencies we reviewed reported that they had 568,618 real property assets in the U.S. with a replacement cost of approximately $1.2 trillion. Five of the six agencies estimated that their assets had approximately $30.5 billion in repair needs. DOD did not estimate repair needs for its FRPP reporting. FRPP does not require agencies to report their repair and maintenance backlogs, but requires agencies to determine a condition index for each asset by computing a formula that compares the asset’s repair needs with its plant replacement value (PRV). Specifically, a condition index equals (1 - repair needs/PRV) * 100. Based on this formula, a condition index is reported as a whole number from 1 to 100, with 100 representing the best possible condition for an asset. FRPP guidance defines repair needs as “the amount necessary to ensure that a constructed asset is restored to a condition substantially equivalent to the originally intended and designed capacity, efficiency, or capability.” Real-property-holding agencies are generally responsible for the cost of maintaining and repairing their assets. We have reported that owning an asset creates an implicit fiscal exposure for the government. This fiscal exposure is created because there is an expectation that the government will incur costs associated with maintaining and operating the assets it owns. As the National Research Council has observed, federal assets must be well maintained to operate adequately and cost effectively; protect their functionality and quality; and provide a safe, healthy, productive environment for the American public, elected officials, federal employees, and foreign visitors who use them every day. Facilities and the systems within them, such as electrical, heating, and air conditioning systems and roofs, generally have a finite expected useful life, over which time they should be maintained and after which time they can reasonably be expected to need replacement. 
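The FRPP condition index formula can be sketched as a small function; the repair-needs and PRV figures below are hypothetical:

```python
def condition_index(repair_needs, plant_replacement_value):
    """FRPP condition index: (1 - repair needs / PRV) * 100,
    reported as a whole number, where 100 is the best possible condition."""
    return round((1 - repair_needs / plant_replacement_value) * 100)

# Hypothetical asset: $2 million in repair needs against a $40 million PRV.
print(condition_index(2_000_000, 40_000_000))  # 95

# An asset with no identified repair needs scores 100.
print(condition_index(0, 40_000_000))  # 100
```

Note that because agencies define "repair needs" differently, identical index values from two agencies do not imply comparable asset conditions.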
The useful lives of facilities can be extended through adequate and timely repairs and maintenance. Conversely, delaying or deferring repairs and maintenance can, in the short term, diminish the quality of building services, and in the long term, shorten building lives and reduce asset values. Deferring needed maintenance indefinitely may ultimately result in significantly higher costs. At the six agencies we reviewed, we found processes in place for the agencies to periodically assess the condition of their assets—processes that the agencies also generally used to identify repair and maintenance backlogs for their assets. However, the agencies differed in how they conducted these condition assessments and how they define and estimate their repair and maintenance backlogs. Thus, the information is not comparable across agencies and cannot be used to understand the government’s potential fiscal exposure associated with its real property repair and maintenance needs. Each agency we reviewed conducted facility condition assessments either itself or through a contractor to identify repair and maintenance deficiencies associated with their assets. The intent of conducting a facility condition assessment is to obtain an overall understanding of the condition and repair and maintenance needs of an asset. Condition assessments can range from staff walking through a facility and visually inspecting its condition and identifying repair and maintenance issues to a more comprehensive assessment in which the individual building systems, such as the plumbing, heating, and electrical systems, are assessed by a professional and deficiencies are identified. Condition assessments may also identify projects for future years, such as a roof replacement expected within the next couple of years. As shown in table 1, each of the six agencies we reviewed periodically conducted condition assessments. How agencies define and estimate their repair needs or backlogs varies. 
This variation is not unexpected because, according to OMB officials, FRPP was purposefully vague in defining repair needs so agencies could use their existing data collection and reporting processes. In addition, there is no governmentwide definition or reporting requirement for repair and maintenance backlogs. DOE requires its sites to perform condition assessments on all real property assets at least once during any 5-year period (some assets, such as nuclear facilities, are assessed more frequently). The results of the assessments are reported to a DOE-wide database. While individual DOE sites have some flexibility in which assessment surveys they use, inspection methods must be in accordance with general DOE guidelines. For example, one DOE laboratory developed its own assessment tool in the early 1990s and uses in-house inspectors to perform the assessments, while other sites use contractors to conduct their assessments. For all the assessments, each identified deficiency is assigned an optimum year for correction through maintenance. If a maintenance activity is not performed within the optimum period, it is considered deferred maintenance and part of DOE’s backlog. The condition assessments also include cost estimates, developed using nationally recognized databases of repair costs, for correcting the deficiencies. NASA has used a contractor since 2002 to conduct annual deferred maintenance assessments of all its facilities and their component systems. NASA contractors visually assess nine different systems within each facility (such as the roof and the electrical system), and rate each facility using an overall condition index with a scale from 0 to 5. Based on that rating, the contractor uses an industry cost database and other information to estimate the costs of correcting the identified deficiencies. 
According to NASA officials, using a contractor and a standard estimating methodology to assess all its facilities provides consistent information across sites. DOI has comprehensively assessed the condition of what it calls its standard assets such as roads, bridges, trails, water structures, and buildings but has not yet conducted contractor-performed comprehensive assessments of the condition of heritage assets such as monuments, fortifications, and archeological sites. In May 2008, DOI issued guidance on how to estimate the condition of and maintenance costs associated with its heritage assets. For assessed assets with a value over $5,000, DOI conducts annual inspections to determine the condition of an asset and to determine the nature of needed repairs. DOI conducts condition assessments of assets with a value over $50,000 at least every 5 years to identify and estimate the cost of correcting repair and maintenance deficiencies. Either contractors or internal bureau staff perform the assessments and industry-standard cost-estimating databases are used, if available, to estimate the costs to correct identified deficiencies. If maintenance is needed, work is scheduled; if the work is not completed on time, it becomes part of DOI’s backlog. VA uses contractors to conduct facility condition assessments to evaluate the condition of its assets at least once every 3 years. The contractor inspects all major systems in each building (e.g., structural, mechanical, plumbing, and others) and gives each a grade of A (for a system in like-new condition) through F (for a system in critical condition that requires immediate attention). As part of this assessment, the contractor uses an industry cost database to estimate the correction costs for each system graded D or F—in poor or critical condition. VA’s reported backlog is the sum of all identified correction costs. 
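VA's approach of summing the estimated correction costs of systems graded D or F might be sketched as follows; the building systems, grades, and cost figures are hypothetical:

```python
# Hypothetical system grades and correction-cost estimates for one building.
# Per VA's approach, only systems in poor (D) or critical (F) condition
# contribute to the reported backlog; systems graded A-C contribute nothing.
systems = {
    "structural": ("A", 0),
    "mechanical": ("D", 450_000),
    "plumbing":   ("F", 120_000),
    "electrical": ("B", 0),
}

def building_backlog(systems):
    """Sum correction costs for systems graded D or F."""
    return sum(cost for grade, cost in systems.values() if grade in ("D", "F"))

print(building_backlog(systems))  # 570000
```

This illustrates why VA's estimate is not comparable to, say, GSA's 10-year reinvestment liability: each agency's backlog counts a different slice of its repair needs.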
In addition, if repair and maintenance is not completed, VA escalates the correction cost each year for inflation. GSA assesses all of its assets and estimates all repair and maintenance needs that may need to be done in the next 10 years. GSA conducts inspections known as physical condition surveys every 2 years on each asset. From these, GSA develops what it refers to as its reinvestment liability, which includes cost estimates for repair and maintenance items that GSA has determined need to be done now and expects will need to be done within the next 10 years. To conduct physical condition surveys, GSA staff walk throughout each facility answering a list of 37 standard questions about the asset and identifying the time frame within which the identified needs should be corrected, ranging from immediately to 6 to 10 years from now. GSA staff also develop cost estimates to repair each identified need. According to agency officials, the use of a standard survey allows some comparison between assets. DOD reported a condition index to FRPP based on what it calls a quality rating (Q-rating), ranging from Q1 (best condition) to Q4 (poorest condition). As shown in table 2, three of the four services determined the Q-rating by comparing an asset’s estimated repair and maintenance costs to the asset’s value. The fourth service assigned Q-ratings by considering the adequacy and age of the asset. DOD reported one of four condition indexes for its assets to FRPP based on the Q-rating of the asset. Thus, DOD did not provide an estimate of its repair and maintenance backlog. In determining Q-ratings for their assets, officials from the Army, Navy, and Marines told us that they used the results of facility assessments. According to these officials, these assessments were conducted either annually (by the Army), in 2005 (by the Navy) or at various times (by the Marines). 
According to Air Force officials, the Air Force totaled the cost of all maintenance projects for each asset but did not inspect the assets to determine if the assets had other repair and maintenance needs. For its fiscal year 2008 reporting, DOD plans to report the condition index for its assets as a percentage value consistent with FRPP rather than using the Q1-Q4 rating scheme. Because agencies define their backlogs differently, estimates cannot be compared across agencies or totaled to obtain a governmentwide estimate. For example, as discussed above, DOE, NASA, and DOI include the costs of all backlog work identified on their assessed assets while VA includes the cost of work on asset systems in the poorest condition and GSA includes costs for work it has identified to be done up to 10 years in the future. Additionally, because these estimates are not comparable, the condition indexes reported in FRPP cannot be compared across agencies to understand the relative condition or management of agencies’ assets. Thus, condition indexes should not be used to inform or prioritize funding decisions between agencies. While not comparable between agencies, backlog information collected in a consistent manner over several years can be useful within individual agencies for tracking trends. NASA officials noted, as of October 2008, they have 5 years of data from their annual assessment reports, which they are using to examine trends. The data show that NASA’s backlogs have been increasing recently, but the rate at which it has increased dropped between fiscal years 2007 and 2008. The consistency of reporting established by FRPC should allow for trend analysis for individual agencies starting with the 2008 data. While intra- agency trends could provide useful information for policymakers, it is not possible to compare backlog data between agencies since agencies develop their estimates differently. 
While there is no governmentwide reporting of repair and maintenance backlogs, agencies have been required to report deferred maintenance as part of their annual financial statements since 1996, and governmentwide totals for deferred maintenance have then appeared annually in the Financial Report of the U.S. Government. Since 1999, agencies have reported deferred maintenance as required supplemental information, which is not audited. For the six agencies we reviewed, we found differences in the basis of their deferred maintenance reported in their financial statements similar to the differences we found in their reporting of repair and maintenance backlogs. Statement of Federal Financial Accounting Standards No. 6, as amended, defines deferred maintenance as “maintenance that was not performed when it should have been or was scheduled to be and which, therefore, is put off or delayed for a future period.” The definition excludes any activities that would expand or upgrade an asset from its originally intended use (such as capital improvements) and any maintenance on an asset that is in acceptable condition. Federal Accounting Standards Advisory Board (FASAB) standards allow each agency’s management to both define “acceptable condition” and determine if its assets are in acceptable condition. FASAB staff told us that agencies use different methods to estimate their deferred maintenance and the standards for reporting are designed to accommodate these different methods. FASAB is currently considering a project to review requirements for reporting deferred maintenance as well as asset impairment. We found that the six agencies’ deferred maintenance estimates reported in their financial statements, like their backlog estimates, were not comparable. Specifically, DOE, NASA, and DOI equate deferred maintenance with their backlogs. 
For these agencies, the estimated repair and maintenance costs identified through their condition assessments for all assets are reported in the agencies’ deferred maintenance estimate. However, officials from all three agencies said that they do not consider their assets to be in unacceptable condition just because they have some identified deferred maintenance associated with them. DOD reported about $72 billion in deferred maintenance for 2007. This figure represents the cost to repair and modernize each facility so that it is in acceptable operating condition, which is defined differently within each of DOD’s services. According to DOD’s 2007 Financial Report, this estimate includes costs that are not precisely equivalent to deferred maintenance, but the costs were reported because they are considered “generally representative” of the magnitude of the agency’s deferred maintenance requirements. GSA officials said that GSA has no reportable deferred maintenance because it has determined that, at the overall portfolio level, its building inventory is in acceptable condition. However, GSA noted in its 2007 financial statements that it has approximately $6.3 billion in capital improvements that are not normal repair and maintenance costs. Since capital improvements are not classified as deferred maintenance under the accounting standard, these costs are not considered deferred maintenance. Similarly, VA’s reported deferred maintenance does not include capital projects or assets with less than $100,000 in estimated repairs. VA officials told us that VA’s deferred maintenance estimate is used only to comply with FASAB’s requirement and does not represent the cost to repair and maintain VA’s facilities. The estimates for both backlogs and deferred maintenance cannot be used to provide a governmentwide perspective on the cost of repair and maintenance needs. 
While officials at the six agencies we reviewed use these estimates internally to help inform their real property decisionmaking, the estimates are based on industry-standard cost factors and are not detailed estimates of project costs. According to officials at each agency, these estimates should not be viewed as accurate cost estimates for repair and maintenance, but are valid as an indicator of the magnitude of work that an asset needs. In addition, these estimates occur at a single point in time. The actual repair and maintenance project for an asset may occur well after the deferred maintenance or repair needs are estimated, and construction costs can rise significantly after the estimates are made but before the project is undertaken. Also, some officials told us that while these estimates address the cost to correct identified deficiencies, as projects are bundled together and a work plan is determined, additional work may need to be done to complete the project. For example, additional work such as removing and replacing ceilings to access pipes or reconfiguring a space to accommodate new systems equipment may need to be done although it was not in the estimate to correct the identified deficiency. In addition, FRPP requires agencies to report data on every asset. As a result, agencies reported backlog estimates associated with assets that are inactive, that are not critical to their missions, or that have been identified for demolition in the next few years. In addition, for those agencies that equate deferred maintenance with backlogs, their deferred maintenance estimate also included the costs associated with these assets. Agencies may not have any intention of repairing some assets and would not seek funding for the identified repair and maintenance deficiencies. Thus, for some agencies, simply totaling the estimated repair and maintenance cost for each asset may overstate the costs. 
Each agency that we reviewed manages its backlog as part of its overall real property management. Agencies focus on maintaining and repairing assets that are critical to safety and accomplishing their missions, and each agency has processes in place to prioritize repair and maintenance work based on the potential impact of not doing the work on the agency’s mission. In addition to performing the identified repair and maintenance work, agencies use other techniques, such as asset disposal and replacement, to reduce their overall repair and maintenance backlogs. In spite of these efforts, agency officials generally expect their backlog estimates to increase as the federal portfolio of real property continues to age and the cost of making repairs increases. Real property managers at the six agencies told us that it is more important to prioritize repair and maintenance work on the basis of safety and the potential impact of not doing the work on the agencies’ missions rather than on when the work was identified to be done. DOE is the only agency we reviewed with a specific program to reduce its repair and maintenance backlog. DOE’s National Nuclear Security Administration (NNSA) has the Facilities and Infrastructure Recapitalization Program (FIRP), which was established in 2000 to reduce NNSA’s repair and maintenance backlog from the 1990s. The current goal for the program is to eliminate $900 million of this 1990s-era backlog by 2013. The program does not address new growth in the backlog. So far, the program has eliminated about $500 million of this backlog. For example, one DOE laboratory recently used FIRP funds to build four new office buildings so that staff could be moved out of older buildings that had a 1990s-era backlog. Agency officials—both at headquarters and at the sites we visited—told us that they prioritize repair and maintenance for assets that they consider to be important to their mission when deciding which projects to fund. 
Many of the sites we visited used a risk assessment process to prioritize their projects for funding. This process considers the probability of a failure, such as an electrical outage or a roof leak, and the probable impact of such a failure on the agency’s mission. The higher the probability of failure and the higher the probable impact of such a failure on the agency’s mission, the higher the priority the project would receive for funding. Projects related to safety also received high priority for funding. The following are illustrative of comments we heard from agency officials on our site visits: At VA’s Palo Alto Medical Center, mission is the main factor that determines project priorities, and the focus is on patient care buildings. Administrative buildings are always a lower priority. If a building does not house any patients or research, then it may not be as thoroughly studied for seismic issues and is a lower priority for funding. At NASA’s Ames Research Center, officials told us that they prioritize repair and maintenance projects based on how the project will affect the center’s mission, safety, or compliance with new regulatory requirements. As a result, employees at Ames are able to accomplish the center’s mission. According to NASA officials, this prioritization is typical for all NASA Centers. At DOD’s Travis Air Force Base, maintenance officials told us that they focus their repair and maintenance funds on those buildings that directly affect the mission of the base, such as airplane hangars and runways. As a result, those facilities are in good condition. At DOI’s Patuxent Wildlife Refuge, priority is given to health and safety concerns and to assets that support wildlife. According to Patuxent officials, caring for wildlife is the core mission of the refuge, and therefore repair and maintenance items for facilities that affect wildlife receive higher priority than items that affect other buildings, such as offices. 
At GSA’s federal office building in New Carrollton, Maryland, officials told us that they prioritize repair and maintenance work based on how the repair need affects the customer and the extent of any safety concerns. At DOE’s Lawrence Livermore National Laboratory, a facilities governance board develops a prioritized list of repair and maintenance projects by considering the effect on the laboratory’s mission and the probability of failure. The laboratory’s program staff determine the potential effect on mission and provide input into the prioritization of projects. Some agencies have developed other tools, processes, and performance measures to help manage their real property portfolios and prioritize repair and maintenance projects. For example, DOI established an agencywide process for prioritizing assets based on its mission. Specifically, DOI uses an asset priority index (API) in combination with information on an asset’s condition to establish a clearer link between individual assets and mission, and to assist managers in deciding where to focus their resources. API scores range from 0 to 100 and are based on two components—mission dependency (80 percent) and asset substitutability (20 percent). Mission dependency criteria are determined at the bureau level and reflect each bureau’s unique mission. For example, the National Park Service ranks its assets as having high, medium, low, or no importance in three areas: resource protection, visitor services, and park operations. Assets are scored on substitutability depending on whether there is a substitute asset that can perform comparable functions and serve a comparable purpose. The Washington Monument, for example, is unique and would receive the highest score in this category. On the other hand, if there are two similar warehouses close to each other, they would score much lower. 
After considering health and safety priorities, API scores are compared with the condition of each asset, and those with high API scores and low condition ratings are generally given priority for repair and maintenance projects while those with low API scores and low condition ratings are considered for disposal. NASA requires its centers to conduct their own detailed condition assessments at least every 5 years. These assessments, which are separate from the annual deferred maintenance assessments, are used by the centers to identify and prioritize repair and maintenance projects. According to officials at the Ames Research Center, their assessment focuses more on active, mission-critical assets and repairs and maintenance that they will try to get funded within the next 5 years. Information provided by NASA’s centers identified a backlog of about $1 billion, far lower than the $2.3 billion in deferred repair and maintenance needs NASA reported for fiscal year 2007. According to NASA officials, the backlog reported by these individual NASA centers is lower than the deferred repair and maintenance needs NASA reported because the centers include only the most important projects that they believe should receive funding, instead of all projects to address their backlog as estimated in NASA’s annual deferred maintenance assessment report. Within each agency that we reviewed, repair and maintenance projects can be prioritized at different levels. For example, while DOI has an agencywide policy about how each bureau should prioritize repair and maintenance projects, DOD generally provides the base commander (or equivalent official responsible for a military base) with substantial discretion in deciding how to prioritize repair and maintenance projects. GSA officials told us that their prioritization process is a collaborative effort among property managers, asset managers, other regional staff, and headquarters staff.
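DOI's approach described above, in which a weighted asset priority index is compared with an asset's condition, can be illustrated with a minimal sketch. The 80/20 weights and the 0-100 scale come from the report; the threshold values, function names, and sample assets are assumptions for illustration only and are not DOI's actual implementation.

```python
def api_score(mission_dependency: float, substitutability: float) -> float:
    """API = 80% mission dependency + 20% asset substitutability (each 0-100)."""
    return 0.8 * mission_dependency + 0.2 * substitutability

def triage(api: float, condition: float,
           high_api: float = 70.0, low_condition: float = 50.0) -> str:
    """High-API assets in poor condition get repair priority; low-API assets
    in poor condition are disposal candidates (cutoffs are hypothetical)."""
    if condition >= low_condition:
        return "monitor"
    return "repair priority" if api >= high_api else "consider disposal"

# A unique, mission-critical asset (like the Washington Monument example)
# scores at the top of the scale; a substitutable warehouse scores far lower.
monument = api_score(100, 100)   # -> 100.0
warehouse = api_score(40, 10)    # -> 34.0
```

In this sketch, a poor-condition asset scoring 100 would be flagged "repair priority," while a poor-condition asset scoring 34 would be flagged "consider disposal," mirroring the decision rule described above.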
At NASA, the centers assign priorities, with headquarters involved in the funding decisions for more expensive projects. At VA, projects are prioritized first at the local level, then at the regional and national levels. While projects are prioritized at different levels within an agency, each project competes against other potential projects within that agency but does not compete with projects at other agencies. Agency officials told us that they have a few strategies to address their repair and maintenance backlogs aside from correcting the identified deficiencies. Specifically, officials at DOD, DOE, DOI, GSA, and NASA told us that disposing of buildings and structures that no longer serve their missions, including through demolition, is an effective way to reduce their repair and maintenance backlogs. As these buildings are disposed of, their repair and maintenance backlogs are eliminated. However, agency officials told us that demolishing a building can be expensive, and they cannot always demolish as many buildings as they would like. Officials at DOI’s Patuxent Wildlife Refuge told us that they would like to demolish 20 to 25 buildings, but they have not received the funds to do so. NASA’s building demolition program has been funded at $10 million annually, but officials said that this is just “a drop in the bucket” compared to the buildings NASA would like to demolish. According to DOE officials, the department has eliminated 15 million square feet of space since fiscal year 2002. Officials at multiple agencies also told us that, when they determine it is appropriate to dispose of a building, their primary motivation is not always to reduce their backlog, but this can be an added benefit. Agencies can also reduce their backlogs through “replacement by construction.” Using this strategy, an agency can decide that, while it still needs the space, it is more cost-effective to dispose of a building and build a new one than to repair the existing building.
For example, NASA plans to demolish seven older buildings and replace them with a new multi-use office building at one of its centers. When this work is done, the repair and maintenance backlogs at the seven buildings will be eliminated. GSA officials also said that they are using this tool at ports of entry to replace border stations. These officials noted that GSA and other agencies are often limited in their ability to use this tool because of its impact on the federal budget, since federal budget scorekeeping rules require the full cost of construction to be recorded up front in the budget. Despite these strategies, agency officials told us that they generally expect their repair and maintenance backlogs to increase. Specifically, officials at five of the six agencies we reviewed told us that needs increase as buildings age, and a good portion of their current portfolios is more than 30 years old. As a result, these assets will require more money for operations and maintenance, and building systems are reaching the point where they are expected to be replaced. For example, officials at one site told us that, given current conditions, they estimate that their backlog may grow from $75 million in fiscal year 2008 to $107 million in fiscal year 2012, mainly because a large number of assets are nearing the end of their useful lives and will need replacing over the next 5 years. Agency officials also told us that, as facility inspections and real property information continue to improve, agencies could discover greater repair and maintenance needs. For example, while park staff have conducted annual condition assessments of the Golden Gate National Recreation Area’s fortifications and other unique assets, they expect the backlog associated with these assets to increase significantly once a contractor performs a comprehensive condition assessment.
Finally, as construction costs increase, as they have over the last several years, the cost of repair and maintenance work may also increase, contributing to a rise in agencies’ backlogs. Officials at the six agencies we reviewed told us that there is a relationship between the level of repair and maintenance funding and agencies’ backlogs. DOD officials told us that they have invested in restoring, modernizing, and replacing some assets, and they expect the backlog associated with these assets to decrease in the next 4 years. As mentioned earlier in this report, DOD has developed a model to determine the cost of sustaining its facilities. In theory, if repair and maintenance work is funded to sustain facilities, backlogs will not occur. According to a DOE official, DOE is committed to funding maintenance at industry-standard levels. The department’s maintenance expenditure grew by about 64 percent from fiscal year 2003 through fiscal year 2007, and its reported backlog decreased by 3 percent. In contrast, the maintenance budget at one NASA center went down by about 40 percent from fiscal year 2005 (when the maintenance budget was $14.5 million) to fiscal year 2006 (when the maintenance budget was $10.4 million). The maintenance budget has since remained fairly constant through fiscal year 2008. According to officials at this center, this funding history has directly contributed to the growth of the center’s repair and maintenance backlog, and they expect the backlog to continue to increase. At the six agencies we reviewed, officials have managed their facility repairs and maintenance to minimize the impact of their backlogs on the agencies. Officials said that their repair and maintenance backlogs have generally not affected the ability of their agencies to accomplish their missions, but the backlogs have led to higher operating and maintenance costs and short-term inconveniences.
Also, some officials cautioned that their backlogs create a real potential for an unanticipated incident to occur that could adversely affect an agency’s mission. At some sites, agency officials told us that a key responsibility of the maintenance staff is to keep the facilities up and running, and they praised their staff for creating work-arounds that allow agency staff, despite problems, to continue working to accomplish the agency’s mission. At some of the sites we visited, the costs included in the backlog estimate were to replace basic systems—such as electrical, heating, and air-conditioning systems and roofs—that have exceeded their expected useful lives. The staff said that they spend a lot of time, effort, and money to patch these systems and keep them going, which allows the agency to continue to operate but is not efficient. In addition, the failure of one of these systems at a critical location could adversely affect an agency’s mission. At the sites we visited, we did not identify or hear of any instances in which an agency’s mission had been significantly hampered as a result of a repair and maintenance backlog. Most of the examples cited affected operations and maintenance costs and staff’s quality of life or raised concerns about the potential for a failure that would adversely affect an agency’s mission. Agency officials at some of the sites told us that the effect of their repair and maintenance backlog is difficult to see because the maintenance staff have prioritized projects that directly affect the mission and have done an excellent job of keeping the facility operating while facing increased repair needs. For example, officials at one site told us that repair and maintenance are often deferred on facilities that do not directly affect the site’s mission. As shown in figure 1, a maintenance shed has been allowed to deteriorate and now has rotting wood and missing shingles on the roof.
According to officials, the shed has not been repaired because funding has been spent on more mission-critical facilities. Repair and maintenance backlogs can lead to higher costs because affected assets are generally not operating as efficiently as possible. At some sites, officials showed us building systems that are 30 or more years old that they are trying to keep operational. Newer systems, such as heating and air-conditioning systems, could operate more efficiently, provide more reliable service to the tenants, and reduce operating costs. In addition, overall maintenance costs increase when a roof that is due for replacement is repeatedly patched rather than replaced. At one site we visited, leaking steam pipes are creating a hazard as hot steam is released. The leaks are also increasing operating costs for energy, water, and maintenance chemicals because additional cold water must be heated to make new steam and must also be chemically treated. Officials said that repairing the steam distribution system is not critical to the site’s mission and the leaking steam pipes mostly just increase operating costs. A project to repair the steam system has been proposed for about 10 years and would cost about $7 million. We found that maintenance staff sometimes devise creative solutions, such as the system that the maintenance staff at one site we visited set up to funnel water from a roof leak into a water bottle that then directs the water to a drain. This solution stopped the water from further damaging the building and leaking into areas occupied by staff while deferring the cost of correcting the problem. We saw one building that had been flooded, from which some offices had to be evacuated due to the water and subsequent mold growth. Maintenance staff at one site we visited had to move staff from a building where the floor was beginning to rot into another building with little available space, which they described as “squeezing in” the staff.
Repair and maintenance backlogs can interrupt agencies’ work. Officials at one site told us that the age of the fire alarm systems contributed to an increase in false fire alarms. The fire alarm systems are old, are beyond their useful life expectancy, and are part of the agency’s identified backlog. In addition, some alarms were triggered when air conditioning systems were restarted, causing changes in air pressure and velocity and blowing dust into the air stream. During each alarm, staff had to stop working and leave the building. As a result, the site lost labor time, and concerns arose about staff becoming complacent and not taking the fire alarms seriously. Replacement of the fire alarm system in each building on the site is underway and is scheduled to be completed in all buildings in 2011. We heard from several officials that while they prioritize their work based on the expected impact an incident might have on the agency’s mission, they cannot necessarily predict when or where an incident might occur. At one agency, officials told us that it is standard operating procedure to cover sensitive equipment during off hours to protect it from dust, debris, moisture, humidity, and unexpected incidents. Covering equipment is one way to mitigate the risk of damage to equipment from repair and maintenance backlogs. At one site we visited, officials said that a cooling coil from an old heating, ventilation, and air-conditioning system that is part of their backlog leaked water into a clean room that contained multimillion-dollar equipment. Fortunately, the equipment was covered with a tarp and the leak ran down the perimeter wall, not onto the equipment. However, had the equipment gotten wet, it could have been severely damaged, directly affecting the agency’s ability to carry out its mission.
Many believe that the overall condition of the federal government’s real property assets continues to deteriorate, and it is difficult to predict when or where an incident might occur that would severely affect an agency’s mission. However, governmentwide information on the estimated costs to repair and maintain agencies’ real property assets that are important to their missions is not currently available. The tens of billions of dollars that agencies have reported to us in backlogs or in their financial statements as deferred maintenance associated with their real property does not capture the federal government’s true fiscal exposure. The flexibility that agencies were given to facilitate their reporting of repair costs in FRPP and deferred maintenance in their financial statements has resulted in estimates that include different items. Using current estimates to understand the government’s fiscal exposure related to real property backlogs would understate the exposure in some cases and overstate it in others. For example, agencies may understate the government’s exposure if they have estimated only the cost of correcting assets in the poorest condition or if they have incomplete information about the condition of their assets. Conversely, they may overstate the government’s exposure if they include costs associated with repair and maintenance projects they do not plan to do or include the costs of projects that would not affect the agency’s mission even if completed. In addition, the requirement to report on all assets has resulted in agencies reporting estimated repair and maintenance costs associated with projects they do not plan to undertake because, for example, they intend to demolish the asset or expect there to always be projects with a higher priority.
With information that reflects the government’s fiscal exposure from repairing and maintaining real property that is important to its mission, decision makers would be better positioned to address future costs. To provide a realistic estimate of the government’s fiscal exposure resulting from repair and maintenance backlogs and to minimize the potential for duplicative reporting requirements, we recommend that the Deputy Director for Management, Office of Management and Budget, in conjunction with FRPC and in consultation with FASAB, explore the potential for developing a uniform reporting requirement in the FRPP that would capture the government’s fiscal exposure related to real property repair and maintenance. Such a reporting requirement should include a standardized definition of repair and maintenance costs related to all assets that agencies determine to be important to their mission and therefore capture the government’s fiscal exposure related to its real property assets. We provided a draft of this report to OMB, DOD, DOE, DOI, GSA, NASA, and VA for review and comment. OMB generally concurred with the report and agreed with our recommendation. OMB’s letter is contained in appendix II. DOD, DOE, DOI, GSA, NASA, and VA provided technical clarifications, which we incorporated where appropriate. In addition to its technical comments, DOD also raised some concerns about our recommendation to OMB. Specifically, DOD was concerned that our recommendation that OMB develop a new uniform federal reporting requirement was based in part on what DOD viewed as an inaccurate and misleading characterization of its condition rating process. We recommend that OMB, in conjunction with FRPC and in consultation with FASAB, explore the potential for developing a uniform reporting requirement in the FRPP that would capture the government’s fiscal exposure related to real property repair and maintenance.
Our recommendation is based on the lack of governmentwide information specifically related to the costs to repair and maintain those real property assets that are important to the agencies’ missions. We believe it is important for OMB to explore the potential of capturing such information to quantify the government’s fiscal exposure in this area. Through the incorporation of DOD’s technical comments, we have clarified our discussion of DOD’s condition rating process, and DOD informed us that we have accurately described its process. DOD’s letter is contained in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director and Deputy Director of OMB; the Secretaries of Defense, Energy, the Interior, and Veterans Affairs; and the Administrators of GSA and NASA. Additional copies will be sent to interested congressional committees. We also will make copies available to others upon request, and the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to (1) describe how agencies estimate their repair and maintenance backlogs, (2) determine how agencies manage their backlogs and the expected future changes in maintenance and repair backlogs, and (3) identify how backlogs have affected some facilities.
To accomplish our objectives, we reviewed the six agencies that each told us in 2007 that they had over $1 billion in repair and maintenance backlogs associated with their held assets: the Departments of Defense, Energy, the Interior, and Veterans Affairs; the General Services Administration; and the National Aeronautics and Space Administration. For each agency, we interviewed headquarters officials, reviewed agency documents, obtained data on repair and maintenance backlogs for the agency’s held assets, and visited two agency sites to determine how the sites estimate and manage their backlogs as well as the extent to which the sites’ missions have been affected by their backlogs. In selecting sites to visit, working with our Applied Research and Methods team, we reviewed agency inventory and performance measurement data from the Federal Real Property Profile (FRPP), including information on the condition of each real property asset, issued by the Federal Real Property Council, as well as data on deferred maintenance and repair needs from the agencies. We performed our site visits in two geographic areas of the country—the Washington, D.C./Virginia/Maryland area and the San Francisco Bay area in California—because each agency had significant sites in these areas. Within these geographic locations, using FRPP data, we determined the average condition of each agency’s assets and then selected sites that (1) were at or near the average condition of the agency’s assets and (2) reported a high repair and maintenance backlog compared with other sites in average or near-average condition. Our criteria for selecting sites at each agency included asset types and uses—focusing on core assets—geographic location, quantitative indicators (such as asset value, condition index, and amount of backlog), and the mission dependency ranking for assets. The information from our site visits is illustrative and cannot be generalized to sites agencywide.
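The site-selection step described above — finding sites at or near an agency's average condition and then choosing those with the largest backlogs — can be sketched as follows. The field names, tolerance, site count, and sample data are illustrative assumptions, not the actual FRPP data or selection parameters used in the review.

```python
def select_sites(sites, tolerance=10.0, count=2):
    """Pick the `count` sites with the largest backlogs among those whose
    condition is within `tolerance` of the agency's average condition."""
    avg = sum(s["condition"] for s in sites) / len(sites)
    near_avg = [s for s in sites if abs(s["condition"] - avg) <= tolerance]
    return sorted(near_avg, key=lambda s: s["backlog"], reverse=True)[:count]

# Hypothetical agency data: condition on a 0-100 index, backlog in $ millions.
sites = [
    {"name": "A", "condition": 85, "backlog": 20},
    {"name": "B", "condition": 60, "backlog": 90},   # excluded: far below average
    {"name": "C", "condition": 80, "backlog": 75},
    {"name": "D", "condition": 83, "backlog": 40},
]
```

With this sample data the average condition is 77, site B falls outside the tolerance band despite its large backlog, and sites C and D are selected.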
We also interviewed officials from the Office of Management and Budget (OMB) because it oversees the implementation of Executive Order 13327, which addresses federal real property management. We reviewed guidance documents related to this order and obtained relevant agency data from OMB implementing the order. Additionally, we interviewed officials from the Federal Accounting Standards Advisory Board (FASAB) to obtain information on FASAB’s accounting standards for required governmentwide reporting of deferred maintenance by agencies in their annual financial statements. We reviewed these FASAB standards, examined agencies’ current reporting of their deferred maintenance to meet the standards, and consulted with our Financial Management and Assurance team about the standards. We also reviewed relevant GAO reports—especially those related to our designation, in 2003, of federal real property as a high-risk area because of long-standing problems—problems that included alarming backlogs of repair and maintenance in federal facilities. While the definition of real property includes land, our review focused on buildings and structures and excluded land because backlogs are generally associated with buildings (such as offices and hospitals) or structures (such as airfields or ports). We conducted this performance audit from September 2007 through October 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We determined the data were sufficiently reliable for the purposes of this report. In addition to the contact person named above, Nancy Boardman, Maria Edelstein, Elizabeth Eisenstadt, Carol Henn, Yumiko Jolly, and John W.
Shumann also made key contributions to this report.
In 2003, GAO designated federal real property as a high-risk area. In 2007, GAO reported that real-property-holding agencies and the administration had made progress toward managing their real property, but underlying problems, such as backlogs in repair and maintenance, still existed, and six agencies reported having over $1 billion in repair and maintenance backlogs. Owning real property creates a fiscal exposure for the government from the expectation that agencies will incur future maintenance and operations costs. GAO was asked to (1) describe how six agencies estimate their repair and maintenance backlogs, (2) determine how these agencies manage their backlogs and the expected future changes in these backlogs, and (3) identify how backlogs have affected operations at some sites. GAO reviewed agency documents, interviewed officials, and visited two sites at each of the six agencies. The six agencies that GAO reviewed all periodically assess the condition of their assets to identify needed repairs and maintenance but then use different methods to define and estimate their repair and maintenance backlogs. As a result, the agencies' estimates are not comparable. Three of the six agencies--the Departments of Energy (DOE) and the Interior (DOI) and the National Aeronautics and Space Administration (NASA)--defined their backlogs as work that was identified to correct deficiencies. A fourth agency, the Department of Veterans Affairs (VA), also defined its backlog as work identified to correct deficiencies, but VA's backlog included only work on systems, such as mechanical and plumbing systems, found to be in poor or critical condition. Neither the General Services Administration (GSA) nor the Department of Defense (DOD) tracked a backlog. Instead, GSA calculated its reinvestment liability--the cost of repairs and maintenance needed now and in the next 10 years.
DOD assigned a quality rating to each facility, based on the ratio of repair costs to the asset's value. The backlog estimates do not necessarily reflect the costs the agencies expect to incur to repair and maintain assets essential to their missions or to avert risks to their missions. For example, these estimates could understate an agency's backlog because they are based on industry-standard costs, or could overstate an agency's backlog because they include inactive assets that are not essential to the agency's mission or may be demolished. The six agencies GAO reviewed generally manage their backlogs as part of their overall real property management and expect the size of their future backlogs to increase. Agencies focus on maintaining and repairing real property assets that are critical to their missions and have processes to prioritize maintenance and repair items based on the effects those items may have on their missions, regardless of whether the items are considered part of the backlogs. For example, VA officials told GAO that their first priority is to perform maintenance and repairs at places that directly affect patient care, such as operating rooms. Agencies are using strategies such as demolishing assets that are no longer needed to reduce their overall backlogs. However, agency officials generally expect their backlogs to increase as the federal portfolio of real property continues to age and construction costs increase. At the six agencies GAO reviewed, officials have managed their facility repairs and maintenance to minimize the impact of their backlogs on their operations. Officials said that postponing repairs and maintenance generally leads to higher operating and maintenance costs and short-term inconveniences, but they have managed the risks so that the agencies can continue to accomplish their missions. For example, maintenance costs increase when a roof that is due for replacement is repeatedly patched rather than replaced.
While several officials said their maintenance staffs have been able to limit the impact of backlogs on operations, they cautioned that there is a real potential for an incident to adversely affect an agency's mission. At one site GAO visited, a multimillion-dollar piece of equipment could have been damaged by a leak from an air conditioning system if it had not been covered with a tarp.
Mobilization is the process of assembling and organizing personnel and equipment, activating or federalizing units and members of the National Guard and Reserves for active duty, and bringing the armed forces to a state of readiness for war or other national emergency. It is a complex undertaking that requires constant and precise coordination between a number of commands and officials. Mobilization usually begins when the President invokes a mobilization authority and ends with the voluntary or involuntary mobilization of an individual Reserve or National Guard member. Demobilization is the process necessary to release from active duty units and members of the National Guard and Reserve components who were ordered to active duty under various legislative authorities. Mobilization and demobilization times can vary from a matter of hours to months depending on a number of factors. For example, many air reserve component units are required to be available to mobilize within 72 hours while Army National Guard brigades may require months of training as part of their mobilizations. Reserve component members’ usage of accrued leave can greatly affect demobilization times. Actual demobilization processing typically takes a matter of days once the member arrives back in the United States. However, since members earn 30 days of leave each year, they could have up to 60 days of leave available to them at the end of a 2-year mobilization. DOD has six reserve components: the Army Reserve, the Army National Guard, the Air Force Reserve, the Air National Guard, the Naval Reserve, and the Marine Corps Reserve. Reserve forces can be divided into three major categories: the Ready Reserve, the Standby Reserve, and the Retired Reserve. 
The Ready Reserve had approximately 1.2 million National Guard and Reserve members at the end of fiscal year 2003, and its members were the only reservists who were subject to involuntary mobilization under the partial mobilization declared by President Bush on September 14, 2001. Within the Ready Reserve, there are three subcategories: the Selected Reserve, the Individual Ready Reserve (IRR), and the Inactive National Guard. Members of all three subcategories are subject to mobilization under a partial mobilization. At the end of fiscal year 2003, DOD had 875,072 Selected Reserve members. The Selected Reserve’s members included individual mobilization augmentees—individuals who train regularly, for pay, with active component units—as well as members who participate in regular training as members of National Guard or Reserve units. At the end of fiscal year 2003, DOD had 274,199 IRR members. During a partial mobilization, these individuals—who were previously trained during periods of active duty service—can be mobilized to fill requirements. Each year, the services transfer thousands of personnel who have completed the active duty or Selected Reserve portions of their military contracts, but who have not reached the end of their military service obligations, to the IRR. However, IRR members do not participate in any regularly scheduled training, and they are not paid for their membership in the IRR. At the end of fiscal year 2003, the Inactive National Guard had 2,138 Army National Guard members. This subcategory contains individuals who are temporarily unable to participate in regular training but who wish to remain attached to their National Guard unit. Appendix II contains additional information about end strengths within the various reserve components and different categories. Most reservists who were called to active duty for other than normal training after September 11, 2001, were mobilized under one of the three legislative authorities listed in table 1. 
On September 14, 2001, President Bush declared that a national emergency existed as a result of the attacks on the World Trade Center in New York City, New York, and the Pentagon in Washington, D.C., and he invoked 10 U.S.C. § 12302, which is commonly referred to as the “partial mobilization authority.” On September 20, 2001, DOD issued mobilization guidance that, among a host of other things, directed the services as a matter of policy to specify in initial orders to Ready Reserve members that the period of active duty service under 10 U.S.C. § 12302 would not exceed 12 months. However, the guidance allowed the service secretaries to extend orders for an additional 12 months or remobilize reserve component members under the partial mobilization authority as long as an individual member’s cumulative service did not exceed 24 months under 10 U.S.C. § 12302. It further specified that “No member of the Ready Reserve called to involuntary active duty under 10 U.S.C. 12302 in support of the effective conduct of operations in response to the World Trade Center and Pentagon attacks, shall serve on active duty in excess of 24 months under that authority, including travel time to return the member to the residence from which he or she left when called to active duty and use of accrued leave.” The guidance also allowed the services to retain members on active duty after they had served 24 or fewer months under 10 U.S.C. § 12302 with the member’s consent if additional orders were authorized under 10 U.S.C. § 12301(d). Combatant commanders are principally responsible for the preparation and implementation of operation plans that specify the necessary level of mobilization of reserve component forces. The military services are the primary executors of mobilization. At the direction of the Secretary of Defense, the services prepare detailed mobilization plans to support the operation plans and provide forces and logistical support to the combatant commanders. 
The Assistant Secretary of Defense for Reserve Affairs, who reports to the Under Secretary of Defense for Personnel and Readiness, is to provide policy, programs, and guidance for the mobilization and demobilization of the reserve components. The Chairman of the Joint Chiefs of Staff, after coordination with the Assistant Secretary of Defense for Reserve Affairs, the secretaries of the military departments, and the commanders of the Unified Combatant Commands, is to advise the Secretary of Defense on the need to augment the active forces with members of the reserve components. The Chairman of the Joint Chiefs of Staff also has responsibility for recommending the period of service for units and members of the reserve components ordered to active duty. The service secretaries are to prepare plans for mobilization and demobilization and to periodically review and test the plans to ensure the services’ capabilities to mobilize reserve forces and to assimilate them effectively into the active forces. Within the constraints of the existing mobilization authorities and DOD guidance, the services have flexibility as to how, where, and when they conduct mobilization and demobilization processing. Unit readiness also affects time frames. For example, air reserve component units, which must be ready to deploy on short notice, generally complete their mobilization processing much more quickly than Army units that have been funded at low levels under the Army’s tiered readiness concept. However, higher-priority units may take longer to complete demobilization processing because, at the end of the processing, they must be ready to deploy on short notice again. The reserve components differ in their approaches to the mobilization and demobilization processes. The Army and Navy use centralized approaches, mobilizing and demobilizing their reserve component forces at a limited number of locations.
The Army utilizes 15 primary sites that it labels “power projection platforms” and 12 secondary sites called “power support platforms.” The Navy has 15 geographically dispersed Navy Mobilization Processing Sites but is currently using only 5 of these sites because of the relatively small numbers of personnel who are mobilizing and demobilizing. By contrast, the Air Force uses a decentralized approach, mobilizing and demobilizing its reserve component members at their home stations—135 for the Air Force Reserve and 90 for the Air National Guard. The Marine Corps uses a hybrid approach. It has five Mobilization Processing Centers to centrally mobilize individual reservists and is currently using three of these centers. However, the Marine Corps uses a decentralized approach to mobilize its units. Selected Marine Corps Reserve units do most of their mobilization processing at their home stations and then report to their gaining commands, such as the First or Second Marine Expeditionary Force located at Camp Pendleton and Camp Lejeune, respectively. Individuals usually demobilize at the same location where they mobilized and units generally demobilize at Camp Pendleton or Camp Lejeune. See appendix III for a listing of the services’ mobilization and demobilization sites. Figure 1 shows reserve component usage on a per capita basis since fiscal year 1989 and demonstrates the dramatic increase in usage that occurred after September 11, 2001. It shows that the ongoing usage—which includes support to operations Noble Eagle, Enduring Freedom, and Iraqi Freedom—exceeds the usage rates during the 1991 Persian Gulf War in both length and magnitude. While reserve component usage increased significantly after September 11, 2001, an equally important shift occurred at the end of 2002. Following the events of September 11, 2001, the Air Force initially used the partial mobilization authority more than the other services. 
However, service usage shifted in 2002, and by the end of that year, the Army had more reserve component members mobilized than all the other services combined. Since that time, usage of the Army’s reserve component members has continued to dominate DOD’s figures. On June 30, 2004, the Army had about 131,000 reserve component members mobilized while the Air Force had about 12,000, the Marine Corps about 9,000, and the Navy about 3,000. Under the current partial mobilization authority, DOD increased not only the numbers of reserve component members that it mobilized, but also the length of the members’ mobilizations. The average mobilization for Operations Desert Shield and Desert Storm in 1990-91 was 156 days. However, by December 31, 2003, the average mobilization for operations Noble Eagle, Enduring Freedom, and Iraqi Freedom was 319 days, or about double the length of mobilizations for Desert Shield and Desert Storm. By March 31, 2004, the average mobilization for the three ongoing operations had increased to 342 days, and that figure is expected to continue to rise. Section 1074f of Title 10, United States Code, required that the Secretary of Defense establish a system to assess the medical condition of members of the armed forces (including members of the reserve components) who are deployed outside of the United States, its territories, or its possessions as part of a contingency operation or combat operation. It further required that records be maintained in a centralized location to improve future access to records and that the Secretary establish a quality assurance program to evaluate the success of the system in ensuring that members receive pre- and post-deployment medical examinations and that recordkeeping requirements are met. DOD policy requires that the services collect pre- and post-deployment health information from their members and submit copies of the forms that are used to collect this information to the Army Medical Surveillance Activity (AMSA).
Initially, deployment health assessments were required for all active and reserve component personnel who were on troop movements resulting from deployment orders of 30 continuous days or greater to land-based locations outside the United States that did not have permanent U.S. military medical treatment facilities. However, on October 25, 2001, the Assistant Secretary of Defense for Health Affairs updated DOD’s policy and required deployment-related health assessments for all reserve component personnel called to active duty for 30 days or more. The policy specifically stated that the assessments were to be done whether or not the personnel were deploying outside the United States. Both assessments use a questionnaire designed to help military health care providers in identifying health problems and providing needed medical care. The pre-deployment health assessment is generally administered at the service mobilization site or unit home station before deployment, and the post-deployment health assessment is completed either in theater before redeployment to the servicemember’s home unit or shortly after redeployment. On February 1, 2002, the Chairman of the Joint Chiefs of Staff issued updated deployment health surveillance procedures. Among other things, these procedures specified that servicemembers must complete or revalidate the health assessment within 30 days prior to deployment. 
The procedures also stated that the original completed health assessment forms were to be placed in the servicemember’s permanent medical record and a copy “immediately forwarded to AMSA.” Both the pre- and the post-deployment assessments were originally two-page forms, but on April 22, 2003, the post-deployment assessment was expanded to four pages “in response to national interest in the health of deployed personnel, combined with the timing and scope of current deployments.” Both forms include demographic information about the servicemember, member-provided information about the member’s general health, and information about referrals that are issued when service medical providers review the health assessments. The pre-deployment assessment also includes a final medical disposition that shows whether the member was deployable or not, and the post-deployment assessment contains additional information about the location where the member was deployed and things that the member might have been exposed to during the deployment. Compared with the two-page post-deployment form, the four-page form captures more-detailed information on deployment locations, potentially hazardous exposures, and medical symptoms the servicemember might have experienced. It also asks a number of mental health questions. Examples of the forms can be found in appendix V. Our August 2003 report found the following: DOD’s process to mobilize reservists after September 11, 2001, had to be modified and contained numerous inefficiencies. DOD did not have visibility over the entire mobilization process primarily because it lacked adequate systems for tracking personnel and other resources. The services have used two primary approaches—predictable operating cycles and formal advance notification—to provide time for units and personnel to prepare for mobilizations and deployments.
Mobilizations were hampered because one-quarter of the Ready Reserve was not readily available for mobilization or deployment. Over 70,000 reservists could not be mobilized because they had not completed training requirements, and the services lacked information needed to fully use the 300,000 previously trained IRR members. We made a number of recommendations in our report to enhance the efficiency of DOD’s reserve component mobilizations. DOD generally concurred with the recommendations and has mobilization reengineering efforts under way to make the process more efficient. The Army has also taken steps to improve the information it maintains on IRR members. The availability of reserve component forces to meet future requirements is greatly influenced by DOD’s implementation of the partial mobilization authority and by the department’s personnel policies. Furthermore, many of DOD’s policies that affect mobilized reserve component personnel were implemented in a piecemeal manner and were focused on the short-term needs of the services and reserve component members rather than on long-term requirements and predictability. The availability of reserve component forces will continue to play an important role in the success of DOD’s missions because requirements that increased significantly after September 11, 2001, are expected to remain high for the foreseeable future. As a result, there are early indicators that DOD may have trouble providing predictable troop deployments and meeting recruiting goals for some reserve components and occupational specialties. On September 14, 2001, DOD broke with its previous pattern of invoking successive authorities by invoking a partial mobilization authority without a prior Presidential Reserve call-up. In addition, DOD was considering a change in its implementation of the partial mobilization authority.
The manner in which DOD implements the mobilization authorities currently available can result either in an essentially unlimited supply of forces or in exhausting the forces available for deployment, at least in the short term. While DOD has consistently used two mobilization authorities to gain involuntary access to its reserve component forces since 1990, the methods of using the authorities have not remained constant. On August 22, 1990, the President invoked Title 10 U.S.C. Section 673b, allowing DOD to mobilize Selected Reserve members for Operation Desert Shield. The provision was then commonly referred to as the Presidential Selected Reserve Call-up authority and is now called the Presidential Reserve Call-up authority. This authority limits involuntary mobilizations to not more than 200,000 reserve component members at any one time, for not more than 270 days, for any operational mission. On January 18, 1991, the President invoked Title 10 U.S.C. Section 673, commonly referred to as the “partial mobilization authority,” thus providing DOD with additional authority to respond to the continued threat posed by Iraq’s invasion of Kuwait. The partial mobilization authority limits involuntary mobilizations to not more than 1 million reserve component members at any one time, for not more than 24 consecutive months, during a time of national emergency. During the years between Operation Desert Shield and September 11, 2001, DOD invoked a number of separate mission-specific Presidential Reserve Call-ups for operations in Bosnia, Kosovo, Southwest Asia, and Haiti. The department did not seek a partial mobilization authority for any of these operations, and it continued to view the partial mobilization authority as the second step in a series of progressive measures to address escalating requirements during a time of national emergency.
Unlike the progressive use of mobilization authorities following Iraq’s 1990 invasion of Kuwait, after the events of September 11, 2001, the President invoked the partial mobilization authority without a prior Presidential Reserve Call-up. Since the partial mobilization for the Global War on Terrorism went into effect in 2001, DOD has used both the partial mobilization authority and the Presidential Reserve Call-up authority to involuntarily mobilize reserve component members for operations in the Balkans. The manner in which DOD implements the partial mobilization authority affects the number of reserve component forces available for deployment. When DOD issued its initial guidance concerning the partial mobilization authority in 2001, it limited mobilization orders to 12 months but allowed the service secretaries to extend the orders for an additional 12 months or remobilize reserve component members, as long as an individual member’s cumulative service under the partial mobilization authority did not exceed 24 months. Under this cumulative implementation approach, it is possible for DOD to run out of forces during an extended conflict such as the long-term Global War on Terrorism. During our review, DOD was already facing some critical personnel shortages. To expand its pool of available personnel, DOD was considering a policy shift that would have authorized mobilizations of up to 24 consecutive months under the partial mobilization authority with no limit on cumulative months. Under the considered approach, DOD would have been able to mobilize its forces for less than 24 months; send them home; and then remobilize them, repeating this cycle indefinitely and providing essentially an unlimited flow of forces. Many of DOD’s policies that affect mobilized reserve component personnel were implemented in a piecemeal manner and were not linked within the context of a strategic framework to meet the organizational goals. 
Overall, the policies reflected DOD’s past use of the reserve components as a strategic force rather than DOD’s current use of the reserve components as an operational force to respond to the increased requirements of the Global War on Terrorism. Faced with some critical shortages, DOD focused its policies on the short-term needs of the services and reserve component members rather than on long-term requirements and predictability. This approach was necessary because the department had not developed a strategic framework that identified DOD’s human capital goals necessary to meet organizational requirements. Without a strategic framework, OSD and the services made several changes to their personnel policies to increase the availability of the reserve components for the longer-term requirements of the Global War on Terrorism, and predictability declined for reserve component members. Specifically, reserve component members have faced uncertainties concerning the likelihood of their mobilizations, the length of their service commitments, the length of their overseas rotations, and the types of missions that they would be asked to perform. The partial mobilization authority allows DOD to involuntarily mobilize members of the Ready Reserve, including the IRR; but after the President invoked the partial mobilization authority on September 14, 2001, DOD and service policies encouraged the use of volunteers and generally discouraged the involuntary mobilization of IRR members. DOD officials said that they could meet requirements without using the IRR and stated that they wanted to focus involuntary mobilizations on the paid, rather than unpaid, members of the reserve components. However, our August 2003 report documented the lack of predictability that resulted from the volunteer and IRR policies. These policies were disruptive to the integrity of Army units because there was a steady flow of personnel among units.
Personnel were transferred from nonmobilizing units to mobilizing units that were short of personnel, and when the units that had supplied the personnel were later mobilized, they in turn were short of personnel and had to draw personnel from still other units. Despite the DOD and Army reluctance to use the IRR, the Chief of the Army Reserve has advocated using the IRR to cut down on the disruptive cross-leveling and individual mobilizations that have been breaking Army units. From September 11, 2001, to May 15, 2004, the Army Reserve mobilized 110,000 of its reservists, but more than 27,000 of these reservists were cross-leveled and mobilized with units that they did not normally train with. Furthermore, because the IRR makes up almost one-quarter of the Ready Reserve, policies that discourage the use of the IRR will cause members of the Selected Reserve to bear a greater share of the exposure to the hazards associated with national security and military requirements. Moreover, policies that discourage the use of the IRR could cause DOD’s pool of available reserve component personnel to shrink by more than 200,000 personnel. Since our August 2003 report, Navy and Air Force officials have stated that they still have not involuntarily mobilized any members of their IRRs. In our August 2003 report, we noted that the Air Force’s reluctance to use any of its more than 44,000 IRR members resulted in unfilled requirements for more than 9,000 personnel to guard Air Force bases. However, the Army National Guard agreed to provide personnel from its Selected Reserve units to fill these requirements. Faced with critical personnel shortages, the Army recently changed its policy and now plans to make limited use of its IRR. To date, the Marine Corps has made the most extensive use of its IRR, capitalizing on the willingness of many members to voluntarily return to active duty. At various times since September 2001, all of the services have had “stop-loss” policies in effect.
These policies are short-term measures that increase the availability of reserve component forces while decreasing predictability for reserve component members who are prevented from leaving the service at the end of their enlistment periods. Stop-loss policies are often implemented to retain personnel in critical or high-use occupational specialties. Appendix VI contains a summary of the services’ stop-loss policies that have been in effect since September 2001. The only stop-loss policy in effect when we ended our review was an Army policy that applied to units rather than individuals in critical occupations. Under that policy, Army reserve component personnel were not permitted to leave the service from the time their unit was alerted until 90 days after the date when their unit was demobilized. Because many Army units undergo several months of training after being mobilized but before being deployed overseas for 12 months, stop-loss periods can reach 2 years or more. According to Army officials, a substantial number of reserve component members have been affected by the changing stop-loss policies. As of June 30, 2004, the Army had over 130,000 reserve component members mobilized and thousands more alerted or demobilized less than 90 days earlier. Because they have remaining service obligations, many of these reserve component members would not have been eligible to leave the Army even if stop-loss policies had not been in effect. However, from fiscal year 1993 through fiscal year 2001, Army National Guard annual attrition rates exceeded 16 percent and Army Reserve rates exceeded 25 percent. Even a 16 percent attrition rate means that 20,800 of the mobilized 130,000 reserve component soldiers would have left their reserve component each year. If attrition rates exceed 16 percent or the thousands of personnel who are alerted or who have been demobilized for less than 90 days are included, the numbers of personnel affected by stop-loss policies would increase even more.
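The attrition arithmetic above can be sketched in a few lines (an illustrative calculation only; the 130,000-soldier and 16 and 25 percent figures come from the text, while the helper function and its name are ours):

```python
# Illustrative sketch of the stop-loss attrition arithmetic cited in the
# text: roughly 130,000 mobilized Army reserve component soldiers and
# the 16 percent floor of Army National Guard annual attrition rates
# for fiscal years 1993 through 2001.

def annual_losses(force_size: int, attrition_rate: float) -> int:
    """Soldiers expected to leave in one year absent stop-loss policies."""
    return round(force_size * attrition_rate)

print(annual_losses(130_000, 0.16))  # 20800, the figure cited in the text
```

At the Army Reserve's higher rate, `annual_losses(130_000, 0.25)` gives 32,500, consistent with the report's point that higher attrition rates would mean even larger numbers of personnel affected by stop-loss policies.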
When the Army’s stop-loss policies are eventually lifted, thousands of servicemembers could retire or leave the service all at once and the Army’s reserve components could be confronted with a huge increase in recruiting requirements. Following DOD’s issuance of guidance concerning the length of mobilizations in September 2001, the services initially limited most mobilizations to 12 months, and most services maintained their existing operational rotation policies to provide deployments of a predictable length that are preceded and followed by standard maintenance and training periods. However, the Air Force and the Army later increased the length of their rotations, and the Army increased the length of its mobilizations as well. These increases in the length of mobilizations and rotations increased the availability of reserve component forces but decreased predictability for individual reserve component members who were mobilized and deployed under one set of policies but later extended as a result of the policy changes. The Air Force’s operational concept prior to September 2001 was based on a rotation policy that made reserve component forces available for 3 out of every 15 months. After September 2001, the Air Force was not able to rely solely on its normal rotations and had to involuntarily mobilize large numbers of reserve component personnel. From September 11, 2001, to March 31, 2004, the Air National Guard mobilized more than 31,000 personnel, and the Air Force Reserve mobilized more than 24,000 personnel. Although most Air Force mobilizations were for 12 months or less, more than 10,000 air reserve component members had their mobilization orders extended to 24 months. Most of these personnel were in security-related occupations.
Since September 2001, the Air Force has not been able to return to its normal operating cycle, and in June 2004, the Air Force Chief of Staff announced that Air Force rotations would be increased to 4 months beginning in September 2004. Before September 2001, the Army mobilized its reserve component forces for up to 270 days under the Presidential Reserve Call-up authority, and it deployed these troops overseas for rotations that lasted about 6 months. When it began mobilizing forces under the partial mobilization authority in September 2001, the Army generally mobilized troops for 12 months. However, troops that were headed for duty in the Balkans continued to be mobilized under the Presidential Reserve Call-up authority. When worldwide requirements for both active and reserve component Army troops increased, the Army changed its Balkan rotation schedules. These schedules had been published years in advance to allow poorly resourced Guard and Reserve units time to train and prepare for the deployments. As a result of the changed schedules, some reserve component units did not have adequate time to prepare and train for Balkan rotations, deploy for 6 months, and still remain within the 270-day limit of the Presidential Reserve Call-up authority. Therefore, the Army mobilized some reserve component units under the partial mobilization authority so that they could undergo longer training periods prior to deploying for 6 months under the Presidential Reserve Call-up authority. The Army’s initial deployments to Iraq and Afghanistan were scheduled for 6 months, just like the overseas rotations for the Balkans. Eventually, the Army increased the length of its rotations to Iraq and Afghanistan to 12 months. This increased the availability of reserve component forces, but it decreased predictability for members who were mobilized and deployed during the transition period when the policy changed.
Because overseas rotations were extended to 12 months and mobilization periods must include mobilization and demobilization processing time, training time, and time for the reserve component members to take any leave that they earn, the change in rotation policy required a corresponding increase in the length of mobilizations. DOD has a number of training initiatives under way that will increase the availability of its reserve component forces to meet immediate needs. Servicemembers are receiving limited training—called “cross-training”—that enables them to perform missions that are outside their area of expertise. In the Army, field artillery and air defense artillery units have been trained to perform some military police duties. Air Force and Navy personnel received additional training and are providing the Army with additional transportation assets. DOD also has plans to permanently convert thousands of positions from low-use career fields to stressed career fields. While it remains to be seen how the uncertainty resulting from changing personnel policies will affect recruiting, retention, and the long-term viability of the reserve components, there are already indications that some portions of the force are being stressed. For example, the Army National Guard failed to meet its recruiting goal during 14 of 20 months and ended fiscal year 2003 approximately 7,800 soldiers below its recruiting goal. (Appendix VII contains additional information about reserve component recruiting results.) The Secretary of Defense established a force-planning metric to limit involuntary mobilizations to “reasonable and sustainable rates” and has set the metric for such mobilizations at 1 year out of every 6. However, on the basis of current and projected usage, it appears that DOD may face difficulties achieving its goal within the Army’s reserve components in the near term.
Since February 2003, the Army has continuously had between 20 and 29 percent of its Selected Reserve members mobilized. To illustrate, even if the Army maintained only the lower 20 percent mobilization rate, with mobilizations generally lasting a year, it would still need to mobilize one-fifth of its Selected Reserve members each year, well above the metric’s goal of 1 year out of every 6. DOD is aware that certain portions of the force are used much more heavily than others, and it plans to address some of the imbalances by converting thousands of positions from lower-demand specialties into higher-demand specialties. However, these conversions will take place over several years, and even when the positions are converted, it may take some time to recruit and train people for the new positions. It is unclear how DOD plans to address its longer-term personnel requirements for the Global War on Terrorism, given its current implementation of the partial mobilization authority. Requirements for reserve component forces increased dramatically after September 11, 2001, and are expected to remain high for the foreseeable future. In the initial months following September 11, 2001, the Air Force used the partial mobilization authority more than the other services, and it reached its peak with almost 38,000 reserve component members mobilized in April 2002. However, by July 2002, Army mobilizations surpassed those of the Air Force, and since December 2002, the Army has had more reserve component members mobilized than all the other services combined. Although many of the members who have been called to active duty under the partial mobilization authority have been demobilized, as of March 31, 2004, approximately 175,000 of DOD’s reserve component members were still mobilized and serving on active duty. According to OASD/RA data, about 40 percent of DOD’s Selected Reserve forces had been mobilized from September 11, 2001, to March 31, 2004.
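The gap between the Secretary's 1-year-in-6 metric and the Army's observed usage can be shown with a rough steady-state calculation (our simplification; the 20 to 29 percent rates come from the text, and the assumption of year-long, evenly rotated mobilizations is ours):

```python
# Rough comparison of the 1-year-in-6 force-planning metric with the
# Army's observed Selected Reserve mobilization rates of 20-29 percent
# since February 2003. Assumes steady-state, roughly year-long
# mobilizations, so the fraction of the force mobilized at any one
# time approximates the fraction mobilized each year.

def steady_state_fraction(years_mobilized: float, cycle_years: float) -> float:
    """Fraction of the force mobilized at any one time under the metric."""
    return years_mobilized / cycle_years

goal = steady_state_fraction(1, 6)      # about 0.167 (roughly 17 percent)
observed_low, observed_high = 0.20, 0.29

# Even the Army's lowest observed rate exceeds the 1-in-6 goal.
assert observed_low > goal and observed_high > goal
```

Under these assumptions, even the 20 percent low end implies mobilizing one-fifth of the Selected Reserve each year, versus the roughly one-sixth the metric allows.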
By June 30, 2004, the number of mobilized reserve component members had dropped to about 155,000—consisting of about 131,000 members from the Army, about 12,000 from the Air Force, about 9,000 from the Marine Corps, and about 3,000 from the Navy. However, the number of mobilized reserve component forces is projected to remain high for the foreseeable future. DOD projects that over the next 3 to 5 years, it will continuously have 100,000 to about 150,000 reserve component members mobilized, and the Army National Guard and Army Reserve will continue to supply most of these personnel. While Army forces may face the greatest levels of involuntary mobilizations over the next few years, all the reserve components have career fields that have been highly stressed. For example, the Navy and Marine Corps have mobilized 60 and 100 percent of their enlisted law enforcement specialists and 48 and 100 percent of their intelligence officers, respectively. The Air National Guard and Air Force Reserve mobilized 64 and 93 percent of their enlisted law enforcement specialists and 71 and 86 percent of their installation security personnel, respectively. As noted earlier, during our review, DOD was considering changing its implementation of the partial mobilization authority from its current approach, which limits mobilizations to 24 cumulative months, to an approach that would have limited mobilizations to 24 consecutive months to expand its pool of available personnel. However, in commenting on a draft of this report, DOD stated that it would retain its current cumulative implementation approach. Policies that limit involuntary mobilizations on the basis of cumulative service make it difficult for mobilization planners, who must keep track of prior mobilizations in order to determine which forces are available to meet future requirements. This can be particularly difficult now, when many mobilizations involve individuals or small detachments rather than complete units. 
In June 2004, DOD noted that about 30,000 reserve members had already been mobilized for 24 months. Under DOD's cumulative approach, these personnel will not be available to meet future requirements. The shrinking pool of available personnel, along with the lack of a strategic plan to clarify goals regarding the reserve component force's availability, will present the department with additional short- and long-term challenges as it tries to fill requirements for mobilized reserve component forces. In its comments on a draft of our report, DOD did not elaborate on how it expected to address its increased personnel requirements.

The Army was not able to efficiently execute its mobilization and demobilization plans because mobilization and demobilization site officials faced uncertainties concerning demands for facilities, turnover among support personnel, and the arrival of reserve component forces. The efficiency of the mobilization and demobilization process depends on advance planning and coordination. However, the Army's planning assumptions did not accurately portray the availability of installations and personnel needed to fully accommodate the high number of mobilizations and demobilizations. Moreover, officials did not always have adequate notice to prepare for arriving troops. The Army has several initiatives under way to improve facility and support personnel availability, but it has not taken a coordinated approach to evaluating all the support costs associated with mobilization and demobilization at alternative sites in order to determine the most efficient options under the operating environment for the Global War on Terrorism. The efficiency of the mobilization and demobilization processes depends largely on advance planning in the form of facility preparation and coordination among installation planners, support personnel, and arriving reserve component units or individuals.
The Army attempts to take the necessary planning steps to support efficient servicemember mobilization and demobilization. For example, installations that are responsible for mobilizing and demobilizing reserve component forces attempt to contact units or personnel prior to their arrival, so that both the reserve component forces and the supporting installations can be prepared to meet the Army's mobilization and demobilization requirements. During these contacts, reserve component forces are told what records and equipment to bring to the mobilization and demobilization sites, and installation officials obtain information—such as the number of arriving troops and the anticipated time of their arrival—that is necessary for them to efficiently prepare for the arrival of the reserve component forces. With this information, the installations can plan where they will house, feed, and train the troops; how they will transport the troops around the installation and to their final destinations; and when they will send the troops for medical and dental screenings and administrative processing. Army guidance, which states that units are to demobilize at the same installation where they mobilized, can add to the efficiency of the demobilization process. Efficiencies can be realized because many of the records created during the mobilization process, or copies of the records, are kept at the installation and can be used for advance preparation before the demobilizing unit arrives. Army officials told us that since September 11, 2001, most units have demobilized at the same installation where they mobilized, but there have been some exceptions. For example, officials from the First U.S. Army told us that they had mobilized a unit for Operation Iraqi Freedom at Fort Rucker, Alabama, and were demobilizing the unit at Fort Benning, Georgia.
They also told us that troops who had mobilized at Fort Stewart, Georgia, were going to be demobilizing at Fort Dix, New Jersey, after a deployment to Kosovo. To accommodate shifts in demobilization sites, the new sites must, among other things, obtain reserve component unit medical, dental, and personnel records and must coordinate the return of individual equipment, such as helmets, sleeping bags, packs, and canteens that were issued at the original mobilization site. With adequate notice and planning, alternate demobilization sites can demobilize reserve component units without any major problems. However, officials at Fort Lewis, Washington, told us that their support personnel had to reconstruct dental records for 150 soldiers in an engineer unit that had originally mobilized at Fort Leonard Wood, Missouri. Because the Army's goal is to complete demobilization processing within 5 days of a unit's arrival at a demobilization site, the Fort Lewis personnel were not able to wait for the arrival of the dental records, which had been sent from Fort Leonard Wood via routine mail rather than overnight delivery.

The Army's planning assumptions did not accurately portray the availability of installations and personnel needed to fully accommodate the high number of mobilizations and demobilizations. Specifically, planning assumptions regarding the availability of facilities for mobilization and demobilization were outdated, and the Army did not anticipate how long it would need its specially designed reserve component support units, which provide much of the medical, training, logistics, and processing support needed to mobilize and demobilize reserve component units and individuals.

The Army's planning assumptions regarding the availability of facilities for mobilization and demobilization were outdated. Consequently, installations sometimes lacked the support infrastructure needed to accommodate both active and reserve component mobilizing and demobilizing members in an equitable manner.
The Army’s mobilization and demobilization plans assumed that active forces would be deployed abroad, thus vacating installations when reserve component forces were mobilizing and often demobilizing. These assumptions are important because they served as a basis to help the Army determine which installations would have the necessary support facilities to serve as its primary and secondary mobilization sites. Most of the Army’s primary mobilization sites are installations that serve as home bases for large active Army units. For example, three of the Army’s primary sites that we visited—Fort Lewis, Washington; Fort Stewart, Georgia; and Fort Hood, Texas—are home to two active combat brigades, an active combat division, and two active combat divisions, respectively, along with hosts of other active forces. Fort Hood alone has about 42,000 active troops assigned to the installation. Under the Army’s plans, reserve component units were assigned mobilization and demobilization sites so that units could plan in advance for their mobilizations. Units often developed relationships with the installations where they expected to mobilize and in many cases the units trained at these installations. However, because active units had not vacated many of the Army’s major mobilization sites as planned, mobilizing reserve component forces were moved to sites where they had not trained and where they had not developed any relationships that could have increased the mobilizations’ efficiency. As a result, transportation distances for personnel and equipment were increased, and extra coordination was required with the mobilization sites and sometimes even within units. For example, the 116th Cavalry Brigade from the Idaho Army National Guard, which had planned to mobilize at Fort Lewis, Washington, was mobilized at Fort Bliss, Texas, because, among other things, adequate housing facilities were not available at Fort Lewis. 
Another Army National Guard brigade, which was mobilized at Fort Bragg, North Carolina, faced increased coordination challenges: because of a lack of available facilities at Fort Bragg, one of its battalions was mobilized at Fort Drum, New York, and another at Fort Stewart, Georgia. At mobilization and demobilization sites where active forces remained on the installations while reserve component forces were mobilizing or demobilizing, competing demands sometimes led to housing inequities for the reserve members. For example, at the installations we visited, single active component personnel who were permanently assigned to the installation were generally housed in barracks where two to four people shared a room, but mobilized reserve component personnel were often housed in open-bay barracks. At some installations, reserve component personnel were housed in tents, gymnasiums, or older buildings that were designed for short training periods rather than mobilization periods that could last several months. The presence of large active duty and reserve contingents on the same installations at the same time also strained training and medical facilities. Fort Hood officials said that the scheduling and rescheduling of training ranges presented major challenges during 2003, when the installation was preparing to deploy both its active divisions and a large group of reserve component forces at the same time. To address these facility challenges, the Army has begun a number of housing and facility construction and renovation projects.

The Army did not anticipate that its reserve component units that support mobilizations and demobilizations would be needed beyond 24 months under a partial mobilization authority.
When the Army created these units to provide much of the medical, training, logistics, and processing support to mobilizing and demobilizing units and individuals, it anticipated that the need for these units would be commensurate with the mobilization authority in place at the time. However, the Army is now facing support requirements for a long-term Global War on Terrorism, while being limited to involuntary mobilizations of not more than 24 cumulative months under the department’s implementation of the partial mobilization authority. The underlying assumptions of the Army’s mobilization and demobilization plans were that (1) only a small portion of these reserve component support personnel would be required to support the limited mobilizations associated with a Presidential reserve call-up and (2) all of the reserve component support personnel would be available for as long as needed to support the large mobilizations for long periods that are associated with full or total mobilizations. The Army’s plans called for these support personnel to be among the first reserve component members mobilized and the last demobilized. Army officials assumed that, under a partial mobilization authority, these reserve component support forces would be able to support large mobilizations and demobilizations, or support mobilizations for long periods, but not large mobilizations for long periods. As a result of the large requirements for the Army’s reserve component forces, many pieces of the reserve component support units were mobilized for 12 months early in the Global War on Terrorism and then later extended. Some support personnel were mobilized for 24 months under the partial mobilization authority—which, under DOD’s current implementation, limits involuntary mobilizations to 24 cumulative months—and then sent home. However, many others agreed to stay on active duty under voluntary mobilization orders after they had served 24 months under the partial mobilization authority. 
For example, from a 27-person support detachment that was mobilized for 12 months at Fort Hood in October 2001, 13 people were later extended for a full 2 years, and 6 of these reserve component personnel accepted voluntary orders at the end of their mobilizations. At Fort Lewis, two reserve component support detachments—one with 59 personnel and the other with 17—were mobilized in September 2001. Both detachments served on active duty for 2 full years. In July 2004, more than 1,100 reserve component support personnel were on voluntary orders or mobilization extensions. Even though some reserve component support personnel have voluntarily extended their orders, the Army is facing a shortage of mobilization and demobilization support personnel because the Global War on Terrorism is lasting beyond the time when most reserve component support personnel would reach their 24-month mobilization points. Consequently, the Army has begun hiring civilian and contractor replacement personnel to provide medical, training, logistics, and administrative support at its mobilization and demobilization sites.

Planners and the installations that mobilize and demobilize reserve component forces have not always had adequate notice to prepare for arriving troops. Without advance notice, officials at these sites are forced to make last-minute adjustments that may result in the inefficient use of installation facilities and support personnel. In March 2003, our prior report highlighted problems associated with the lack of advance notice. While officials at the installations we visited noted that the level of advance notice had improved significantly for mobilizing troops, they still faced some short-notice mobilizations. According to Army officials, the Army is currently providing 30 days' notice to all involuntarily mobilized troops.
However, as of May 2004, some units were still being mobilized under the partial mobilization authority with less than 30 days' advance notice. According to Army Reserve officials, each member of these units signs a volunteer waiver stating that he or she agrees to be mobilized with less than 30 days' advance notice. Therefore, the Army does not violate its policy concerning advance notice for involuntary mobilizations. Installation planning officials told us that they typically receive shorter notice and less definitive information concerning the arrival of demobilizing troops. Typically, when an installation mobilizes a reserve component unit, the installation planner records the length of the unit's mobilization orders. Depending on the length of the orders and the resulting time available for leave at the end of them, installation planners begin to anticipate the return of the unit up to several months before the unit's orders expire. The planners said that they use a variety of formal and informal means to try to ascertain the specific arrival dates and times for demobilizing troops but that the arrival dates and times are often uncertain right up until the time the troops arrive, because their different sources sometimes provide conflicting information. The planners generally begin their search for information about units returning to their installation using the automated systems within DOD's Joint Operations Planning and Execution System. A primary source of information is the time-phased force and deployment data (TPFDD). Installation planning officials told us that the TPFDD is most valuable in providing them with information on large units with orders that have not changed and that return as complete units.
However, the planners stated that it is not uncommon for the TPFDD to be incorrect or outdated because changes are constantly being made to redeployment schedules, particularly for small units or individuals. One source of such last-minute changes stems from changes in travel arrangements. According to DOD officials, when there are empty seats available on planes departing the theater of operations, small units are often placed on the planes at the last minute to fill the empty seats. However, these changes are not always captured in the TPFDD or DOD’s other automated systems. For example, while we were visiting Fort Lewis, planning officials were trying to determine which unit or units might be returning to Fort Lewis to go through demobilization processing along with the 502nd Transportation Company and 114th Chaplain detachment that were scheduled to arrive on March 1, 2004. Neither the TPFDD nor the other automated tracking systems that were available to planning officials at Fort Lewis provided definitive answers. As a result of contacts through informal channels, at 11:20 a.m. on March 1, 2004, Fort Lewis officials thought that 21 people from the 854th Quartermaster Unit were going to arrive at McChord Air Force Base—located adjacent to Fort Lewis, just south of Tacoma, Washington—40 minutes later. Due to the lack of reliable information, Fort Lewis officials could not finalize planning arrangements. For example, because they did not know whether to expect male or female soldiers, they could not finalize housing plans for the soldiers. Nor did they know whether the unit was bringing weapons with them or what types of weapons they might have, and thus transportation personnel and personnel in the arms room at Fort Lewis were placed on standby. A check with McChord officials at 11:50 a.m. revealed that there were no inbound flights. At 3:53 p.m. Fort Lewis officials had confirmation that the soldiers would be arriving at 9:35 p.m. 
and that there were 19 additional personnel from an unknown unit or units on the plane with the 21 soldiers from the 854th Quartermaster unit. By 4:12 p.m. on March 1, 2004, the Fort Lewis officials had canceled the scheduled demobilization processing times for the 854th because information showed that the unit would not arrive until 7:42 a.m. on the following day, March 2, 2004. Planning officials had to make several other adjustments to planned schedules before the Quartermaster unit finally arrived. Moreover, the 502nd Transportation Company and 114th Chaplain detachment, which had been visible through DOD’s formal systems, also arrived later than the expected March 1 date. Sometimes, planning officials receive information from informal sources, such as family members of deployed personnel. During our visit to Fort Lewis, officials had begun tracking an inbound Army National Guard military police unit on the basis of information received from an informal information source. This unit became visible to the planning officials when the wife of one of the soldiers, who also served as the unit’s family readiness coordinator, notified the officials that her husband and 11 other unit personnel had left Iraq, were in Germany, and were scheduled to fly to Washington state on a commercial airliner the next day. The coordinator also provided the Fort Lewis officials with the names and social security numbers for all 12 returning soldiers. According to Fort Lewis officials, in the past, 2 out of every 10 units have arrived at the site without notification. The demobilization planning officials at Fort Lewis summed up their visibility situation by stating, “Most valuable information on unit redeployment is not official, rather it is word of mouth.” Demobilization officials at other installations said that they also had good visibility over large units that returned as planned but said that it was difficult to plan for the arrival of small units and individuals. 
During our visit to Fort McCoy, Wisconsin, 28 soldiers—a 9-soldier unit and a 19-soldier unit—arrived at the site unexpectedly. In addition, officials at Fort Hood said that they were able to track the evacuation of medical patients from the theater to stabilization hospitals, such as the Walter Reed Army Medical Center in Washington, D.C., or Brooke Army Medical Center in Texas, but that they often lost visibility of the patients during the last leg of their journey back to Fort Hood. They also said that visibility was sometimes a problem for individual soldiers who had reached the end of their enlistments or mobilization orders and were returning as individuals on "freedom flights," because the automated tracking systems were designed primarily to handle units and not individuals.

Without updating its planning assumptions regarding the availability of facilities for mobilization and demobilization, the Army has begun a number of costly short- and long-term efforts to address facility and support personnel shortfalls at individual mobilization and demobilization sites. Furthermore, the Army has not taken a coordinated approach to evaluating all the support costs associated with mobilization and demobilization at alternative sites in order to determine the most efficient options under the operating environment for the Global War on Terrorism. The use of civilian and contractor personnel to provide mobilization and demobilization support may not provide cost-effective alternatives to some reserve component support personnel. To address housing and other facilities shortages at mobilization and demobilization sites, the Army has embarked on a number of facility construction and renovation projects without updating its planning assumptions regarding the availability of facilities and personnel. As a result, the Army risks spending money inefficiently on projects that may not be located where the need is greatest.
Until the Army updates its planning assumptions, it cannot determine whether the current primary and secondary mobilization sites are the best sites for future mobilizations and demobilizations. The Army has a variety of individual construction and renovation plans under way. For example, Fort Hood has a $5.1 million project to renovate the open-bay, cinder block barracks that have been used to house reserve component soldiers at North Fort Hood. Fort Stewart has a similar project under way to renovate National Guard barracks to current mobilization standards. Fort Stewart has also submitted plans to build a new facility to house its reserve component members with medical problems. The Army has also developed a plan to construct several new buildings that would be used to house active and reserve component soldiers who are undergoing training. In addition, these facilities would be available for use when reserve component units were mobilizing and demobilizing. This project has not yet been funded or approved by Army leadership. However, possible sites for these buildings include Fort Lewis, Washington; Fort Hood, Texas; Fort Bliss, Texas; Fort Carson, Colorado; Fort Polk, Louisiana; Fort Riley, Kansas; and Fort Stewart, Georgia. The construction of some of these facilities could begin as early as 2006. However, a recent GAO review found that DOD's efforts to improve facility conditions are likely to take longer than expected because of competing funding pressures. The review also found that without periodic reassessments of project prioritization, projects that are important to an installation's ability to accomplish its mission and improve servicemembers' quality of life could continually be deferred. The Army also plans to make greater use of Camp Shelby, Mississippi, a secondary mobilization site owned by the state of Mississippi.
Because this site does not have active troops and has a large housing capacity, the Army plans to use this site to relieve immediate pressures on its primary mobilization sites. However, Camp Shelby’s facilities are not new, and they are in need of repairs. Housing units are made of cinder block, have no heating or air conditioning, and were not designed for year-round accommodations. According to officials from the U.S. Army Forces Command, Camp Shelby will require $22 million in federal funding for renovations. Key officials at the mobilization and demobilization sites we visited expressed a number of concerns about the availability of civilian or contractor personnel and the abilities of these personnel to provide capable, flexible replacements for the reserve component support personnel at a reasonable cost. In addition, the Army has not fully analyzed the costs of hiring these civilian and contractor personnel at its existing mobilization sites compared with the costs and feasibility of hiring support personnel at an alternative set of mobilization and demobilization sites. At Fort Stewart, Georgia, officials said that there is a very small civilian population in the area from which to draw replacement personnel. They also noted that the rural nature of the area and lack of cultural amenities makes it difficult to attract physicians and other highly paid specialists who support the mobilization and demobilization process. Officials at Fort Lewis had already replaced many of their medical support personnel at the time of our visit but acknowledged that even with the large population of the Seattle-Tacoma area to draw upon, they were still facing challenges in the hiring of physician assistants and nurse practitioners. 
The commander of the hospital at Fort Hood said that the hospital had issued a contract to try to fill its nurse shortage, but the only result from the contract was that civilian nurses at the hospital left the hospital to work for a contractor that paid them more. Thus, the net result was that the hospital did not fill its shortages, and it kept the same nurses but paid the contractor more for their services. Even when civilian or contractor personnel are available to replace reserve component personnel, the replacements may not be able to provide the same capability or flexibility as reserve component support personnel. During our visit to Fort Hood, officials told us that over the past 10 years, the Army had repeatedly looked at the option of using civilian or contractor medical evacuation teams to replace reserve component support personnel. However, the option has not been adopted because the civilians would not be able to fly into live-fire training areas or under blackout conditions without costly Army flight training. Fort Lewis officials raised similar concerns about the limited abilities of civilian helicopter rescue teams during our prior review. In addition, officials at mobilization and demobilization sites said that reserve component support personnel provided them with great flexibility in dealing with the unexpected arrival of mobilizing or demobilizing soldiers. Reserve component personnel are technically available 24 hours per day, 7 days per week. Therefore, processing could be scheduled for any hour and any day without regard to overtime considerations. During our visits, we observed several cases where civilian personnel left their processing sites at the end of their scheduled workday but reserve component personnel stayed until all processing was completed. 
In addition to the civilian replacements for reserve component medical support personnel, the Army is looking for replacements for the reserve component personnel who performed administrative processing, logistics, training, and other support functions within its garrison support units. The Army's Installation Management Agency (IMA) is working with the Army Contracting Agency to develop short- and long-term replacement solutions. The long-term solution is an "Indefinite Delivery/Indefinite Quantity" contract that will allow installation commanders to place task orders to hire or contract workers for particular support functions. According to contracting officials, this contract will be awarded on or about October 1, 2004. IMA is programmed to receive $238 million for this contract in fiscal year 2005. By July 2004, IMA had received $56 million and had allocated $48.4 million to 12 different mobilization sites to cover the transition period until the long-term contract is in place. This interim funding can be used to expand existing installation support contracts or to hire temporary workers. In addition, the Army is keeping over 1,100 reserve component members on active duty to help cover the transition period.

DOD's ability to effectively manage the health status of its reserve component members is limited because (1) its centralized database has missing and incomplete health records and (2) it has not maintained full visibility over reserve component members with medical issues. During our review of health data collected at the Army Medical Surveillance Activity (AMSA), DOD's central data collection point, we found that the database had missing and incomplete records. Not all of the required health information collected from reserve component members had reached AMSA. Furthermore, only some of the health assessment information that had reached AMSA had been entered into the centralized database.
DOD policy guidance issued in October 2001 directed the services to submit pre- and post-deployment health forms to AMSA, but not all of the required health information collected from reserve component members during their mobilization and demobilization processing has reached DOD's central collection activity at AMSA. Table 2 compares the number of personnel who were mobilized from September 11, 2001, to March 30, 2004, with the number of pre-deployment health assessments submitted to AMSA from November 1, 2001—the first month when health assessments were required for all mobilizing and demobilizing reserve component members—to March 31, 2004. The differences between the mobilization numbers and the pre-deployment health assessment numbers indicate that assessment forms may be missing for members of all six of DOD's reserve components. However, because the mobilization and health assessment data cover slightly different time periods and come from different sources, we could not determine the exact extent of the mismatch. When we investigated the cause of the large difference in the Marine Corps numbers, officials told us that the Marine Corps' guidance did not require them to submit pre-deployment health assessments to AMSA. The officials cited guidance, in the form of two Marine Corps administrative messages, that directed responsible officials to submit post-deployment health assessments to AMSA. However, the administrative messages neglect to direct the officials to submit pre-deployment health assessments, and no additional administrative messages have addressed the requirement for pre-deployment assessments. As a result, the AMSA database contained only 2,104 pre-deployment health assessments but 11,499 post-deployment health assessments for Marine Corps reservists.
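The kind of cross-check behind Table 2 amounts to comparing two counts per reserve component. A minimal sketch, using invented component names and counts rather than the report's figures (and, as the report cautions, a mismatch is only an indicator of missing forms because the two data sources cover slightly different periods):

```python
# Invented counts: members mobilized vs. pre-deployment assessments received
# at AMSA, per reserve component. These are placeholders, not GAO data.
mobilized = {"Component A": 150_000, "Component B": 12_000}
assessments = {"Component A": 140_000, "Component B": 2_100}

def submission_gaps(mobilized, assessments):
    """Flag components whose assessment count falls short of mobilizations.
    The gap is indicative only, not an exact count of missing forms."""
    gaps = {}
    for component, count in mobilized.items():
        submitted = assessments.get(component, 0)
        if submitted < count:
            gaps[component] = count - submitted
    return gaps

print(submission_gaps(mobilized, assessments))
# {'Component A': 10000, 'Component B': 9900}
```

A component with a large gap relative to its size (like "Component B" here) is the sort of outlier that prompted GAO's follow-up questions about the Marine Corps figures.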
Another possible reason why the Marine Corps has not submitted pre-deployment health assessments to AMSA is that the Marine Corps lacks a mechanism for overseeing the submission of these forms. There is no current Marine Corps requirement for tracking and reporting the submission of these forms in the Deployment Health Quality Assurance program. In a March 12, 2004, memorandum to the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness, the Marine Corps reported the number and percentage of post-deployment health assessments that were completed but did not report any information on pre-deployment assessments. Officials at Camp Lejeune told us that they would begin submitting pre-deployment health assessments to AMSA after we raised the issue during a 2004 site visit and after subsequent Navy Department guidance was issued. Officials told us that the Marine Corps Medical Office had drafted new guidance to address this requirement, but the guidance had not been issued by the time we drafted our report in July 2004, and we were not able to determine the cause of the delay or to verify that the new guidance would adequately address the submission of pre-deployment health assessments. Navy health assessment submissions to AMSA also appear to be incomplete. According to Navy procedures, all mobilizing reservists are to complete their pre-deployment health assessment at their local reserve center before they report to their Navy Mobilization Processing Sites. In such cases, the reserve center is required to send the reservists' completed pre-deployment health assessment forms to AMSA. Therefore, Navy data collection is done centrally at the Navy Mobilization Processing Sites only in the limited cases when a reservist arrives without a completed pre-deployment health assessment. We did not visit any individual Navy Reserve centers to verify the submission of pre-deployment health assessments.
We did review Navy Quality Assurance program guidance and found that it does not address the submission of pre-deployment health assessments. However, the guidance specifies that a 90 percent submission rate is considered satisfactory for post-deployment health assessments. In September 2003, we reported similar findings for the active forces. Specifically, we found that DOD did not maintain a complete, centralized database of active servicemember health assessments and immunizations. Following our 2003 review, DOD established a deployment health quality assurance program to improve data collection and accuracy. The department’s first annual report documenting issues relating to deployment health assessments will not be available until February 2005, and it is too early to determine the extent to which the new quality assurance program will provide effective oversight to address data submission problems from each of the services and their reserve components. While the services are not in complete compliance with the requirement to submit pre- and post-deployment assessments to AMSA, the number of assessments in the database has grown significantly. According to AMSA officials, the database contained about 140,000 assessments at the end of 1999, grew to about 1 million assessments by May 2003, and reached 1,960,125 by June 2004. Not all the records in the AMSA database contained complete information, however, thus limiting the amount of meaningful analysis that can be conducted. Records in the database sometimes did not include key information, such as information that could be used to identify the causes of various medical problems. Nonetheless, the available data indicate that the overall pre- and post-deployment health status of mobilized reserve component members was good.
For example, records were sometimes missing information on the servicemember’s deployability and the specific types of medical referrals given to members who received them. Almost 6 percent of the nearly 240,000 pre-deployment health assessments we reviewed did not have the servicemember’s deployability status recorded in the AMSA database. As shown in table 3, the missing data ranged from less than 4 percent for the Army National Guard to almost 18 percent for the Naval Reserve. For the remaining records with the deployability status recorded, 93 percent of the servicemembers were deployable. Nondeployable rates ranged from less than 1 percent in the Air National Guard to more than 9 percent in the Army Reserve. Other data showed that most of the nondeployable personnel had medical conditions that clearly made them nondeployable and did not require medical referrals. According to medical officials, some of these personnel, such as those who had suffered multiple heart attacks, should have been discharged before they received their mobilization orders. Others had temporary conditions, such as broken bones and pregnancies, that did not warrant medical discharges but made them nondeployable at the time of their assessment. Detailed referral information could assist the services in determining and addressing the factors that cause reserve component members to be nondeployable; however, these data were often missing from AMSA’s database. About 99 percent of the pre- and post-deployment assessments we reviewed showed whether reserve component members had been given a medical referral, but fewer than 44 percent of the records with referrals contained detailed information about the type of referral that was given to the member (eye, ear, cardiac, mental health, etc.).
One reason for the incomplete health assessment records we found at AMSA at the time of our data draw in March 2004 is that some of the health assessments were entered into AMSA’s database by hand. According to the officer in charge of AMSA, records in the database with detailed referral data had been submitted electronically rather than as paper copies, which the installations are required to forward to the centralized database. Generally, electronic data are sent to AMSA after being collected in one of two ways: (1) from applications that are available at Army installations and over the Internet and (2) on stand-alone laptop computers and hand-held personal digital assistant units, which collect data in the theater and elsewhere. All electronic data are transmitted to AMSA and updated immediately upon receipt. Because of workload demands, when paper forms were received at AMSA, database personnel captured only a data element indicating whether a referral was needed, not the specific type of referral indicated. In addition, when there was a backlog of four-page paper post-deployment health assessments to be entered into the database, data entry personnel were entering only the first and last pages of the form and not the middle two pages. Because of this, at various times the data collected from servicemembers may not have been available for analysis. However, as of June 2004, the officer in charge of AMSA said that AMSA had no backlog of paper forms to be entered into the centralized database and had 15 people working full-time to process pre- and post-deployment health assessment forms. Furthermore, he estimated that by the end of July 2004, AMSA would be caught up on entering the middle pages of the post-deployment health assessments that had been skipped earlier. Still, there is a delay between receipt of the form and its entry into the database.
The AMSA Chief said the paper forms take approximately 1 week for processing, scanning, and data entry. All of the reserve components have the capability to submit the health assessments electronically, including detailed medical referral information. Many Army and Air Force servicemember health assessments are now transmitted electronically, and detailed information is captured in the database from those forms. The Army has been sending electronic health assessment data for active and reserve servicemembers to AMSA since July 2003. Although the Army is capable of transmitting all of its forms electronically, only about 52 percent of its forms submitted from January 1, 2003, to May 3, 2004, had been submitted electronically. The Air Force began sending electronic data to AMSA in June 2004. The Navy and Marine Corps have established a working group that is currently evaluating several options and developing an implementation plan. DOD established a Deployment Health Task Force to make recommendations by late April 2004 on completing all pre- and post-deployment health assessments electronically, and the task force is continuing its work to expedite and monitor progress toward the electronic capture of deployment health assessment forms. Even though electronic submission of the health assessment forms from the mobilization and demobilization sites to AMSA’s centralized database would expedite the inclusion of key data for meaningful analysis, increase the accuracy of the reported information, and lessen both the burden on sites of forwarding paper copies and the likelihood of lost information, DOD has not set a timeline for the services to submit the health assessment forms electronically to the centralized database. Table 4 shows that 98 percent of the reserve component members reported that they were in good to excellent health when they completed their pre-deployment health assessments.
The Army Reserve had the lowest percentage—97 percent—of servicemembers who considered themselves in good to excellent health. Table 4 also shows that the total referral rate that resulted from the pre-deployment health assessments was 5 percent but ranged from 1 percent for the Air National Guard to 6 percent for the Army Reserve. Table 5 shows that even after deployment, a high percentage of reserve component members thought they were in good to excellent health. However, a comparison of table 4 with table 5 shows that the numbers had generally declined from pre-deployment levels. In particular, the percentage of personnel who rated their health as good to excellent declined from 98 percent to 93 percent. The Army Reserve had the lowest percentage of servicemembers who considered themselves in good to excellent health during their post-deployment assessments—89 percent—while the Air National Guard and Air Force Reserve had the highest percentage of servicemembers who considered themselves in good to excellent health after deployment—98 percent. Moreover, the percentage of medical referrals jumped to 21 percent on the post-deployment health assessments. A comparison of tables 4 and 5 shows that the referral rate that resulted from post-deployment assessments was more than four times the 5 percent referral rate from pre-deployment assessments. There were also differences among the services: reserve component personnel from the Army and Marine Corps received higher referral rates than those in the Air Force and the Navy, as would be expected for ground forces. The percentages ranged from 8 percent for the Air National Guard to 30 percent for the Army Reserve. Table 6 shows that when reserve component members completed their post-deployment health assessments, almost half of them chose the same category to characterize their overall health as they had chosen on their pre-deployment health assessment.
The table shows that almost 14 percent of the personnel who completed both pre- and post-deployment health surveys believed that their health had improved enough to warrant recharacterizations of their original assessments. The table also shows that 39 percent of the personnel who completed both the pre- and post-deployment health surveys reported that their health had declined between the assessments. Reserve component personnel from the Army and Marine Corps experienced larger declines than those from the Navy and Air Force. Some of the services could not maintain visibility over reserve component members with medical issues because they could not adequately track those personnel, which contributed to problems for those members. In the Army, the lack of visibility over reservists with medical issues resulted in housing and pay problems for some personnel. The Air Force has also lost visibility of some reservists with medical issues, which has resulted in lengthy periods of time without resolution of their medical issues. Reserve component personnel who have been involuntarily mobilized, along with members who are voluntarily serving on active duty, may experience medical problems for a variety of reasons. Some are injured during combat operations; others become injured or sick during the course of their training or routine duties; and others have problems that are identified during medical appointments, physicals, or health assessments and other medical screenings. Our review focused on reserve component members with medical problems that were expected to keep them from being returned to full duty or from being demobilized within 30 days. This group contained reserve component members with a wide variety of injuries and ailments.
During our visits to mobilization and demobilization sites, we spoke with reserve component members who had suffered heart attacks or combat wounds, as well as with members with knee and ankle injuries, diabetes, chronic back pain, and mental health problems. The services have used different policies and procedures to accommodate involuntarily mobilized reserve component personnel who have long-term medical problems. In some cases, the services have left the members on their original mobilization orders and then extended those orders as necessary. In other cases, the services have switched the members to voluntary orders or offered the members the option to leave active duty and have their medical conditions cared for through the Department of Veterans Affairs. The dramatic increase in the use of the reserve components has led to a corresponding increase in the number of reserve component members on active duty with medical problems. For example, our analysis of data from the more than 239,500 pre-deployment health assessments collected in the AMSA database from November 2001 through March 2004 showed that over 15,100 members, or almost 7 percent, were not deployable; almost 14,800 of these members came from the Army’s reserve components. Prior to a change in Army policy in October 2003, personnel who were mobilized and found to be nondeployable were kept on active duty until (1) their medical problems had been resolved and they were returned to full duty or (2) they had been referred to a medical board process and discharged from the Army. (See appendix VIII for additional information on the services’ medical evaluation boards.) As a result of its October 2003 policy change, the Army was able to demobilize personnel who were found to be nondeployable within the first 25 days of their mobilizations.
This policy change helped to reduce the inflow of reserve component personnel on active duty with medical problems who were identified during the pre-deployment health-screening process. However, the reserve component members who were already on active duty with medical problems that had been identified during the pre-deployment health-screening process were not demobilized when the policy changed. In addition, significant numbers of reserve component personnel continued to experience medical problems as a result of (1) injuries or illnesses that occurred after the members had been mobilized for 25 days and (2) problems that were identified during their post-deployment health assessments. As a result, on July 14, 2004, the Army still had over 4,000 reserve component personnel on active duty with medical problems. Although Army officials said that these soldiers’ primary responsibility was to attend their medical treatment so they could get well, many of the soldiers did not require daily medical treatment. As a result, these soldiers often perform other work, ranging from temporary details maintaining base facilities to longer-term jobs such as working at mobilization processing sites or as mechanics in installation motor pools. Initially, issues associated with the care of Army personnel with medical problems were usually dealt with at the Army installation where the servicemember was mobilized or demobilized and at nearby medical treatment facilities. As the numbers of reserve component personnel with medical problems increased, the Army found that it had difficulty maintaining visibility of such personnel, resulting in some housing, pay, and other problems for the personnel. For example, at Fort Stewart, Georgia, reserve component soldiers with medical problems were being housed in open-bay, cinder block barracks that did not have heating or air conditioning.
In addition, shower and bathroom facilities were in separate, nearby buildings. These facilities normally housed National Guard personnel during their 2-week annual training periods. Following media attention to these conditions, the Under Secretary of Defense for Personnel and Readiness issued a memorandum in October 2003 that established housing standards for personnel with medical problems. During our visit to Fort Stewart in November 2003, we found that the soldiers with medical problems were being housed in accordance with the updated standards, which required climate-controlled quarters that included integrated bathroom facilities. The Army also created a servicewide medical-status tracking system during the summer of 2003. This system generates regular weekly reports on the numbers of reserve component members on active duty with medical problems, their locations, and the length of time that they have been receiving medical care. Following up on allegations in 2003 that medical treatment was taking too long and that soldiers were missing their scheduled medical appointments, investigators at Fort Stewart also found that case managers were needed to track the care of the soldiers with medical problems and that a command structure was needed to manage the other needs and duties of these personnel. At the time of our visit, Fort Stewart had 15 case managers in place, and a new command and control structure had been set up to manage the soldiers with medical problems. However, officials told us that they still faced challenges with the management and care of these soldiers because the group was so large. On November 19, 2003, there were 661 reserve component members with medical problems at Fort Stewart; as of July 14, 2004, there were 349 members. The lack of visibility and tracking also caused problems for members with medical problems at Fort Lewis, Washington.
Army procedures called for reserve component members on involuntary mobilization orders to be switched over to voluntary active duty medical extension orders after a long-term medical problem had been identified. The administrative process for issuing these active duty medical extensions was cumbersome, and mechanisms were not in place to effectively track requests for these extensions, which had to be submitted from the units with servicemembers experiencing medical problems to a central office in the Pentagon. When we visited Fort Lewis in March 2004, we found that medical extension orders had expired for 19 of 84 personnel in the medical hold unit. When a servicemember’s orders expire, the member’s pay stops and the member’s dependents lose their health care coverage. After our visit to Fort Lewis, the Army changed its policy concerning active duty medical extensions. On March 6, 2004, the Assistant Secretary of the Army for Manpower and Reserve Affairs issued a policy that provides installations with the ability to issue voluntary orders for up to 180 days for reserve component members with medical problems without going through the cumbersome active duty medical extension process. While the authority to issue these voluntary orders has been delegated to the installation level, the Army is still maintaining visibility over its reserve component personnel with medical problems because these personnel are assigned to units that must report their personnel numbers on a weekly basis. In the Air Force, a lack of central visibility of some reserve component personnel with medical problems who are serving on active duty has resulted in delayed resolution to their medical problems. The Air Force does have central visibility over reserve component personnel with medical problems who remain on their original mobilization orders or receive extensions to those orders. However, the Air Force also allows personnel with medical problems to switch over to voluntary orders. 
These orders are issued by the Air Force’s major commands. The Air Force can track the number of orders issued and the number of days covered by these orders, but it does not have a mechanism in place to track the numbers of personnel who have medical problems and are serving under these orders. As with many of the reserve component personnel in the Army’s medical hold and holdover units, many of the air reserve component personnel with medical problems are still able to perform significant amounts of work while undergoing their medical treatment or medical discharge processing. While the reservists experiencing medical problems whom we interviewed did not identify any difficulties with their housing or their orders, they did identify problems with the amount of time it was taking to resolve their medical issues, much like the problems identified at Fort Stewart prior to the deployment of case managers to that location. At one of the sites we visited, an Air Force reservist told us that he had been in a medical status on voluntary orders for 18 months and did not expect resolution of his case anytime soon. The extent to which such a problem is commonplace is unknown, given the inability of the Air Force to track such personnel. As the Global War on Terrorism enters its fourth year, DOD officials have made it clear that they do not expect the war to end anytime soon. Furthermore, indications exist that certain components and occupational specialties are being stressed, and the long-term impact of this stress on recruiting and retention is unknown. Moreover, although DOD has a number of rebalancing efforts under way, these efforts will take years to implement.
Because this war is expected to last a long time and requires far greater reserve component personnel resources than any of the smaller operations of the previous two decades, DOD can no longer afford policies that are developed piecemeal to maximize short-term benefits; it must have an integrated set of policies that address both the long-term requirements for reserve component forces and individual reserve component members’ needs for predictability. For example, service rotation policies are directly tied to other personnel policies, such as policies concerning the use of the IRR and the extent of cross-training. Policies to fully utilize the IRR would increase the pool of available servicemembers and would thus decrease the length of time each member would need to be deployed for a static requirement. Policies that encourage the use of cross-training for lesser-utilized units could also increase the pool of available servicemembers and decrease the length of rotations. Until DOD addresses its personnel policies within the context of an overall strategic framework, it will not have clear visibility over the forces that are available to meet future requirements. In addition, it will be unable to provide reserve component members with clear expectations of their military obligations and the increased predictability that DOD has recognized as a key factor in retaining reserve component members who are seeking to successfully balance their military commitments with family and civilian employment obligations. The Army’s mobilization and demobilization plans contained outdated assumptions about the location of active duty forces during reserve mobilizations and demobilizations. As a result, facilities were not always available to equitably support active and reserve component forces that were collocated on bases that serve as mobilization and demobilization sites.
Until the Army updates the assumptions in its mobilization and demobilization plans and thereby recognizes that active and reserve component forces are likely to need simultaneous support at Army installations within the United States, it may not be able to adequately address the support needs of both its active and reserve component forces. The Army has a number of uncoordinated efforts under way to correct the facility infrastructure shortage that has developed. However, these projects are being conducted without consideration of the long-term requirements and associated costs. In addition, when the Army created medical, training, logistics, and administrative support units that relied heavily on reserve component members, it did not anticipate that it would have to support long-term mobilization requirements for a Global War on Terrorism under a partial mobilization authority. As a result, the reserve component force cannot continue to support mobilizations as DOD currently implements the partial mobilization authority, and the Army is now planning to rely on civilians and contractors. However, the Army has not determined the costs and availability of these civilian and contractor personnel. Until the Army makes these determinations, it cannot plan to conduct future mobilizations and demobilizations in the most efficient manner. DOD’s ability to effectively manage the health status of reserve component members has been hampered by a lack of complete information and the inability to track servicemembers with health issues. For example, the AMSA database does not contain a large number of health assessment records for the Marine Corps and lacks complete information from some of the health assessment records that were submitted to the database in a nonelectronic format. Consequently, the deployability status and related health problems of some reserve component members could not be determined.
Until the Marine Corps addresses its data submission problems with updated guidance and a mechanism to oversee the submission of health assessments to the centralized database and until DOD establishes a timeline for the military departments to submit health assessments electronically, DOD and the services will continue to face difficulties in determining and addressing the factors that cause reserve component members to be nondeployable. Moreover, until the Air Force develops a mechanism to track its reserve component members who are on voluntary active duty orders with health problems, it cannot determine whether these personnel are having their health problems addressed in a timely manner. Furthermore, the treatment of the nation’s reserve component members who have served their country and experienced medical problems while on active duty is an important issue for DOD to address. Until DOD gains visibility over the status of all of its reserve component personnel on active duty with medical problems, it cannot effectively oversee their situations and deploy, demobilize, or discharge them. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in concert with the service secretaries and Joint Staff, to take the following two actions: develop a strategic framework that sets human capital goals concerning the availability of its reserve component forces to meet the longer-term requirements of the Global War on Terrorism under various mobilization authorities and identify personnel policies that should be linked within the context of the strategic framework. 
We recommend that the Secretary of Defense direct the Secretary of the Army to take, within the context of establishing DOD’s strategic framework for force availability, the following two actions: update mobilization and demobilization planning assumptions to reflect the new operating environment for the Global War on Terrorism—long-term requirements for mobilization and demobilization support facilities and personnel and the likelihood that active forces will continue to rotate through U.S. bases while reserve component forces are mobilizing and demobilizing; and develop a coordinated approach to evaluate all the support costs associated with mobilization and demobilization at alternative sites—including both facility (construction, renovation, and maintenance) and support personnel (reserve component, civilian, contractor, or a combination) costs—to determine the most efficient options, and then update the list of primary and secondary mobilization and demobilization sites as necessary. We also recommend that the Secretary of Defense take the following four actions: direct the Commandant of the Marine Corps to issue updated mobilization guidance that specifically lists the requirement to submit pre-deployment health assessments to AMSA; direct the Commandant of the Marine Corps to establish a mechanism for overseeing submission of pre- and post-deployment assessments to the centralized database; direct the Under Secretary of Defense for Personnel and Readiness, in concert with the service secretaries, to set a timeline for the military departments to electronically submit pre- and post-deployment health assessments; and direct the Secretary of the Air Force to develop a mechanism for tracking reserve component members who are on voluntary active duty orders with medical problems. In written comments on a draft of this report, DOD generally concurred with our recommendations.
The Department specifically concurred with our recommendations to (1) update Army mobilization and demobilization planning assumptions to reflect the new operating environment for the Global War on Terrorism; (2) develop a coordinated approach to evaluate all the support costs associated with Army mobilizations and demobilizations at alternative sites—including both facility and support personnel costs—to determine the most efficient options, and then update the list of primary and secondary mobilization and demobilization sites as necessary; (3) issue updated Marine Corps mobilization guidance that specifically lists the requirement to submit pre-deployment health assessments to AMSA; (4) set a timeline for the military departments to electronically submit pre- and post-deployment health assessments; and (5) develop a mechanism for tracking Air Force reserve component members who are on voluntary active duty orders with medical problems. DOD partially concurred with our other three recommendations. In partially concurring with our recommendation concerning the development of a strategic framework, DOD stated that it has a strategic framework for setting human capital goals, which was established through its December 2002 comprehensive review of active and reserve force mix, its January 2004 force rebalancing report, and other planning and budgeting guidance. However, DOD agreed that it should review and, as appropriate, update its strategic framework. Although the documents cited by DOD lay some of the groundwork needed to develop a strategic framework, these documents do not specifically address how DOD will integrate and align its personnel policies, such as its stop-loss and IRR policies, to maximize its efficient usage of reserve component personnel to meet its overall organizational goals.
In partially concurring with our recommendation to identify personnel policies that should be linked within the context of a strategic framework, DOD stated that its September 20, 2001, personnel and pay policy and its July 19, 2002, addendum established personnel policies associated with its strategic framework. DOD also stated that the department should review, and as appropriate, update these policies. We agree that the Office of the Secretary of Defense has issued personnel policies and various guidance and reports concerning its reserve components. However, the policies cited by DOD pre-date the 2002 comprehensive review and 2004 force rebalancing report that were cited as part of the department’s strategic framework. The strategic framework should be established prior to the creation of personnel policies. We continue to believe that DOD’s policies were implemented in a piecemeal manner and focused on short-term needs. For example, our report details service changes to policies concerning the use of the IRR, mobilization lengths, deployment lengths, and service obligations. In partially concurring with our recommendation concerning oversight of the Marine Corps’ pre- and post-deployment health assessments, DOD stated that system improvements are ongoing and that electronic submission of pre- and post-deployment health assessments is possible and highly desirable but may not be practical for every Marine Corps deployment. However, our recommendation was directed at oversight of health assessments regardless of how the assessments are submitted—in paper or electronic form. We continue to believe that the Marine Corps needs to establish a mechanism for overseeing the submission of its pre- and post-deployment health assessments. The other services have established such mechanisms as part of their quality assurance programs. 
Finally, in commenting on a draft of this report, DOD stated that after reviewing its implementation of the partial mobilization authority, it decided to retain its “24-cumulative month” policy. DOD noted that it had identified significant problems with changing to a 24-consecutive-month approach but did not elaborate on those problems. The final decision concerning the implementation of the partial mobilization authority was not made until after our review ended, and the decision was counter to the decision expected by senior personnel we met with during the course of our review. As noted in our report, with a 24-cumulative-month interpretation of the partial mobilization authority, DOD risks running out of forces available for deployment, at least in the short term. Regardless of DOD’s interpretation of the partial mobilization authority, the department needs to have a strategic framework to maximize the availability of its reserve component forces. For example, usage of the more than 250,000 IRR members can affect rotation policies because the use of these reservists would increase the size of the pool from which to draw mobilized reservists. Therefore, without a strategic framework setting human capital goals, how DOD will continue to meet its large requirements for the Global War on Terrorism remains to be seen. We have modified our report to recognize the decision that DOD made regarding its implementation of the partial mobilization authority. DOD’s comments on our recommendations are included in this report in appendix IX. DOD also provided other relevant comments on portions of the draft report and technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Chairman of the Joint Chiefs of Staff; and the Director, Office of Management and Budget. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-5559 or [email protected] or Brenda S. Farrell, Assistant Director, at (202) 512-3604 or [email protected]. Others making significant contributions to this report are included in appendix X. To determine how the Department of Defense's (DOD) implementation of the partial mobilization authority and its personnel policies affect reserve component force availability, we reviewed and analyzed the mobilization authorities that are available under current law, along with personnel policies from the services and Office of the Secretary of Defense. We also collected and analyzed data on DOD's historical usage of the reserve components and its usage of these forces since September 11, 2001. We analyzed usage trends since the 1991 Persian Gulf War and compared usage rates across services, reserve components, and occupational specialties. We also reviewed DOD documents that addressed the projected future use of reserve component forces and plans to mitigate the high usage of forces within certain occupational specialties. We analyzed the structure of the reserve component forces and evaluated the effects of utilizing or excluding members of the Individual Ready Reserve from involuntary call-ups. We discussed the implementation of mobilization authorities and the effects of various personnel policies with responsible officials from the Joint Chiefs of Staff, Washington, D.C.; Assistant Secretary of Defense for Reserve Affairs, Washington, D.C.; Assistant Secretary of the Army for Manpower and Reserve Affairs; U.S. Army Forces Command, Fort McPherson, Georgia; Air Force Reserve Command, Robins Air Force Base, Georgia; Commandant, Marine Corps (Manpower, Plans, and Policy), Quantico Marine Corps Base, Virginia; and U.S.
Army Reserve Command, Fort McPherson, Georgia. During our visits to mobilization and demobilization sites, we also interviewed reserve component members concerning the length of their mobilizations, deployments, and service commitments. To determine how efficiently the Army executed its mobilization and demobilization plans, we interviewed senior and key mobilization officials involved with the mobilization and demobilization processes to document their roles and responsibilities and collect data about the processes. We visited selected sites where the Army conducts mobilization and demobilization processing. At those sites, we observed mobilization and demobilization processing and interviewed responsible Army officials as well as soldiers being processed for mobilization and demobilization at those sites. We collected and analyzed cost data for facility renovation and construction projects. We also collected and analyzed available cost information on the contracts to replace reserve component members with civilian and contractor personnel. Finally, we documented problems that the installations had tracking the arrival of mobilizing and demobilizing troops through their automated systems. We visited five mobilization and demobilization sites. These sites included four installations that supported both active and reserve component troops and one site that supported only reserve component troops. Four of the sites were among the largest in terms of the numbers of reserve component members mobilized and demobilized. One was among the smallest. Specifically, we visited the following sites: Fort Stewart, Georgia; Fort Hood, Texas; Fort McCoy, Wisconsin; Fort Lewis, Washington; and Fort McPherson, Georgia. We also interviewed Army officials from the following locations: U.S. Army Forces Command, Fort McPherson, Georgia; First U.S. Army, Fort Gillem, Georgia; Fifth U.S.
Army, Fort Sam Houston, Texas; Army Installation Management Activity, Arlington, Virginia; and Army Contracting Agency, Fort McPherson, Georgia. As requested, we also visited sites where the other services conducted mobilization and demobilization processing, but we did not report on the efficiency of the other services’ processes because the numbers of reserve component members who were mobilizing and demobilizing through these sites were insufficient for us to draw any conclusions about the services’ processes. Specifically, we interviewed responsible officials and observed ongoing mobilizations and demobilizations at the following sites: Quantico Marine Corps Base, Virginia; Camp Lejeune Marine Corps Base, North Carolina; Dobbins Air Reserve Base, Georgia; Dover Air Force Base, Delaware; and Navy Mobilization Processing Site Norfolk, Virginia. At some of the demobilization locations, we observed reservists receiving medical, legal, and family support briefings, and interviewed some individuals who had been demobilized, including some on medical extensions. We also walked through and compared facilities used to house active and reserve component personnel, specifically focusing on the facilities used to house personnel with medical problems. We interviewed appropriate officials about facility capacities, and gathered and analyzed information about facility renovations and new construction projects. We obtained and reviewed additional documentation such as mobilization orders, activation checklists, and demobilization processing checklists. We also collected and analyzed reserve component mobilization data, flowcharts, reports, plans, directives, manuals, instructions, and administrative guidance. We reviewed relevant GAO reports and contacted other audit and research organizations regarding their work in the area. 
We reviewed congressional testimony by Navy officials in which they described steps planned by the Navy to improve its demobilization process, and we followed up on the status of those planned steps with officials at the Navy Mobilization Processing Site Norfolk, Virginia. To examine the extent to which DOD can effectively manage the health status of its mobilized reserve component members, we collected and analyzed data from a variety of sources throughout DOD. We tracked weekly data from the Office of the Assistant Secretary of Defense for Reserve Affairs (OASD/RA), which showed the numbers of Army, Navy, Air Force, and Marine Corps personnel on medical extensions, and the numbers of Army personnel in medical statuses. We also collected, tracked, and analyzed data from the Army's Office of the Surgeon General. These data showed the numbers of reserve component personnel in medical statuses by installation and by time spent in a medical status. We also reviewed the Army's projected medical status numbers, the Army's plans to mitigate future problems, and reports on the lessons that were learned from the medical-related problems that occurred at Fort Stewart during 2003. We also obtained and analyzed information from the Office of the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness, Deployment Health Support Directorate. We collected and reviewed the services' medical instructions, memoranda, and policies. In addition, we interviewed personnel responsible for the processing, review, and collection of the deployment health assessments at the mobilization and demobilization sites visited. We compared information about the services' medical and physical evaluation board processes. We discussed these medical issues with responsible officials from the Office of the Assistant Secretary of Defense for Reserve Affairs; U.S. Army Medical Department, Army Medical Command, Washington, D.C.; U.S. Army Forces Command, Fort McPherson, Georgia; First U.S.
Army, Fort Gillem, Georgia; Fifth U.S. Army, Fort Sam Houston, Texas; U.S. Army Medical Command, Fort Sam Houston, Texas; Walter Reed Army Medical Center, Washington, D.C.; Winn Army Community Hospital, Fort Stewart, Georgia; Darnall Army Community Hospital, Fort Hood, Texas; Madigan Army Medical Center, Fort Lewis, Washington; Fort McCoy, Wisconsin; Quantico Marine Corps Base, Virginia; Camp Lejeune Marine Corps Base, North Carolina; Navy Mobilization Processing Site, Norfolk, Virginia; Headquarters, United States Air Force Military Policy Division, Air National Guard, Washington, D.C.; Air Force Medical Operations Agency, Washington, D.C.; and Dobbins Air Reserve Base, Georgia. We also interviewed reserve component members who were in medical status at the mobilization and demobilization sites visited. We interviewed hospital commanders and their staff, case managers, medical liaison officers, and officials from the services' Surgeons General Offices. We interviewed the Chief of the Army Medical Surveillance Activity (AMSA). We discussed the information in the consolidated health assessment database and obtained selected data from all the reserve component member pre- and post-deployment health assessments that were completed from October 25, 2001, when assessments became mandatory for all mobilized reserve component members, through March 2004. The data we obtained contained health assessment records for 290,641 reserve component members. For 122,603 members, we obtained only pre-deployment health assessments; for 51,047 members, we obtained only post-deployment health assessments; and for 116,991 members, we obtained both pre- and post-deployment health assessments. We analyzed the data that we obtained to determine referral, deployability, and exposure rates. We also analyzed data on the self-reported general health of the reserve component members and compared the data from pre-deployment assessments with the data from post-deployment assessments.
We also analyzed the month-by-month flow of forms to the AMSA to see if the services had been submitting the forms as required. We compared elapsed times between pre- and post-deployment assessments. We conducted cross tabulations of the data to identify relationships between various variables such as the overall health status, deployability, and referral variables. All of our analyses compared data across the reserve components to look for differences or trends. We assessed the reliability of reserve component mobilization, demobilization, and general usage data supplied by OASD/RA by (1) reviewing existing information about the data and the systems that produced them and (2) interviewing agency officials knowledgeable about the data. We also compared the data with data supplied to us by the services. Our assessment of the AMSA data was even more rigorous and included electronic testing of relevant data elements and discussions with knowledgeable officials about not only the procedures for collecting the data but also the procedures for coding the data. As a result of our assessments, we determined that the data were sufficiently reliable for the purposes of this report. We conducted our review from November 2003 through July 2004 in accordance with generally accepted government auditing standards. Tables 7 and 8 show information about the Ready Reserve and its subcategories. Table 7 shows that the strength of the Ready Reserve declined steadily from fiscal year 1993 to fiscal year 2003, but the strength of the Selected Reserve remained fairly steady from fiscal year 1998 to fiscal year 2003 after declining by more than 170,000 personnel from fiscal year 1993 to fiscal year 1998. The Selected Reserve is the portion of the Ready Reserve that participates in regular training. Table 8 shows the relative sizes of the reserve components at the end of fiscal year 2003.
The Army’s reserve components are larger than those of the other services and are expected to remain so for the foreseeable future. Fort Carson, Colorado. Fort Benning, Georgia. Fort Stewart, Georgia. Fort Riley, Kansas. Fort Campbell, Kentucky. Fort Polk, Louisiana. Fort Bragg, North Carolina. Fort Dix, New Jersey. Fort Drum, New York. Fort Sill, Oklahoma. Fort Bliss, Texas. Fort Hood, Texas. Fort Eustis, Virginia. Fort Lewis, Washington. Fort McCoy, Wisconsin. Fort Rucker, Alabama. Fort Huachuca, Arizona. Camp Roberts, California. Gowen Field, Idaho. Camp Atterbury, Indiana. Fort Knox, Kentucky. Aberdeen Proving Ground, Maryland. Camp Shelby, Mississippi. Fort Leonard Wood, Missouri. Fort Buchanan, Puerto Rico. Fort Jackson, South Carolina. Fort Lee, Virginia. Navy Mobilization Processing Site New London, Connecticut. Navy Mobilization Processing Site Seattle, Washington. Navy Mobilization Processing Site Gulfport, Mississippi. Navy Mobilization Processing Site Jacksonville, Florida. Navy Mobilization Processing Site Norfolk, Virginia. Navy Mobilization Processing Site Pensacola, Florida. Navy Mobilization Processing Site Port Hueneme, California. Navy Mobilization Processing Site Washington, D.C. Navy Mobilization Processing Site Memphis, Tennessee. Navy Mobilization Processing Site London, United Kingdom. Navy Mobilization Processing Site Pearl Harbor, Hawaii. Navy Mobilization Processing Site San Diego, California. Navy Mobilization Processing Site Great Lakes, Illinois. Navy Mobilization Processing Site Camp Lejeune, North Carolina. Navy Mobilization Processing Site Camp Pendleton, California. Camp Pendleton, California (Used to mobilize and demobilize units and individuals for worldwide usage). Camp Lejeune, North Carolina (Used to mobilize and demobilize units and individuals for worldwide usage). Marine Corps Base Quantico, Virginia (Primarily used to mobilize and demobilize individual reservists for duty in the Washington, D.C. Metro area). 
Marine Corps Air Station Miramar, California. Marine Corps Air Station Cherry Point, North Carolina. Maxwell Air Force Base, Alabama. Little Rock Air Force Base, Arkansas. Davis-Monthan Air Force Base, Arizona. Luke Air Force Base, Arizona. Beale Air Force Base, California. March Air Reserve Base, California. Travis Air Force Base, California. Vandenberg Air Force Base, California. Peterson Air Force Base, Colorado. Schriever Air Force Base, Colorado. Dover Air Force Base, Delaware. Eglin Air Force Base, Florida. Homestead Air Reserve Base, Florida. MacDill Air Force Base, Florida. Patrick Air Force Base, Florida. Dobbins Air Reserve Base, Georgia. Robins Air Force Base, Georgia. Andersen Air Force Base, Guam. Scott Air Force Base, Illinois. Grissom Air Reserve Base, Indiana. McConnell Air Force Base, Kansas. Barksdale Air Force Base, Louisiana. New Orleans Air Reserve Station, Louisiana. Hanscom Air Force Base, Massachusetts. Westover Air Reserve Base, Massachusetts. Andrews Air Force Base, Maryland. Selfridge Air National Guard Base, Michigan. Minneapolis-Saint Paul International Airport Air Reserve Station, Minnesota. Whiteman Air Force Base, Missouri. Columbus Air Force Base, Mississippi. Keesler Air Force Base, Mississippi. Pope Air Force Base, North Carolina. Seymour Johnson Air Force Base, North Carolina. Offutt Air Force Base, Nebraska. McGuire Air Force Base, New Jersey. Kirtland Air Force Base, New Mexico. Fort Hamilton, New York. Niagara Falls International Airport Air Reserve Station, New York. Wright-Patterson Air Force Base, Ohio. Youngstown Air Reserve Station, Ohio. Tinker Air Force Base, Oklahoma. Portland International Airport, Oregon. Pittsburgh International Airport Air Reserve Station, Pennsylvania. Willow Grove Air Reserve Station, Pennsylvania. Charleston Air Force Base, South Carolina. Shaw Air Force Base, South Carolina. Brooks Air Force Base, Texas. Fort Worth Naval Air Station Joint Reserve Base, Texas. Lackland Air Force Base, Texas.
Laughlin Air Force Base, Texas. Randolph Air Force Base, Texas. Hill Air Force Base, Utah. Langley Air Force Base, Virginia. Norfolk Naval Air Station, Virginia. Fairchild Air Force Base, Washington. McChord Air Force Base, Washington. General Mitchell Air Reserve Base, Wisconsin. Eielson Air Force Base, Alaska. Kulis Air National Guard Base, Alaska. Birmingham International Airport, Alabama. Montgomery Regional Airport, Alabama. Fort Smith Regional Airport, Arkansas. Little Rock Air Force Base, Arkansas. Phoenix Sky Harbor International Airport, Arizona. Tucson International Airport, Arizona. Channel Islands Air National Guard Station, California. Fresno Air Terminal, California. March Air Reserve Base, California. Moffett Federal Airfield, California. Buckley Air Force Base, Colorado. Bradley Air National Guard Base, Connecticut. New Castle County Airport, Delaware. Jacksonville International Airport, Florida. Robins Air Force Base, Georgia. Savannah International Airport, Georgia. Andersen Air Force Base, Guam. Hickam Air Force Base, Hawaii. Des Moines International Airport, Iowa. Sioux City Airport, Iowa. Gowen Field, Idaho. Greater Peoria Airport, Illinois. Scott Air Force Base, Illinois. Springfield Capital Airport, Illinois. Fort Wayne International Airport, Indiana. Terre Haute International Airport, Indiana. Forbes Field, Kansas. McConnell Air Force Base, Kansas. Standiford Field, Kentucky. New Orleans Naval Air Station, Louisiana. Barnes Air National Guard Base, Massachusetts. Otis Air National Guard Base, Massachusetts. Andrews Air Force Base, Maryland. Martin State Airport, Maryland. Bangor International Airport, Maine. Selfridge Air National Guard Base, Michigan. W.K. Kellogg Airport, Michigan. Duluth Air National Guard International Airport, Minnesota. Minneapolis-Saint Paul International Airport, Minnesota. Lambert-Saint Louis International Airport, Missouri. Rosecrans Memorial Airport, Missouri. Jackson International Airport, Mississippi.
Key Field, Mississippi. Great Falls International Airport, Montana. Charlotte-Douglas International Airport, North Carolina. Hector International Airport, North Dakota. Lincoln Municipal Airport, Nebraska. Pease Air National Guard Base, New Hampshire. Atlantic City Municipal Airport, New Jersey. McGuire Air Force Base, New Jersey. Kirtland Air Force Base, New Mexico. Reno Cannon International Airport, Nevada. F.S. Gabreski Airport, New York. Hancock Field, New York. Niagara Falls International Airport, New York. Stewart Air National Guard Base, New York. Stratton Air National Guard Base, New York. Mansfield Lahm Airport, Ohio. Rickenbacker Air National Guard Base, Ohio. Springfield-Beckley Municipal Airport, Ohio. Toledo Express Airport, Ohio. Tulsa International Airport, Oklahoma. Will Rogers Air National Guard Base, Oklahoma. Klamath Falls International Airport, Oregon. Portland International Airport, Oregon. Harrisburg International Airport, Pennsylvania. Pittsburgh International Airport, Pennsylvania. Willow Grove Air Reserve Station, Pennsylvania. Luis Munoz Marin International Airport, Puerto Rico. Quonset State Airport, Rhode Island. McEntire Air National Guard Station, South Carolina. Joe Foss Field, South Dakota. McGhee Tyson Air National Guard Base, Tennessee. Memphis International Airport, Tennessee. Nashville International Airport, Tennessee. Ellington Field, Texas. Fort Worth Naval Air Station Joint Reserve Base, Texas. Kelly Air Force Base, Texas. Salt Lake City International Airport, Utah. Richmond International Airport, Virginia. Burlington International Airport, Vermont. Camp Murray, Washington. Fairchild Air Force Base, Washington. General B. Mitchell Air National Guard Base, Wisconsin. Truax Field, Wisconsin. Eastern West Virginia Regional Airport, West Virginia. Yeager Air National Guard Airport, West Virginia. Cheyenne Air National Guard, Wyoming. 
On September 14, 2001, the Secretary of Defense delegated his stop-loss authority to the service secretaries. This authority allows the services to retain both active and reserve component members on active duty beyond the end of their obligated service. Reserve component members who are affected by the order generally cannot retire or leave the service until authorized by competent authority. Each of the services has exercised its stop-loss authority on different occasions and for different military occupational specialties. The Army issued a stop-loss message on December 4, 2001, imposing stop-loss on several active component skill-based specialties. As the needs of the Army changed, the number of occupational specialties expanded and then contracted, and included the reserve components as well as the Army's active forces. The Army ended its specialty-based stop-loss on November 13, 2003. The Army's current stop-loss policy, which affects active and reserve component forces, is unit-based rather than occupational specialty driven. Significant stop-loss policy changes that affected the Army's reserve component forces are listed below. January 2002. The stop-loss policy already in effect for the active component is expanded to include soldiers in the Ready Reserve. Soldiers with 23 different occupational specialties, including special forces, civil affairs, psychological operations, certain aviation categories, mortuary affairs, and maintenance, are affected. February 2002. The Army expands its stop-loss policy for the active and reserve components, adding 38 occupational specialties to the stop-loss program. The new categories include military police, military intelligence specialties and technicians, comptrollers, foreign area officers (Eurasia, Middle East/North Africa), contract and industrial management, additional aviator specialties, criminal investigators, and linguists. June 2002.
The Army expands and retracts its stop-loss policy for the active and reserve components. New occupational specialties affected include information operations, strategic intelligence, various field artillery and air defense specialties, explosive ordnance disposal, and unmanned aerial vehicle operators. Soldiers in the foreign area officer (Eurasia) and select intelligence specialties were released from the stop-loss policy. November 2002. The Army ends its skill-based stop-loss policy for the Ready Reserve and Guard forces. The new stop-loss policy is unit based, beginning when the unit is alerted until 90 days after the end of the unit's mobilization. February 2003. The Army expands stop-loss to include active component units identified for deployment in support of Operation Iraqi Freedom. November 2003. The Army again issues unit stop-loss for active forces and cancels the occupational specialty stop-loss restrictions that had been issued since February 2003. (There were several stop-loss changes issued between February 2003 and November 2003, but these changes were focused on active forces.) The unit stop-loss policies for reserve component forces have remained continuously in effect since they were instituted in 2002. The Navy exercised its stop-loss authority on September 28, 2001, by imposing stop-loss on several occupational specialties. Unlike the Army, the Navy's initial stop-loss policy affected both active and reserve component forces. The Navy's significant stop-loss policy changes are listed below. September 2001. The Navy issues a stop-loss policy for a variety of officer and enlisted occupational specialties and subspecialties, to include personnel in special operations/special warfare, security, law enforcement, cryptology, and explosive ordnance disposal, as well as selected physicians, nurses, and linguists. March 2002. The Navy modifies its existing stop-loss policy, adding new specialties and removing others.
After the changes, selected linguists and personnel in security, law enforcement, and cryptology were subject to the stop-loss restriction. August 2002. The Navy ends its stop-loss policy. The Air Force exercised its stop-loss authority on September 22, 2001, by imposing a servicewide stop-loss on all Air Force personnel. Unlike the Army, the Air Force's initial policy affected active, reserve, and Air National Guard members. The Air Force's significant stop-loss policy changes are listed below. September 2001. The Air Force implements a servicewide stop-loss policy. January 2002. The Air Force releases 64 occupational specialties from the general stop-loss. Specialties that still fall under the limitations of the stop-loss policy include selected pilots, navigators, intelligence specialists, weather specialists, security personnel, engineers, communications specialists, selected health care providers, lawyers, chaplains, aircrew operators, aircrew protection personnel, command and control specialists, fuel handlers, logisticians and supply specialists, selected maintenance providers, and investigators. June 2002. The Air Force exempts additional occupational specialties from the general stop-loss. Specialties that remain under the limitations of the stop-loss policy include selected pilots, navigators, security personnel, aircrew operators, command and control specialists, intelligence specialists, aircrew protection, and fuel handlers. March 2003. The Air Force announces that effective May 2, 2003, stop-loss will be expanded to cover a total of 99 occupational specialties. Specialties that are affected by the stop-loss policy include selected pilots, navigators, command and control specialists, intelligence specialists, security personnel, engineers, selected health care providers, investigators, aircrew operators, aircrew protection personnel, communications specialists, logisticians and supply specialists, and fuel handlers. May 2003.
The Air Force modifies its stop-loss policy, releasing about half of the previously selected occupational specialties. The list of specialties still affected by the stop-loss includes selected pilots, navigators, intelligence specialists, security forces, special investigators, aircrew operators, fuel handlers, and maintenance personnel. June 2003. The Air Force ends its stop-loss policy. The Marine Corps exercised its stop-loss authority for selected active and reserve Marines in January 2002. Specific policies varied as to their applicability to active and reserve forces; however, expansion of the stop-loss policy eventually covered all Marines. The Marine Corps' significant stop-loss policy changes are listed below. January 2002. The Marine Corps implements a specific stop-loss authority for Marines with C-130 specialties to assist in Operation Enduring Freedom. This stop-loss authority includes Marines in the reserve component. January 2003. The Marine Corps implements a general stop-loss policy for all Marines, regardless of component. Marine Corps reservists cannot be extended beyond the completion of 24 cumulative months of activated service. Furthermore, the first general officer in a Marine's chain of command can exempt Marines from the stop-loss policy. May 2003. The Marine Corps lifts its stop-loss policy. The services use recruiting and retention strategies together to achieve their programmed end strengths. If retention is better than expected in a particular year, then the reserve components may achieve their desired end strengths without achieving their recruiting goals. While the services can effectively meet their yearly programmed end strengths through a wide range of recruiting and retention combinations, long-term overreliance on either recruiting or retention can eventually cause negative impacts for a service or service component.
A service or component that repeatedly misses its recruiting goals will need to retain a higher-than-planned percentage of its personnel each year. This will eventually lead to a force that is out of balance. Either too many people will be promoted, leaving the component with too many senior personnel and not enough junior personnel, or promotion rates will decline. Decreased promotion rates tend to lead to increased attrition rates, which would lead to end strength problems if a component were already having problems meeting its recruiting goals. Appendix VI showed that the services have employed a variety of stop-loss policies since September 11, 2001. Because these policies artificially inflate retention rates, recruiting figures rather than retention or end strength figures may be the best indicator of whether the components will face difficulties meeting their future programmed end strengths. Table 10 shows historical recruiting results. It shows that all the reserve components met their recruiting goals in fiscal year 2002. But it shows that the Army National Guard fell far short of its goal in fiscal year 2003 and was falling far short of its fiscal year 2004 monthly goals through May 2004. This dramatic drop in recruiting results occurred as the Army was significantly increasing its involuntary mobilizations of Army National Guard combat forces. The improving job market in the United States may make it even more difficult for the Army National Guard to achieve its recruiting objectives over the next few years. DOD's Physical Disabilities Evaluation System consists of four main elements: (1) medical evaluation by Medical Evaluation Boards (MEBs), (2) physical disability evaluation by Physical Evaluation Boards (PEBs), (3) servicemember counseling, and (4) final disposition by appropriate personnel authorities. Figure 2 shows the steps of the disabilities evaluation system, which will eventually lead to one of two outcomes.
Servicemembers will either be returned to duty or they will be discharged from their military service. Members who are discharged sometimes, but not always, receive disability compensation. Reserve component personnel who have been involuntarily mobilized, along with members who are voluntarily serving on active duty, may end up with medical problems for a variety of reasons. Some are injured during combat operations; others become injured or sick during the course of their training or routine duties; and others have problems that are identified during medical appointments, physicals, or medical screenings. Servicemembers on active duty or in the Ready Reserve are eligible for referral into the Disability Evaluation System when they are unable to reasonably perform the military duties of their office, grade, rank, or rating as a result of a diagnosed medical condition. Servicemembers who have been diagnosed with medical conditions that may render them unfit for military service enter into medical treatment programs. The initial stage of the process, when medical professionals are diagnosing servicemembers' problems, determining courses of treatment, and evaluating the effectiveness of the ongoing treatments, is often the most time-consuming portion of the medical process. According to service officials, this initial phase is intentionally long to give servicemembers a good chance to get well and return to full duty. If, however, the servicemembers have not returned to full duty within 1 year of their diagnoses, or if prior to a year they reach a point where they have achieved the maximum recovery expected and additional treatment is not expected to materially affect their condition, their medical status and duty limitations will be documented and their cases referred to an MEB.
The MEB documents full clinical information on all medical conditions and states whether each condition is cause for referral into the Disability Evaluation System. The duty-related impairment MEB package should include a medical history; records from physical examinations; records of medical tests and their results; and documentation of medical and surgical consultations, diagnoses, treatments, and prognoses. If the servicemember meets retention standards, the disability processing ends with the MEB. If the MEB concludes that the servicemembers do not meet retention standards, the members' cases are referred to the PEB to determine fitness for duty and possible entitlement to benefits. The first step in the PEB process is referral of the cases to informal PEBs that review documents from the MEB and other administrative documents without the presence of the servicemember. The informal PEB then issues its initial findings and recommendations. If servicemembers are found to be fit for duty, the disability processing ends with the informal PEB. If servicemembers are found to be unfit for duty, they may request to personally appear before the PEB during formal PEB hearings. Servicemembers who do not agree with the decisions of the formal PEB have an additional opportunity to appeal the decisions. When a physician initiates an MEB, the processing time should normally not exceed 30 days from the date the MEB report is initiated to the date it is received by the PEB. For cases where reserve component members are referred solely for a fitness determination on a non-duty-related condition, the processing time for conducting an MEB or physical examination should not exceed 90 days. When the PEB receives the MEB or physical examination report, the processing time to the date of the final disposition of the reviewing authority should normally be no more than 40 days. All servicemembers who enter the Disability Evaluation System receive counseling.
Counselors inform the servicemembers of the sequence and nature of the steps in the process, statutory and regulatory rights, the effects of findings and recommendations, and the servicemember’s recourse in the case of an unfavorable finding. It is not within the mission of the military departments to retain members on active duty or in the Ready Reserve to provide prolonged, definitive medical care when it is unlikely the member will return to full military duty. Servicemembers should be referred into the Disability Evaluation System as soon as the probability that they will be unable to return to full duty is ascertained and optimal medical treatment benefits have been reached.

In addition to the individual named above, Kenneth F. Daniell, Michael J. Ferren, Christopher R. Forys, Jim Melton, Kenneth E. Patton, Gary W. Phillips, Jennifer R. Popovic, Sharon L. Reid, Irene A. Robertson, Nicole Volchko, and Robert K. Wild also made significant contributions to the report.

Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-990T. Washington, D.C.: July 20, 2004. Reserve Forces: Observations on Recent National Guard Use in Overseas and Homeland Missions and Future Challenges. GAO-04-670T. Washington, D.C.: April 29, 2004. Defense Infrastructure: Long-term Challenges in Managing the Military Construction Program. GAO-04-288. Washington, D.C.: February 24, 2004. Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-413T. Washington, D.C.: January 28, 2004. Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-89. Washington, D.C.: November 13, 2003. Defense Health Care: Quality Assurance Process Needed to Improve Force Health Protection and Surveillance. GAO-03-1041. Washington, D.C.: September 19, 2003.
Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003. Military Personnel: DOD Actions Needed to Improve the Efficiency of Mobilizations for Reserve Forces. GAO-03-921. Washington, D.C.: August 21, 2003. Homeland Defense: DOD Needs to Assess the Structure of U.S. Forces for Domestic Military Missions. GAO-03-670. Washington, D.C.: July 11, 2003. Defense Health Care: Army Has Not Consistently Assessed the Health Status of Early-Deploying Reservists. GAO-03-997T. Washington, D.C.: July 9, 2003. Defense Infrastructure: Changes in Funding Priorities and Management Processes Needed to Improve Condition and Reduce Costs of Guard and Reserve Facilities. GAO-03-516. Washington, D.C.: May 15, 2003. Homeland Defense: Preliminary Observations on How Overseas and Domestic Missions Impact DOD Forces. GAO-03-677T. Washington, D.C.: April 29, 2003. Defense Health Care: Army Needs to Assess the Health Status of All Early-Deploying Reservists. GAO-03-437. Washington, D.C.: April 15, 2003. Military Treatment Facilities: Eligibility Follow-up at Wilford Hall Air Force Medical Center. GAO-03-402R. Washington, D.C.: April 4, 2003. Military Personnel: Preliminary Observations Related to Income, Benefits, and Employer Support for Reservists during Mobilizations. GAO-03-549T. Washington, D.C.: March 19, 2003. Military Personnel: Preliminary Observations Related to Income, Benefits, and Employer Support for Reservists during Mobilizations. GAO-03-573T. Washington, D.C.: March 19, 2003. Defense Health Care: Most Reservists Have Civilian Health Coverage but More Assistance Is Needed When TRICARE Is Used. GAO-02-829. Washington, D.C.: September 6, 2002. Reserve Forces: DOD Actions Needed to Better Manage Relations between Reservists and Their Employers. GAO-02-608. Washington, D.C.: June 13, 2002. Wartime Medical Care: DOD Is Addressing Capability Shortfalls, but Challenges Remain. 
GAO/NSIAD-96-224. Washington, D.C.: September 25, 1996. Reserve Forces: DOD Policies Do Not Ensure That Personnel Meet Medical and Physical Fitness Standards. GAO/NSIAD-94-36. Washington, D.C.: March 23, 1994. Defense Health Care: Physical Exams and Dental Care Following the Persian Gulf War. GAO/HRD-93-5. Washington, D.C.: October 15, 1992.
Over 335,000 reserve members have been involuntarily called to active duty since September 11, 2001, and the Department of Defense (DOD) expects future reserve usage to remain high. This report is the second in response to a request for GAO to review DOD's mobilization and demobilization process. This review specifically examined the extent to which (1) DOD's implementation of a key mobilization authority and personnel policies affect reserve force availability, (2) the Army was able to execute its mobilization and demobilization plans efficiently, and (3) DOD can manage the health of its mobilized reserve forces. DOD's implementation of a key mobilization authority to involuntarily call up reserve component members, together with its personnel policies, greatly affects the number of reserve members available to fill requirements. Involuntary mobilizations are currently limited to a cumulative total of 24 months under DOD's implementation of the partial mobilization authority. Faced with some critical shortages, DOD changed a number of its personnel policies to increase force availability. However, these changes addressed immediate needs and did not take place within a strategic framework that linked human capital goals with DOD's organizational goals to fight the Global War on Terrorism. DOD was also considering a change in its implementation of the partial mobilization authority that would have expanded its pool of available personnel. This policy revision would have authorized mobilizations of up to 24 consecutive months without limiting the number of times personnel could be mobilized, thus providing an essentially unlimited flow of forces. In commenting on a draft of this report, DOD stated that it would retain its current cumulative approach, but it did not elaborate in its comments on how it expected to address its increased personnel requirements.
The Army was not able to efficiently execute its mobilization and demobilization plans because the plans contained outdated assumptions concerning the availability of facilities and support personnel. For example, the plans assumed that active forces would be deployed abroad, vacating facilities as reserves mobilized and demobilized; however, reserve forces were used earlier, and active forces often had not vacated the facilities. As a result, some units were diverted away from their planned mobilization sites, and disparities in housing accommodations existed between active and reserve forces. Efficiency was also lost when short notice hampered coordination efforts among planners, support personnel, and mobilizing or demobilizing reserve forces. To address shortages in housing and other facilities, the Army has embarked on several construction and renovation projects without updating its planning assumptions regarding the availability of facilities. As a result, the Army risks spending money inefficiently on projects that may not be located where the need is greatest. Further, the Army has not taken a coordinated approach to evaluating all the support costs associated with mobilization and demobilization at alternative sites in order to determine the most efficient options for the Global War on Terrorism. DOD's ability to effectively manage the health status of its reserve forces is limited because its centralized database has missing and incomplete health records and because DOD has not maintained full visibility over reserve component members with medical problems. For example, the Marine Corps did not send pre-deployment health assessments to DOD's database as required, due to unclear guidance and a lack of compliance monitoring. The Air Force has visibility of involuntarily mobilized members with health problems, but lacks visibility of members with health problems who are on voluntary orders.
As a result, some personnel had medical problems that had not been resolved for up to 18 months, but the full extent of this situation is unknown.
To determine the extent to which USCIS, DOS, and DOJ implemented the requirements of IMBRA, we reviewed the Act, its legislative history, relevant provisions of the INA, and related legislation, including the Adam Walsh Child Protection and Safety Act of 2006, which contains provisions that may affect a petitioner’s ability to have beneficiaries immigrate. For purposes of this review, we summarized and grouped IMBRA’s requirements into seven key areas and identified the actions taken and the actions remaining in each of the key areas. To determine what actions had and had not been taken, we obtained and analyzed pertinent documentation, such as USCIS’s and DOS’s IMBRA implementation guidance for their adjudicators and consular officers, respectively. We also interviewed cognizant officials at USCIS headquarters and at its California and Vermont Service Centers—the two service centers responsible for processing petitions for alien fiancé(e)s—as well as at DOS and at DOJ. We interviewed three DOS Consular Affairs Unit Chiefs by telephone and received written responses to our questions from four other Unit Chiefs, all of whom were responsible for processing immigrant visa applications, including fiancé(e) visas. These Unit Chiefs were at seven consular posts located in Bangkok, Thailand; Bogotá, Colombia; Ciudad Juarez, Mexico; Guangzhou, China; Ho Chi Minh City, Vietnam; Manila, Philippines; and Moscow, Russia. We chose these posts because they issued about two-thirds of alien fiancé(e) visas in fiscal year 2007. While the information we obtained at these locations may not be generalized across all consular posts because we selected the posts based on the volume of activity, we believe the posts provided us with a general overview and perspective on the implementation of IMBRA at the selected posts.
To determine whether USCIS and DOS have collected and maintained data for our report as required by IMBRA, we obtained and analyzed available data for fiscal years 2006 and 2007 on fiancé(e) petitions filed from USCIS’s application management system and on visas issued to fiancé(e)s from DOS’s visa database. To determine the reliability of the data in USCIS’s and DOS’s databases, we observed how petitioner data are entered into USCIS’s data system, reviewed pertinent USCIS and DOS documents, and interviewed relevant USCIS and DOS officials. We determined that the data we used from the USCIS and DOS databases were sufficiently reliable for purposes of this report. In addition, we interviewed cognizant officials at DHS/USCIS headquarters in Washington, D.C.; at the California and Vermont Service Centers; and at DOS. We conducted this performance audit from October 2007 to August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. USCIS is the agency responsible for reviewing and making decisions on whether to approve or deny immigration benefit applications, including petitions filed by U.S. citizens requesting to bring a noncitizen fiancé(e), spouse, minor child, or other noncitizen relative to live permanently in the United States in accordance with the INA. Only U.S. citizens can request or petition for noncitizen fiancé(e)s to immigrate to the United States. To do so, citizens must file a Form I-129F, Petition for Alien Fiancé(e), with USCIS to enable their fiancé(e) to come to the United States as a K-1 nonimmigrant and then apply for permanent residence. U.S.
citizens who have already married noncitizens living abroad may also file Form I-129F to enable their spouse to come to the United States on a K-3 nonimmigrant visa and then apply for permanent residency. The purpose of the form I-129F is to establish the relationship of the petitioner to the beneficiary who wishes to immigrate to the United States. For K-1 fiancé(e) petitions, the petitioner must establish that the petitioner and fiancé(e) are free to marry and that they have previously met in person within 2 years of filing the petition. For K-3 spouse petitions, the petitioner must establish the bona fide marital relationship to the beneficiary. Two USCIS Service Centers, one in California and another in Vermont, process all I-129F petitions. As part of its I-129F petition review, USCIS obtains criminal history information for specified crimes from the petitioner and conducts background checks on both the beneficiary and petitioner. If the petitioner has ever been convicted of any of the specified crimes, the petitioner is to provide to USCIS certified copies of court and police records showing the charges and dispositions for every such conviction. USCIS conducts name-based background security checks—checks of a petitioner’s name and date of birth—against the Interagency Border Inspection System (IBIS). According to USCIS, IBIS queries include a check of five National Crime Information Center (NCIC) data files, which include the Convicted Sexual Offender Registry; the Foreign Fugitive; Immigration Violator; Violent Gang and Terrorist Organization; and Wanted Persons files. During a background security check, if an IBIS query returns a “hit,” where the name and date of birth information entered returns a response from one or more of the databases and it appears the petitioner may have a criminal background, USCIS adjudicators forward this information to the Background Check Unit located within the Service Center.
This unit conducts further system searches for verification of the criminal histories. After researching and summarizing the criminal information on the petitioner, if there is no national security or criminality issue requiring further investigation, unit staff notate their findings in a memorandum, which they send back to the adjudicator responsible for the file. While IMBRA requires USCIS to collect petitioner criminal history information, we reported in 2006 that USCIS does not have general authority to deny a family-based petition solely on grounds that the petitioner has a criminal sexual history. Since we issued our report, Congress enacted legislation limiting a petitioner’s ability to immigrate relatives, including fiancé(e)s. The Adam Walsh Child Protection and Safety Act of 2006 prohibits DHS from approving any family-based petition, including a fiancé(e) petition, for any petitioner convicted of a specified offense against a minor unless the DHS Secretary, in his sole and unreviewable discretion, determines that the petitioner poses no risk to the beneficiary. DOS is responsible for processing all approved visa petitions received from USCIS and for determining whether to issue a visa to beneficiaries who are overseas, such as fiancé(e)s and K-3 spouses. Consular officers are responsible for reviewing (but not re-adjudicating) approved petitions and supporting documentation, and determining whether the noncitizen beneficiary meets admissibility requirements and can be issued a nonimmigrant visa to enter the United States. As part of this review, consular officers interview beneficiaries and ask questions about how the beneficiary and petitioner met. Figure 1 shows the key steps in the I-129F petition filing and visa application process, including filing a petition with USCIS, conducting criminal background checks, USCIS adjudication, and visa applicant interview by DOS. IMBRA also requires IMBs to search the name of the U.S. 
client (potential petitioner) against the National Sex Offender Public Registry or relevant state sex offender public registries, and to collect background information from the U.S. client, including the client’s marital history and arrest and conviction information for specified crimes, and to provide this information to the foreign client (potential beneficiary). IMBs may not release the foreign client’s personal contact information to the U.S. client until the required disclosures have been made and the foreign client provides signed, written consent authorizing the IMB to release personal contact information to the U.S. client. IMBs are also prohibited from engaging in certain activities such as (1) providing personal contact information of a foreign client to anyone other than the U.S. client and (2) providing anyone with personal contact information or other information about individuals under the age of 18. IMBRA establishes federal criminal and civil penalties for IMBs that violate these provisions. IMBRA-related criminal cases are to be prosecuted by the U.S. Attorney’s offices, possibly in coordination with the Civil Rights Division or the Child Exploitation and Obscenity Section of the Criminal Division, depending on the circumstances of the case. With respect to IMBRA-related civil cases, the Attorney General has statutory responsibility for imposing civil penalties after notice and the opportunity for an agency hearing. Within DOJ, the Executive Office for Immigration Review (EOIR) is responsible for interpreting and administering federal immigration laws and rendering adjudicatory decisions on specific immigration cases. Within EOIR, the Office of the Chief Administrative Hearing Officer is responsible for hearing civil cases related to administrative fines that may be imposed under the INA. For a summary of selected IMBRA requirements for USCIS, DOS, DOJ, and IMBs, see Appendix I.
USCIS, DOS, and DOJ have implemented some, but not all, of the IMBRA requirements that are designed to inform visa applicants about the persons petitioning for them to immigrate to the United States. For example, USCIS is collecting criminal background information from petitioners and forwarding this information to DOS. DOS is in turn disclosing this information to beneficiaries when it interviews them during the visa process. However, USCIS has yet to finalize the information pamphlet required by IMBRA, which is to contain information about the visa process and about resources that are available should any beneficiary become a victim of domestic violence. Until the pamphlet is finalized, DOS cannot translate or distribute it to beneficiaries as required by IMBRA. Further, IMBRA assigns responsibility to DOJ for hearing cases and imposing civil penalties against IMBs in violation of its provisions. However, DOJ, DOS, and USCIS have outstanding enforcement issues to resolve, such as which agencies will investigate, refer, and prosecute cases. Although DOJ has drafted IMBRA-related hearing regulations, DOJ states that these regulations cannot be finalized until a framework for the investigation, referral, and prosecution of cases is agreed upon by USCIS, DOS, and DOJ. Table 1 below summarizes selected IMBRA statutory requirements and the actions taken by USCIS, DOS, and DOJ to implement those requirements as of July 2008. To comply with IMBRA’s petitioner criminal background requirements, USCIS revised the I-129F, Petition for Alien Fiancé(e), instructions to request that the petitioner provide certified criminal conviction information to USCIS. The petition instructions ask whether the petitioner has ever been convicted of any crimes as specified on the form and, if so, direct the petitioner to provide certified conviction information. USCIS does not rely solely on the petitioner to acknowledge if he or she has a criminal history.
As part of the adjudication process, USCIS conducts security checks on all petitioners. If relevant criminal history—such as information on domestic violence, sexual assault, child abuse, or homicide that was not otherwise disclosed by the petitioner—is uncovered during the security check, USCIS is to request the petitioner to submit certified criminal conviction information before continuing with the adjudication of the case. If the petitioner does not respond to the request to provide additional information, the case is to be denied for failure to respond. IMBRA requires that USCIS forward the petitioner’s criminal background information to DOS. DOS, in turn, is required to mail the petition, including any criminal background information, to the beneficiary at the same time as it mails the visa instruction packet. USCIS issued IMBRA implementation guidance in July 2006 directing its adjudicators to include the criminal background information in the file sent to DOS. USCIS officials from both the California and Vermont Service Centers told us that criminal background information is being forwarded to DOS for disclosure to the beneficiary, and, as discussed later in this report, DOS consular officers are receiving this information. DOS also issued IMBRA implementation guidance to its consular officers. DOS’s guidance states that consular officers are to disclose criminal history information to beneficiaries on two separate occasions: (1) when the visa application instructions are mailed to the beneficiary and (2) during the consular interview (which we discuss later in this report). Although DOS’s guidance does provide for the mailing of criminal history information to beneficiaries, the guidance does not specifically require DOS to mail beneficiaries a copy of the approved I-129F petitions, as required by IMBRA.
The I-129F petitions require petitioners to disclose additional information beyond criminal background information, such as their marital status and the number of prior I-129F petitions filed, approved, or denied. Thus, not mailing the petitions themselves could prevent beneficiaries from obtaining relevant non-criminal information about their petitioners that could affect their immigration decision. During the course of this review, DOS informed us that it will revise its guidance to require the I-129F petition itself to be mailed to beneficiaries when it mails the visa instruction packet. To track petitioners who have filed and had two or more fiancé(e) or spousal petitions approved, IMBRA requires that USCIS create a multiple visa petition tracking database. USCIS officials told us that they have not created a separate database to track multiple visa petition filers. According to USCIS officials, they have addressed the requirement to develop a multiple visa petition tracking database by modifying the agency’s application management system, called CLAIMS, to enable adjudicators to notate and track the specific number of fiancé(e) and spousal petitions filed and approved for a particular petitioner. USCIS adjudicators use this information to determine if IMBRA-established filing limits have been exceeded. If exceeded, adjudicators are to check to see if the petitioner requested a waiver of the filing limits. The form I-129F instructions inform the petitioner of the filing limits and the need to request a waiver if the filing limits have been exceeded. IMBRA limits the number of fiancé(e) petitions a person may file unless the petitioner requests and is granted a waiver of the filing limits by USCIS. USCIS officials told us that they do not check every petitioner against CLAIMS to determine if the petitioner has previously filed a fiancé(e) petition.
The I-129F, Petition for Alien Fiancé(e), form asks whether the petitioner has ever previously filed for this or any other noncitizen fiancé(e) or husband/wife. According to USCIS officials, if the petitioner answers “yes,” adjudicators are to initiate a check against CLAIMS to determine whether the petitioner had prior filings. If the petitioner answers “no,” adjudicators are not required to initiate a check for multiple petition filings. According to an adjudication officer, although USCIS procedures do not require adjudicators to check every petitioner against CLAIMS, adjudicators may initiate a check if other case information alerts them that the petitioner may have previously filed. USCIS officials told us that checking petitioners’ names against CLAIMS may result in multiple “hits” on people with names similar to or the same as the petitioner’s. According to USCIS, checking every name against CLAIMS would require a significant amount of review and research to try to determine if the petitioner is a match to any of those already in CLAIMS. However, USCIS officials did acknowledge that a more refined search using the petitioner’s name and another piece of identifying information, such as the petitioner’s date of birth, could reduce the multiple hits and therefore reduce the number of petitioners who would require additional research. As shown in Table 1, USCIS adjudicators are to routinely check CLAIMS if the petitioner self-attests that he or she has previously filed a petition. Nevertheless, relying on petitioners’ self-attestation to identify previous filers may not always provide accurate results, regardless of whether petitioners must certify the truthfulness of their form I-129F attestations under penalty of perjury. For example, in March 2006, we reported that evidence suggested that immigration benefit fraud was an ongoing and serious problem, although the full extent was not known.
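The refined search that USCIS officials described, matching on the petitioner's name plus a second identifier such as date of birth rather than on name alone, can be illustrated with a minimal sketch. The records, field names, and function below are hypothetical; CLAIMS is an internal USCIS system whose actual schema and query logic are not public.

```python
# Illustrative sketch only: the data and field names are hypothetical,
# not drawn from CLAIMS or any USCIS system.

def find_prior_filings(records, name, dob=None):
    """Return prior petition records matching a petitioner.

    A name-only search can return many false "hits" for common names;
    adding a second identifier such as date of birth narrows the set.
    """
    hits = [r for r in records if r["name"] == name]
    if dob is not None:
        hits = [r for r in hits if r["dob"] == dob]
    return hits

# Hypothetical prior filings by two different people with the same name.
records = [
    {"name": "John Smith", "dob": "1970-01-01", "petition": "I-129F"},
    {"name": "John Smith", "dob": "1985-06-15", "petition": "I-129F"},
]

print(len(find_prior_filings(records, "John Smith")))                    # name only: 2 hits
print(len(find_prior_filings(records, "John Smith", dob="1970-01-01")))  # name + DOB: 1 hit
```

The sketch shows why the officials' suggestion reduces research workload: each additional identifying field filters out same-name records belonging to different people, leaving fewer candidates that require manual review.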
As a result, by limiting USCIS checks to those petitioners that acknowledge prior filings, USCIS increases its risk that it will approve more fiancé(e) petitions than allowed under IMBRA, including petitions filed by persons with a record of violent criminal offenses, who are not entitled to a waiver of IMBRA’s filing limits except in extraordinary circumstances. USCIS’s reliance on self-attestation also increases the risk that USCIS will not have accurate multiple filer information to disclose to prospective beneficiaries. USCIS has reported working to develop system enhancements that will facilitate accurate systems checks, but has not specifically stated whether these enhancements will enable it to check filing limits for all petitioners, consistent with IMBRA. IMBRA also mandates that USCIS notify petitioners, upon approval of a second visa petition, that their filing information is being tracked. In addition, after a second approval, upon filing of a third petition within 10 years of the first filing, USCIS is to notify both the petitioner and beneficiary of the number of previously approved petitions filed by the petitioner. The instructions to the form I-129F inform the petitioner of the circumstances under which repeat filings will be tracked. USCIS officials told us that the filing instructions to form I-129F are the mechanism by which petitioners are informed that multiple filings will be tracked. If USCIS determines a petitioner to be a multiple filer, but approves the petition under its waiver authority, the approval notice will indicate whether the approval is the 2nd, 3rd, 4th, etc. Under IMBRA, USCIS is responsible for notifying beneficiaries of prior petition approvals, in contrast with the disclosure of criminal conviction information, which is to be made by DOS. USCIS officials told us that initially they attempted to notify beneficiaries, as required. 
However, USCIS no longer tries to notify beneficiaries because of the reported difficulty in obtaining accurate overseas mailing addresses; USCIS officials stated that inaccurate addresses caused a large number of notifications to be returned as undeliverable. Given these difficulties with overseas mail, USCIS officials told us they plan to discuss a process with DOS to have its consular officers assume this notification responsibility and notify beneficiaries of prior petition approvals during the consular interview, as consular officers do with criminal conviction information. At the time of our review, DOS officials said they had not held a discussion with USCIS about providing such notification. According to USCIS officials, USCIS has drafted a memo to its adjudicators addressing the planned notification process. However, in the absence of an agreement with DOS to undertake this notification responsibility, beneficiaries are not currently being notified about the number of previously approved fiancé(e) or spousal petitions, as required by IMBRA. Once USCIS forwards the petitioner’s criminal conviction information to DOS, IMBRA requires that DOS disclose this criminal background information and information regarding any protection orders related to the petitioner to the beneficiary. In addition to disclosing this information by mail as discussed above, DOS is to provide this information to the beneficiary, in the beneficiary’s primary language, during the visa application interview. Officials from six of the seven consular posts we contacted told us that they were disclosing this information to the beneficiaries in either the beneficiary’s primary language or a language common to the consular officer and the beneficiary if translation services into the beneficiary’s primary language were unavailable.
An official from the remaining post told us that consular officers at her post had not been disclosing a petitioner’s criminal history to beneficiaries, but based upon our inquiry, she now planned to have consular officers begin disclosing criminal history information to beneficiaries. At the time of our interviews with consular officials during February 2008, DOS had not yet issued formal guidance to its consular posts regarding implementation of IMBRA. In March 2008, DOS issued guidance to its posts on implementation of IMBRA including the need to disclose petitioner criminal conviction information for certain offenses and information related to any protection orders related to the petitioner. The guidance also states that after providing any related criminal history information, consular officers should give the applicant time to decide if he or she still wishes to proceed with the visa application process. In addition to disclosing petitioner criminal background information to the visa applicant during the consular interview, IMBRA mandates that DOS ask beneficiaries if an IMB facilitated the relationship between the petitioner and beneficiary. If so, DOS is to obtain the IMB’s name from the beneficiary and confirm whether the IMB gave the beneficiary all the information required by IMBRA, such as the petitioner’s marital, criminal, and protection order history. Officials from four of the seven consular posts we contacted told us that their officers had not been inquiring about whether services of an IMB were used. Officials from two posts told us their consular staff asked questions about how the petitioner and beneficiary met, but not specifically whether the services of an IMB were used. An official from one post stated that the post’s consular staff asked questions about how the petitioner and beneficiary met as well as whether or not the services of an IMB were used. 
Issued after our interviews, DOS's March 2008 guidance to consular posts states that consular officers should ask the beneficiary the questions mandated by IMBRA, including whether an IMB facilitated the relationship, what the IMB's name was, and whether the IMB provided the information required by IMBRA. To assist in this effort, USCIS officials told us that they plan to amend the I-129F form to ask those petitioners who acknowledged using an IMB to submit the IMB's certification that it made the required IMBRA disclosures to the beneficiary. Consular officers from six of the seven offices told us that they have encountered relatively few cases in which the petitioner had a criminal history. For example, officials from one consular post that processed over 11,000 K-1 and K-3 visa applications in 2007 told us that the post encounters about 20 applications per year (about 0.2 percent) in which the petitioner had a criminal history involving violent IMBRA-specified offenses. Officials from two of the seven offices told us that they could not recall any fiancé(e) visa applicants who acknowledged using an IMB. However, officials from another consular post stated that they conducted a survey that indicated that about 12 percent of their fiancé(e) visa applicants used the services of an IMB. IMBRA requires that USCIS, in consultation with DOS and DOJ, develop an information pamphlet for beneficiaries that includes information on the visa application process; the illegality of domestic violence, sexual assault, and child abuse in the U.S.; the legal rights of immigrant victims and the resource services available to them; child support; marriage fraud; a warning that some U.S. citizens with a history of violence may not have a criminal record; and information on requirements for IMBs under IMBRA.
Once finalized, DOS is to translate the pamphlet into at least 14 foreign languages, and every 2 years, USCIS, in consultation with DOJ and DOS, shall determine at least 14 language translations for the pamphlet based on the languages spoken by the greatest concentrations of K nonimmigrant visa applicants. Beneficiaries must receive the pamphlet from consular officers during visa interviews and from USCIS adjudicators during adjustment interviews, and IMBs must provide the pamphlet to their foreign clients. In addition, the pamphlet is to be posted on DHS, DOS, and consular post Web sites. Further, USCIS is to develop summaries of the pamphlet to be discussed with beneficiaries in their primary languages during consular or adjustment interviews. The pamphlet was to have been available for distribution by May 2006. As of July 2008, USCIS had yet to finalize the information pamphlet. According to USCIS officials, the time needed to coordinate with various USCIS and DHS components is one reason for the delay in finalizing the pamphlet. In April 2008, USCIS officials told us that the draft pamphlet had been forwarded to DHS' Office of General Counsel and the Office of Management and Budget (OMB) for review. The pamphlet was under review through June of 2008. IMBRA also requires that USCIS consult with nongovernmental entities with expertise in areas such as the legal rights of immigrant victims of battery and extreme cruelty in developing the pamphlet. On July 14, 2008, USCIS signed the Federal Register notice seeking public comments on the pamphlet. On July 22, 2008, the Federal Register notice and pamphlet were published, and USCIS intends to use the 60-day public comment period to provide the public, including any interested nongovernmental organizations, an opportunity to comment on the draft pamphlet before USCIS finalizes it for distribution and publication.
USCIS officials told us they did not know when the pamphlet would be finalized, nor did they have a specific time frame for finalizing it. Figure 2 below shows a timeline that illustrates key steps in the development of the information pamphlet. Since USCIS has not finalized the information pamphlet, DOS has been unable to translate it. DOS officials told us that once they receive the finalized pamphlet, the translations would be done expeditiously. Until the pamphlet is finalized, translated, and distributed, USCIS increases the risk that beneficiaries are not being made aware of their rights or the resources that are available should they encounter domestic violence. As discussed earlier in this report, IMBRA establishes federal civil and criminal penalties for IMBs who violate its provisions. However, USCIS, DOS, and DOJ officials told us that there was no framework in place for enforcing the provisions related to potential IMBRA violations by IMBs. DOJ and DOS officials told us that the agencies are discussing these issues, but they could not tell us when a framework would be in place. DOJ officials told us that the Office of the Chief Administrative Hearing Officer, within DOJ's Executive Office for Immigration Review (EOIR), would likely hear civil cases under IMBRA, just as it hears civil cases under the INA. The Chief Administrative Hearing Officer within EOIR had drafted IMBRA-related regulations regarding how civil penalties would be administered, but these regulations cannot be finalized until the agencies decide who will be responsible for investigating, referring, and ultimately prosecuting potential violations at a hearing before DOJ's Chief Administrative Hearing Officer. DOJ officials stated that since IMBRA was enacted, there have been no civil or criminal cases brought against IMBs. Without a framework for enforcement of the IMBRA provisions, it will be difficult for IMBRA violators to be prosecuted and assessed applicable penalties.
As part of our study on the impact of IMBRA on the K nonimmigrant visa process, IMBRA mandated that USCIS and DOS collect and maintain specific data for us to report. This required data included changes in the number of fiancé(e) petitions filed and the extent to which petitioners had one or more criminal convictions or a history of violence, including how many of those petitioners had used the services of an IMB or had been granted a waiver of IMBRA's filing limits. Although IMBRA mandated that USCIS and DOS collect and maintain the data necessary for us to conduct such a study, the data we are requested to report on are essentially petition-driven data that USCIS would be responsible for collecting and maintaining. While USCIS has collected some of the data necessary for this study, most of the data IMBRA calls for us to report are not available in a summary or reportable format. For example, USCIS provided us with data on the number of fiancé(e) petitions filed in the past three fiscal years. However, data on which petitioners had criminal convictions or a history of violence were not available in a summary or reportable format. That is, although the I-129F petition asks the petitioner to list specified criminal convictions, USCIS does not capture this data electronically. Table 2 lists the petition-related data elements that IMBRA requires GAO to report on and whether the data were available in a summary or reportable format. Available USCIS data show that the number of I-129F petitions filed since IMBRA was passed declined slightly, from about 66,200 in fiscal year 2006 to about 62,500 in fiscal year 2007. The approval rate for fiancé(e) petitions decreased slightly, from about 86 percent to about 81 percent, over the same time period. In fiscal year 2007, of the approximately 62,500 petitions filed, 309 petitioners (less than 1 percent) applied for a waiver of the filing limitations through the service centers.
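The petition figures above can be checked with simple arithmetic. The following sketch uses only the numbers stated in this report (it is an illustrative check, not part of GAO's methodology):

```python
# Figures reported by USCIS for fiscal years 2006 and 2007 (from this report)
petitions_fy2006 = 66_200
petitions_fy2007 = 62_500
waiver_applications = 309

# Waiver applicants as a share of all fiscal year 2007 petitions
waiver_share = waiver_applications / petitions_fy2007 * 100
print(f"{waiver_share:.2f}%")  # about 0.49%, i.e., "less than 1 percent"

# Year-over-year decline in petition filings
decline = (petitions_fy2006 - petitions_fy2007) / petitions_fy2006 * 100
print(f"{decline:.1f}% decline")  # about 5.6%, consistent with "declined slightly"
```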
USCIS service centers approved 308 waiver applications and denied one; however, the reasons for the approval and denial decisions are unknown since USCIS does not currently maintain such data. In fiscal year 2007, the California Service Center reported 1,529 petitions filed by people who had previously filed. During that same period, USCIS service centers reported 176 petitions filed by concurrent filers. USCIS officials told us that, for the data elements currently not available in a summary or reportable format, they are actively seeking to modify the CLAIMS application management system to capture this data. These modifications would include, for example, creating new data fields in CLAIMS to (1) capture the reason a petition is denied or granted a waiver, such as whether USCIS denied a petition based on filing limitations, and (2) identify those petitioners who had a criminal conviction or a history of violence. USCIS officials told us that they were in the process of developing a business case to modify CLAIMS for USCIS senior management review. If USCIS senior management approves the business case, then the changes to CLAIMS would be made. USCIS officials were unable to provide a time frame for when the review of the business case would be completed and, if approved, when the changes to CLAIMS would be made. The purpose of IMBRA is to address issues of domestic violence and abuse against beneficiaries by petitioners, including those who met their foreign-born spouses through an international marriage broker. While DHS/USCIS, DOS, and DOJ have taken some steps to implement IMBRA, certain IMBRA requirements have yet to be fully implemented, increasing the risk that beneficiaries are not fully aware of their petitioner's background or their rights. DOS's current procedures do not provide for mailing the approved I-129F petitions to beneficiaries.
During the course of this review, DOS informed us that it will revise its guidance to require the I-129F petition to be mailed to beneficiaries when it mails the visa instruction packet. USCIS's current procedures do not ensure that all petitioners are within IMBRA's multiple filing limitations because USCIS does not check all petitioners for prior filings. Beneficiaries are not being notified of the number of previously approved petitions filed by their petitioner as required by IMBRA. Without a mechanism to check all petitioners for prior filings or a mechanism for sharing information with beneficiaries about the number of previously approved petitions filed by their petitioner, beneficiaries may not have all the information they need to make an informed decision about whether to immigrate to the United States. Until the information pamphlet is finalized, translated, and distributed, USCIS increases the risk that beneficiaries are not being made aware of their rights or the resources that are available should they encounter domestic violence. Without a legal framework for enforcing IMBRA's provisions, it will be difficult for IMBRA violators to be prosecuted and assessed applicable penalties. We could not determine the reason a petitioner was denied or granted a waiver or the extent to which petitioners with a criminal history or history of violence have filed K nonimmigrant petitions, including how many of those petitioners used the services of an IMB or received waivers of IMBRA's filing limits, because USCIS did not collect and maintain this data in a summary or reportable format for this study. Should USCIS modify its CLAIMS system to capture this data, USCIS may be able to report this information in the future.
To improve implementation of IMBRA and to help ensure that beneficiaries receive all IMBRA-required information, we recommend that the Director, USCIS, take the following actions: develop a mechanism to check all petitioners for prior filings to determine if a petitioner exceeds the filing limits established by IMBRA; develop a mechanism, in consultation with the Secretary of State, to implement the IMBRA requirement that beneficiaries be notified regarding the number of previously approved petitions filed by the petitioner; and develop a time frame for finalizing the information pamphlet so that it can be translated and distributed as required by IMBRA. In order to help ensure that penalties can be imposed on IMBRA violators as provided for in the law, we recommend that the Secretary of Homeland Security, in consultation with the Secretary of State and the Attorney General, develop a framework for the investigation, referral, and prosecution of potential IMB violations of IMBRA. We provided a draft of this report to the Secretaries of Homeland Security and State and the Attorney General for review and comment. We received comments from DHS that are reprinted in appendix II. DHS concurred with our recommendations and outlined actions that USCIS and DHS are undertaking to address them. DOS provided technical comments, which we have incorporated into the report as appropriate. DOJ did not provide comments. In its comments, DHS stated that USCIS is actively seeking to modify the CLAIMS database in order to provide an automated solution for identifying multiple filers. Additionally, USCIS is in the process of refining the search criteria to reduce multiple matches on people with common last names. DHS also stated that USCIS was actively working with DOS to ensure that DOS is aware of USCIS annotations on the I-129F petitions indicating multiple filings so that DOS can inform its consular officers to notify beneficiaries of the number of previously approved petitions filed by the petitioner.
Further, DHS stated that USCIS has taken a major step toward meeting the report's recommendation to finalize the IMBRA-required information pamphlet by seeking public comment on the draft pamphlet through the publishing of the Federal Register notice and pamphlet on July 22, 2008. Lastly, DHS stated that it plans to consult with the Departments of State and Justice to implement the necessary framework for the investigation, referral, and prosecution of potential IMBRA violations and that work towards implementing this recommendation is under way. We agree that these actions should help improve the implementation of IMBRA and help ensure that beneficiaries receive all IMBRA-required information as well as provide for the imposition of penalties on IMBRA violators as provided for in the law. We are sending copies of this report to the Secretaries of Homeland Security and State and the Attorney General. We will send copies to other interested parties and make copies available to others who request them. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8777 or at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Create a database to track multiple visa petitions filed for fiancé(e)s and spouses.
Upon approval of 2nd visa petition for a fiancé(e) or spouse, notify petitioner that multiple filings are being tracked; after 2nd petition approval, if another is filed within 10 years of first, notify petitioner and beneficiary of number of previous approvals.
Obtain petitioner's criminal history information.
USCIS is to provide a copy of the petition and any petitioner criminal background information to DOS, which must mail such documentation to the beneficiary, along with other materials such as the domestic violence information pamphlet.
During the visa interview, DOS is to:
Share petitioner's criminal background information with the visa applicant (beneficiary) in his/her primary language. DOS may not disclose a victim's name or contact information, but shall disclose the relationship of the victim to the petitioner. DOS shall inform the visa applicant that criminal background information is based on the available records and may not be complete.
Provide visa applicant a copy of the domestic violence information pamphlet and an oral summary in the applicant's primary language.
Ask visa applicant, in the applicant's primary language, whether an international marriage broker facilitated the relationship between the applicant and the U.S. petitioner, and if so, obtain the identity of the IMB from the applicant and confirm that the IMB provided the applicant required information and materials.
Develop information pamphlet and pamphlet summaries for K nonimmigrants on domestic violence rights and resources, in consultation with DOS and DOJ.
Translate information pamphlet into at least 14 foreign languages specified by IMBRA. Every 2 years, translate pamphlet into at least 14 more languages, as specified by USCIS in consultation with DOS and DOJ.
IMBRA specifies four methods of disseminating the information pamphlet:
By mail: DOS to mail pamphlet to visa applicant when it mails the visa instruction packet and a copy of the petition (including any criminal history information). Pamphlet shall be in the primary language of the applicant or in English if no translation is available.
During interview: USCIS and DOS to distribute information pamphlet to beneficiary at any USCIS adjustment interview or DOS visa interview.
At the respective interview, USCIS and DOS officers must also review pamphlet summary with the applicant in his/her primary language.
On the web: DHS, DOS, and its consular offices must post pamphlet on their websites.
By the IMB: IMBs are to provide the pamphlet to their foreign clients in the clients' primary language.
Impose federal civil penalties associated with IMB violations of IMBRA.
IMBs cannot provide any individual or entity with the personal contact information, photograph, or general information about the background or interests of any individual under the age of 18.
Search the National Sex Offender Registry or State sex offender public registry; collect background information about the U.S. client such as the U.S. client's marital and criminal history, including protection orders or restraining orders.
Before providing a U.S. client or his/her representative with the personal contact information of any foreign client, the IMB must collect the U.S. client's sex offender registry and background information, share such information with the foreign client, and obtain the foreign client's signed, written consent to share his/her personal contact information with the U.S. client.
IMBs shall not provide the foreign national client's personal contact information to any person or entity other than a United States client.
IMBs shall not disclose to the beneficiary the name or location of any victim of the U.S. client, but shall disclose the relationship between the U.S. client and the victim.
In addition to the contact named above, Michael Dino, Assistant Director, and Carla Brown, Analyst-in-Charge, managed this assignment. Lemuel Jackson and James Russell made significant contributions to the work. Stanley Kostyla and James Ungvarsky assisted with design, methodology, and data analysis. Adam Vogt provided assistance in report preparation; Christine Davis and Willie Commons III provided legal support; Karen Burke and Tina Cheng developed the report's graphics.
The International Marriage Broker Regulation Act of 2005 (IMBRA) was enacted to address issues of domestic violence and abuse against noncitizens (beneficiaries) married or engaged to U.S. citizens (petitioners) who have petitioned for them to immigrate to the U.S., including those who met through an international marriage broker (IMB). IMBRA mandated that GAO study the act's impact on the visa process for noncitizen spouses and fiancé(e)s. This report addresses the extent to which the U.S. Citizenship and Immigration Services (USCIS), a component of the Department of Homeland Security (DHS); the Department of State (DOS); and the Department of Justice (DOJ) have implemented IMBRA, and the extent to which USCIS and DOS have collected and maintained data for this GAO report as required by IMBRA. To address these objectives, GAO reviewed the act and related legislation, analyzed IMBRA implementation guidance and available data on applications filed, and interviewed officials at USCIS, DOS, and DOJ. USCIS, DOS, and DOJ have implemented two of seven key IMBRA requirements identified by GAO, but five key provisions intended to provide beneficiaries with information about the petitioners seeking to bring them to the United States have yet to be completed. First, although IMBRA requires DOS to mail a copy of the approved petition to each beneficiary, the agency is not currently fulfilling this requirement. Second, IMBRA limits the number of petitions a person may file for a noncitizen fiancé(e) unless USCIS grants a waiver of the filing limits. However, USCIS officials told GAO that they do not check all petitioners against records to determine if a petitioner has previously filed a fiancé(e) petition. USCIS adjudicators are required to check the record only if the petitioner self-attests that he or she has previously filed a petition.
By limiting its checks to those petitioners who acknowledge prior filings, USCIS increases the risk that it will approve more fiancé(e) petitions than allowed under IMBRA. Third, IMBRA mandates that after two approved petitions, upon filing of a third petition within 10 years of the first, USCIS is to notify both the petitioner and beneficiary of previously approved petitions filed by the petitioner. USCIS officials told GAO that they no longer try to notify beneficiaries because of the difficulty in obtaining accurate overseas mailing addresses. Thus, beneficiaries are left without all the information required for making a decision about the person petitioning on their behalf. USCIS officials told GAO that they plan to ask DOS to notify beneficiaries during their visa interview with a DOS consular officer. Fourth, the requirement to provide beneficiaries with a pamphlet that discusses the visa application process and available resources if the beneficiary encounters domestic violence or abuse is not being met. USCIS has drafted the pamphlet but has not established time frames for finalizing it. Until the pamphlet is finalized, translated, and distributed, USCIS increases the risk that beneficiaries are not being made aware of their rights or the resources that are available should they encounter domestic violence. Lastly, IMBRA establishes federal criminal and civil penalties for IMBs who violate its provisions. Although DOJ has drafted IMBRA-related regulations regarding how civil penalties would be administered, these regulations cannot be finalized until DOJ, USCIS, and DOS decide which agencies will be responsible for investigating, referring, and prosecuting potential IMBRA violations. Until the agencies resolve these issues and establish an enforcement framework, it will be difficult for IMBRA violators to be penalized.
USCIS is collecting and maintaining some of the data for this report as required by IMBRA; however, most of the data are not in a summary or reportable form and other required data have not been collected. For example, GAO could not determine the extent to which petitioners with a criminal history or history of violence have filed petitions because USCIS does not capture this data electronically. USCIS officials told GAO that they are considering modifying their data management system to collect data that is currently not being collected. However, no decisions have been made.
Federal government contractors, including defense small businesses, face an evolving array of cyber-based threats. As we testified in April 2015, risks to cyber-based assets can originate from both unintentional and intentional threats. Unintentional threats can be caused by, among other things, defective computer or network equipment and careless or poorly trained employees. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. Threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary gain or political advantage, among others. For example, adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives—sometimes referred to as "advanced persistent threats"—pose increasing risks. Table 1 presents various sources of cyber threats. Threat sources make use of various techniques—or exploits—that may adversely affect information, computers, software, networks, and operations. Table 2 presents various types of cyber exploits. The number of information security incidents reported by federal agencies to the U.S. Computer Emergency Readiness Team increased from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an increase of 1,121 percent. DOD has tasked DOD OSBP with ensuring that small businesses receive a fair proportion of DOD purchases, contracts, and subcontracts for property and services. This office is responsible for providing small business policy advice to the Office of the Secretary of Defense and for providing policy oversight to DOD military department and DOD component small business offices.
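The incident growth cited above (5,503 incidents in fiscal year 2006 to 67,168 in fiscal year 2014) works out to the stated percentage. A quick illustrative check, using only the figures in this report:

```python
# Incidents reported to the U.S. Computer Emergency Readiness Team
incidents_fy2006 = 5_503
incidents_fy2014 = 67_168

# Percent increase = (new - old) / old * 100
pct_increase = (incidents_fy2014 - incidents_fy2006) / incidents_fy2006 * 100
print(f"{pct_increase:.0f} percent")  # about 1121 percent, matching the report
```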
Those offices are responsible for ensuring that small businesses are afforded the maximum practicable opportunity to participate in DOD acquisitions and for establishing challenging small business program goals. DOD Directive 4205.01 states that DOD OSBP should take part in outreach and education on small business issues within DOD. Specifically, the directive states that DOD OSBP should establish and support a small business training program for Small Business Specialists and other acquisition personnel within DOD. In addition, DOD OSBP provides resources on the DOD OSBP website, and according to DOD OSBP officials, the office also supports small businesses by providing outreach and education at small business conferences. However, according to DOD OSBP officials, none of those responsibilities requires the office to integrate cybersecurity into current or new outreach and education efforts. While DOD OSBP is responsible for leading DOD's efforts to support small business initiatives, other DOD components also support defense small businesses. For example, military department small business office officials told us that their offices routinely provide outreach and education to small businesses on topics such as how to bid on DOD contracts; however, these efforts do not include outreach and education on cybersecurity issues. The DOD Chief Information Officer manages the Defense Industrial Base Cyber Security/Information Assurance Program to address the pressing need to stem the risk posed by cyber attacks against defense industrial base businesses.
According to DOD Instruction 5205.13, the program established a comprehensive approach for protecting unclassified DOD information transiting or residing on unclassified defense industrial base information systems and networks. The approach incorporates the use of intelligence, operations, policies, standards, information sharing, expert advice and assistance, incident response, reporting procedures, and cyber intrusion damage assessment solutions to address advanced persistent cyber threats. It was designed to establish a voluntary framework to prevent unauthorized access to DOD program information or the intellectual property of industry. This program is available only to a subset of small companies, since participating companies must be approved to maintain classified information. The Defense Security Service manages the National Industrial Security Program for the Undersecretary of Defense for Intelligence, which governs cleared contractor companies and their cleared employees who support DOD and other federal agencies. The Defense Security Service has security oversight of these contractors and provides them with related counterintelligence services and security education, awareness, and training; this includes addressing the foreign threat to cleared contractors as an aspect of the counterintelligence mission. DOD has designated the Defense Security Service as its provider of security professional training and security awareness products for DOD personnel and for cleared contractors. Under the National Industrial Security Program, the Defense Security Service receives, analyzes, and shares information on activity involving cleared contractors' networks, regardless of classification, that may reflect foreign intelligence interests, and acts in close coordination with the Federal Bureau of Investigation and other federal law enforcement and counterintelligence community members.
In November 2013, DOD updated its Defense Federal Acquisition Regulation Supplement to include a contract clause that requires defense contractors and subcontractors to safeguard unclassified controlled technical information on their unclassified information systems from unauthorized access and disclosure and to report certain cyber incidents to DOD. In addition, under the required clause, contractors are to implement minimum privacy and security controls, such as risk assessments, that were developed by the National Institute of Standards and Technology, among other requirements. The 2015 DOD Cyber Strategy states that DOD must work with the private sector to help secure defense industrial base trade data. Furthermore, to safeguard critical programs and technologies, the strategy states that DOD will work with companies to develop alert capabilities and build layered cyber defenses. DOD OSBP officials have explored some ways whereby the office could integrate cybersecurity into its existing outreach and education efforts; however, as of July 2015, the office had not identified and disseminated information about cybersecurity resources in its outreach and education efforts to defense small businesses. While DOD OSBP is not required to educate small businesses on cybersecurity, DOD OSBP officials acknowledged that cybersecurity is an important and timely issue for small businesses—and therefore the office is considering incorporating cybersecurity into its existing outreach and education efforts. In response to our review, DOD OSBP officials contacted DOD Chief Information Officer officials in April 2015 to discuss options for integrating cybersecurity into their existing outreach and education efforts. According to DOD OSBP officials, the DOD OSBP and DOD Chief Information Officer officials discussed the development of training materials, such as online videos and brochures, that could be distributed at small business conferences.
The purpose of such training materials would be to help defense small businesses understand cybersecurity best practices and the cybersecurity requirements identified in the Defense Federal Acquisition Regulation Supplement. According to DOD OSBP officials, the office also invited a DOD Chief Information Officer official to join a DOD OSBP representative to speak with small businesses about cybersecurity and to distribute a handout on cybersecurity and cyber business opportunities at a small business conference held in April 2015. In addition, recognizing that DOD's small business offices may not be staffed by cybersecurity experts, DOD OSBP officials stated that they plan to add a cybersecurity component to a training curriculum that they are currently developing along with the DOD Chief Information Officer for professionals who work in DOD small business offices. However, these efforts have not been completed, and, as of July 2015, DOD OSBP had not identified or disseminated cybersecurity resources that defense small businesses could use to understand cybersecurity and cyber threats. We identified 15 existing federal cybersecurity outreach and education resources that the office could leverage for defense small businesses. For example: DOD's Defense Security Service offers online cybersecurity training programs that are available to the public on topics such as cybersecurity awareness, the National Institute of Standards and Technology's Risk Management Framework, insider threats, and security controls through its public website. According to Defense Security Service officials, DOD small businesses could use the online training programs to improve their knowledge of cybersecurity. The U.S. Small Business Administration maintains a learning center that provides a 30-minute online program—available to small businesses—that covers cybersecurity concepts for small businesses.
Topics include identifying and securing sensitive information, types of cyber threats, risk management, and best practices for guarding against cyber threats. The Department of Homeland Security, in coordination with the National Cyber Security Alliance and the Anti-Phishing Working Group, provides cyber awareness resources to the public—including cybersecurity awareness videos and tip sheets—on its Stop.Think.Connect website and facilitates cybersecurity awareness events targeted to various audiences, including businesses. This resource is available to defense small businesses. The Federal Communications Commission hosts a planning tool on its website, known as the FCC Small Biz Cyber Planner 2.0, that is targeted to small businesses and available to the public. This online planner provides guidance to small businesses on developing their cybersecurity plans and is available to defense small businesses. See appendix II for a listing of the 15 resources we identified. While DOD OSBP officials recognized the importance of identifying and disseminating cybersecurity resources through outreach and education efforts to defense small businesses, they also identified a number of factors that had, to date, limited their progress in doing so. Specifically, DOD OSBP officials were not aware of existing cybersecurity resources such as those we identified when we met with them in June 2015; there had been leadership turnover within the office; and the office had been focused on one of its key initiatives—developing the training curriculum for DOD professionals who work with small businesses. OSBP officials also stated that they had been focused on their statutory requirements, such as training the DOD workforce that works with small businesses, advocating for small businesses within the government, and reaching out to small businesses in the private sector.
While we recognize that these factors could affect progress, federal government internal controls state that management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders who may have a significant impact on the agency’s achieving its goals. While they had not yet identified or disseminated information about existing cybersecurity resources to defense small businesses, officials agreed that doing so could help the businesses to become more aware of cybersecurity practices and cyber threats. In addition, by identifying and disseminating this information, DOD OSBP could help defense small businesses to protect their networks against cyber exploits, which would support the 2015 DOD Cyber Strategy goals of working with the private sector to help secure defense industrial base trade data and build layered cyber defenses. Furthermore, by identifying existing federal government resources, OSBP’s efforts would be in line with DOD Instruction 5134.04, which states that the OSBP Director shall use the existing services and systems of DOD and other federal agencies, when practicable, to avoid duplication and to achieve maximum efficiency and economy. Finally, once OSBP has identified the resources, it can also share them with military department and component small business offices so that they can use them for their own outreach and education efforts with defense small businesses. DOD spends billions of dollars contracting with defense small businesses, and relies on these businesses to support its missions. However, defense small businesses face challenges in protecting their corporate networks and information from increasing cyber threats. 
While DOD OSBP officials have recognized the importance of educating defense small businesses about cybersecurity, they have not identified and disseminated cybersecurity resources through their outreach and education efforts to businesses because they have been focused on other priorities, such as developing a training curriculum for DOD professionals who work with small businesses. By identifying and disseminating information about existing cybersecurity resources to defense small businesses, these businesses may be made more aware of cybersecurity practices and cyber threats, thereby potentially assisting them in protecting their networks against cyber exploits. By leveraging resources that DOD components and other federal agencies have already developed, some of which have been identified in this report, DOD OSBP will be able to spend more time focusing on other priorities such as developing the training curriculum. To better position defense small businesses in protecting information and networks from cyber threats, we recommend that the Secretary of Defense direct the Director of the DOD OSBP, as part of its existing outreach efforts, to identify and disseminate cybersecurity resources to defense small businesses. We provided a draft of this report to DOD, the Federal Communications Commission, the Department of Homeland Security, the Department of Justice, the U.S. Small Business Administration, and the National Institute of Standards and Technology for review and comment. DOD’s written comments are included in appendix III. DOD concurred with our recommendation that the Secretary of Defense direct the Director of the DOD OSBP, as part of its existing outreach efforts, to identify and disseminate cybersecurity resources to defense small businesses. 
DOD stated that, given the essential need to protect DOD critical networks, information, and infrastructure—including those within defense small businesses—DOD OSBP, with support from the DOD Chief Information Officer, is expanding its current cybersecurity awareness and outreach programs. DOD stated further that the resources we identified reflect a thorough assessment of available federal capabilities and are very helpful to any organization conducting cybersecurity education for its stakeholders. DOD noted that future outreach by the DOD OSBP will increase awareness of cybersecurity education and training resources among defense small businesses. Finally, DOD noted that OSBP will also increase awareness of the cybersecurity education resources among the DOD small business workforce through training events and education programs, and by issuing guidance to the military departments and defense agencies. DOD OSBP added in its technical comments that the office has new leadership and staff in place to expand cybersecurity education for small businesses. The office also noted that it is using a measured approach involving existing information resources and inclusion of cybersecurity information in the development of DOD workforce training and the ongoing creation of outreach materials and presentations. We believe that by identifying and disseminating cybersecurity information, DOD OSBP will help defense small businesses to become more aware of cybersecurity practices and cyber threats and help them protect their networks against cyber exploits. In technical comments on the report, the Federal Communications Commission identified an additional federal cybersecurity resource: the Communications Security, Reliability and Interoperability Council Cybersecurity Risk Management and Best Practices Working Group 4: Final Report.
The report’s appendix provides cybersecurity risk management and best practice recommendations for small and medium business interests, includes potential challenges and barriers to best practice implementation, and includes a compilation of cybersecurity resources available to small businesses. Although the report appendix is intended for small and medium businesses in the communications industry, it is also publicly available online to defense small business contractors. DOD provided additional technical comments that we incorporated as appropriate. The Department of Homeland Security and the National Institute of Standards and Technology also provided technical comments that we incorporated as appropriate. The U.S. Small Business Administration and the Department of Justice did not comment on the report. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretary of Homeland Security; the U.S. Attorney General; the Chairman of the Federal Communications Commission; the Director of the National Institute of Standards and Technology; and the Administrator of the U.S. Small Business Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We focused our review on the Department of Defense (DOD) Office of Small Business Programs (OSBP) because this office is responsible for providing small business policy advice to the Office of the Secretary of Defense and for providing policy oversight to DOD military department and DOD component small business offices, per DOD Directive 4205.01. 
While other DOD components, such as the DOD Chief Information Officer and military department small business program offices, may interact with small businesses, we found that these other components either do not exclusively focus their efforts on small businesses or rely on DOD OSBP policy and guidance with regard to working with small businesses. To address the extent to which the DOD OSBP has integrated cybersecurity into its existing outreach and education efforts for defense small businesses, we analyzed documentation and interviewed officials from the DOD OSBP about its existing cybersecurity outreach and education efforts to small businesses. We also discussed with DOD OSBP officials any challenges or limitations to their ability to conduct cybersecurity outreach and education. As part of this review, we evaluated the office’s efforts by comparing information on its activities with Standards for Internal Control in the Federal Government. By reviewing agency websites, interviewing agency officials, and searching literature on cybersecurity resources, we determined that there was no central repository of federal cybersecurity resources that could be leveraged by the DOD OSBP to share with defense small businesses. In the absence of such a central repository, we reviewed documentation and interviewed officials from the following organizations with cybersecurity expertise in order to identify examples of existing cybersecurity outreach and education programs potentially available to defense small businesses that could be leveraged by the DOD OSBP: DOD Chief Information Officer, Defense Security Service, Defense Information Systems Agency, Department of Homeland Security National Protection and Programs Directorate, Federal Bureau of Investigation Cyber Division, National Institute of Standards and Technology, U.S. Small Business Administration, Federal Communications Commission, and the National Cyber Security Alliance.
We limited the scope of our research to cybersecurity outreach and education programs that were managed or funded by federal agencies. To confirm that these resources were accessible to defense small businesses, we visited the websites where these resources were publicly available during May 2015 or interviewed agency officials. To validate that the resources were relevant to cybersecurity, we reviewed information on the resource websites to confirm that each resource contained some level of cybersecurity information. We may not have identified all of the resources available or interviewed all knowledgeable parties, and we did not assess the quality of the selected resources. To describe the approximate size of DOD’s current small business community, we aggregated obligations data for DOD’s prime contractors coded as small in the Federal Procurement Data System-Next Generation database for fiscal year 2014. This database does not collect data on subcontractors to defense businesses, so the reported data likely underestimate the size of the DOD small business community. We compared this data to the U.S. Small Business Administration Fiscal Year 2014 Small Business Goaling Report and found the data to be sufficiently reliable to provide an overview of DOD’s spending on prime contracts with small businesses. We define the terms “cybersecurity” and “threat” in the introduction of the report using definitions from the National Institute of Standards and Technology and we define “small business” in the introduction of the report using DOD’s methodology. We conducted this performance audit from February 2015 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, GAO staff who made significant contributions to this report include Tommy Baril (Assistant Director), Tracy Barnes, David Beardwood, Kevin Copping, and Patricia Farrell Donahue. Information Security: Cyber Threats and Data Breaches Illustrate Need for Stronger Controls across Federal Agencies. GAO-15-758T. Washington, D.C.: July 8, 2015. Insider Threats: DOD Should Strengthen Management and Guidance to Protect Classified Information and Systems. GAO-15-544. Washington, D.C.: June 2, 2015. Cybersecurity: Actions Needed to Address Challenges Facing Federal Systems. GAO-15-573T. Washington, D.C.: April 22, 2015. Information Security: Agencies Need to Improve Oversight of Contractor Controls. GAO-14-612. Washington, D.C.: August 8, 2014. Information Security: Agencies Need to Improve Cyber Incident Response Practices. GAO-14-354. Washington, D.C.: April 30, 2014. Information Security: Federal Agencies Need to Enhance Responses to Data Breaches. GAO-14-487T. Washington, D.C.: April 2, 2014. Federal Information Security: Mixed Progress in Implementing Program Components; Improved Metrics Needed to Measure Effectiveness. GAO-13-776. Washington, D.C.: September 26, 2013. Government Contracting: Federal Efforts to Assist Small Minority Owned Businesses. GAO-12-873. Washington, D.C.: September 28, 2012. Defense Cyber Efforts: Management Improvements Needed to Enhance Programs Protecting the Defense Industrial Base from Cyber Threats. GAO-12-762SU. Washington, D.C.: August 3, 2012. This report is restricted to official use only and is not publicly available. Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012. 
IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012.
Small businesses, including those that conduct business with DOD, are vulnerable to cyber threats and may have fewer resources, such as robust cybersecurity systems, than larger businesses to counter cyber threats. The Joint Explanatory Statement accompanying the National Defense Authorization Act for Fiscal Year 2015 included a provision that GAO assess DOD OSBP's outreach and education efforts to small businesses on cyber threats. This report addresses the extent to which DOD OSBP has integrated cybersecurity into its outreach and education efforts to defense small businesses. DOD OSBP's mission includes providing small business policy advice to the Office of the Secretary of Defense, and policy oversight to DOD military department and component small business offices. To conduct this review, GAO analyzed documentation and interviewed officials from DOD OSBP about its cybersecurity outreach and education efforts. GAO also analyzed documentation and interviewed officials from nine organizations selected for their cybersecurity expertise to identify examples of cybersecurity outreach and education programs potentially available to defense small businesses. The Department of Defense (DOD) Office of Small Business Programs (OSBP) has explored some options, such as online training videos, to integrate cybersecurity into its existing efforts; however, as of July 2015, the office had not identified and disseminated cybersecurity resources in its outreach and education efforts to defense small businesses. While DOD OSBP is not required to educate small businesses on cybersecurity, DOD OSBP officials acknowledged that cybersecurity is an important and timely issue for small businesses—and therefore the office is considering incorporating cybersecurity into its existing outreach and education efforts. During the review, GAO identified 15 existing federal cybersecurity resources that DOD OSBP could disseminate to defense small businesses. 
[Table listing the 15 resources omitted. Source: GAO analysis of information from listed agencies. | GAO-15-777]

While DOD OSBP officials recognized the importance of identifying and disseminating cybersecurity resources through outreach and education efforts to small businesses, they identified factors that had limited their progress in doing so. Specifically, they were not aware of existing cybersecurity resources, they had leadership turnover in the office, and the office was focused on developing a training curriculum for professionals who work with small businesses. While GAO recognizes that these factors could affect progress, federal government internal controls state that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders who may have a significant impact on the agency's achieving its goals. DOD OSBP officials agreed that identifying and disseminating information about existing cybersecurity resources to defense small businesses could help small businesses be more aware of cybersecurity practices and cyber threats. In addition, by identifying and disseminating this information, DOD OSBP could help small businesses to protect their networks, thereby supporting the 2015 DOD Cyber Strategy goals of working with the private sector to help secure defense industrial base trade data and build layered cyber defenses. GAO recommends that DOD identify and disseminate cybersecurity resources to defense small businesses. DOD concurred with the recommendation and agreed to implement training events and education programs.
Since the late 1970s, China has introduced a variety of market reforms to liberalize its centrally planned economy. Today, China is much more developed, open, and market oriented, such that now almost all sectors of its economy have elements of both free markets and state planning. However, China’s trade regime still lacks transparency in important respects, and China intervenes in its economy in ways that can distort trade. For example, China restricts its imports by applying high tariffs to specific sectors, using import quotas, requiring import licenses, and imposing other import barriers, and it promotes and supports its exports. In combination with macroeconomic forces, these trade practices have fostered a Chinese balance of trade surplus with the world and a rapid buildup in foreign reserves. In addition to these factors, the U.S. trade deficit with China has recently grown because U.S. demand for goods from China has grown more rapidly than Chinese demand for U.S. goods. Over the past 4 years this bilateral U.S. trade deficit has risen from $30 billion to $50 billion. Top U.S. exports to China tend to consist of high-technology goods, such as aircraft, while the top U.S. imports from China include many low-technology products such as toys and apparel. U.S. imports from China also include products such as electrical machinery. The United States has negotiated with China to open its markets through numerous bilateral trade agreements, including a 1979 agreement that approved reciprocal MFN status between the two countries (for more information on U.S.-China bilateral trade agreements, see app. I). Since 1986, China has been negotiating to join the WTO and its predecessor, the General Agreement on Tariffs and Trade (GATT). China will have to make significant changes to its economy to be able to meet WTO commitments (for a brief discussion of China’s economy, see app. II).
Existing WTO members and countries that agree to join the WTO must abide by a set of rules and obligations that promote trade and increase transparency and fairness in the world trading system. Nondiscrimination toward other WTO members is a fundamental principle in the WTO agreements and is embodied in the granting of MFN status and providing national treatment. Generally, WTO members are obligated under the MFN principle to grant each other trade privileges as favorable as they give to any other foreign country. National treatment requires that they treat other members’ products no less favorably than they treat their own, once foreign goods have crossed their borders. China, like other WTO members, will also have to commit to reduce tariffs for industrial and agricultural products and to follow rules designed to limit the use of trade-distorting nontariff barriers (such as subsidies and import licensing requirements). Additional WTO rules cover financial and other services, trade-related investment measures, market access, and trade-related intellectual property rights. WTO members have access to dispute settlement procedures designed to help them more quickly address other members’ trade practices that appear to violate WTO rules. China has sought to join the WTO as a developing country, which would allow it to benefit from longer transition periods given these countries to implement WTO obligations. However, the United States and other countries have maintained that China should be treated as a developed country because of its size and status as a major world exporter. While there is debate about the true size and growth rate of China’s economy, there is no doubt that it is very large and has grown rapidly. The Organization for Economic Cooperation and Development (OECD) estimates that China’s gross domestic product (GDP) was the second largest in the world in 1997.
Despite its size, however, China’s economy still is considered “developing” by World Bank estimates. According to U.S. and foreign government officials, negotiations on transition periods and other special treatment for China will be considered on a case-by-case basis since China does not fit neatly into either of these categories of development. Overall, a major objective of the administration has been to negotiate a WTO accession agreement that is “commercially meaningful to U.S. business.” China is currently negotiating with WTO members, including the United States, to join, that is, to “accede” to, the WTO. After joining, China will be bound by the commitments it makes both in the accession negotiations and in the underlying WTO agreements. A successful accession requires the applicant to make the necessary concessions to meet the commercial and trade requirements of the WTO agreements. Thus, the outcome of this process is, to some degree, already determined by the existing agreements, in contrast to traditional trade negotiations; the primary issue for debate is agreeing to what measures a country like China needs to take to assure WTO members that it can meet the requirements. Also, the applicant must negotiate the levels at which it will bind its tariffs with WTO members. Any special provisions granted to the applicant are counterbalanced by greater obligations that the applicant must fulfill. For instance, although members might allow the applicant time to phase in tariff reductions, the applicant might be required to meet additional reporting and transparency commitments during the phase-in process. The accession process begins when the applicant submits a letter of application to the WTO Director-General. China began this process in 1986 when it applied to the GATT, and renewed its application in 1995 upon the creation of the WTO. 
This process, diagrammed in figure 1, consists of four phases: (1) fact-finding, (2) negotiation, (3) WTO decision, and (4) implementation. China is currently in the negotiation phase of the accession process. Fact-finding: As figure 1 illustrates, in the first phase the WTO working party, assisted by the Secretariat, collects and synthesizes information on the applicant’s trade regime. The applicant submits a detailed outline of its trade policies and practices and answers questions until the working party has sufficient information to begin negotiations. Negotiation: Figure 1 shows that the second phase of the process follows a two-track approach, involving both bilateral and multilateral negotiations. On a bilateral basis, each working party member negotiates with the applicant on its specific commitments on goods and services under the WTO agreements. The applicant submits an overall market access offer as the starting point for the negotiations, detailing how the country will lower barriers to trade. Although these negotiations are conducted bilaterally, any agreement reached between two countries will apply to all WTO members, as the principle of MFN requires. In the multilateral negotiations, the working party and the applicant negotiate terms for how the applicant will adhere to WTO’s principles and technical guidelines, so that the applicant will meet the normal obligations and responsibilities of membership. For example, a country might be asked where and when it would publish new laws, in order to comply with a WTO transparency requirement. The negotiation phase results in four documents, which detail the results of the negotiations and make up the applicant’s final accession package: (1) The Consolidated Schedules: These detail the applicant’s specific market access commitments under various WTO agreements, primarily covering individual tariff lines for goods and services.
They are annexed to the protocol as an integral part of the agreement. (2) The Protocol: This is usually a brief document containing the terms of accession and affirming the applicant’s adherence to WTO guidelines and principles. (3) The Working Party Report: This provides a narrative on the results of the negotiations. Frequently, the report includes specific commitments made by the applicant regarding how it will meet WTO requirements. Commitments detailed in either the report or the protocol carry the same legal weight for the applicant, according to WTO and USTR officials. (4) The Draft Decision: Written by the working party, this document affirms the working party’s consensus decision on the applicant’s bid for accession. After the working party members conclude all the negotiations and reach consensus on language detailing the terms and conditions for the applicant’s membership, they will forward the package to the General Council. WTO Decision: The third phase in figure 1 is the formal decision process, in which the General Council (comprised of all WTO members) approves (or rejects) the terms and conditions of the applicant’s package. Traditionally, the General Council reaches decisions by consensus. However, if consensus cannot be reached, the draft decision can be approved by a two-thirds majority. Any country that decides to forgo normal WTO obligations and benefits (“non-application”), including MFN, must notify the Council before the Council approves the accession package. Implementation: Finally, the last phase in figure 1 is implementation of the applicant’s WTO commitments. The applicant’s WTO obligations enter into force 30 days after the General Council’s approval and the applicant subsequently files its acceptance of membership. The accession package is part of the applicant’s WTO agreement, and the acceding country is equally bound by the provisions of the WTO agreements and the commitments in the accession package. 
In some cases, the applicant’s parliament or other legislative body must pass legislation to allow for accession before the applicant submits its acceptance. Applicants must also make the necessary internal adjustments as required by the accession package before the 30-day period begins. The General Council approves the draft decision, and then the applicant becomes a member. The most recent countries to join—Ecuador, Mongolia, Bulgaria, and Panama—were required to eliminate or begin to phase out most trade practices incompatible with WTO rules immediately upon accession. At this point, my statement will discuss (1) USTR’s requirement to consult with Congress before a U.S. vote on China’s WTO membership, (2) presidential determinations on China’s state trading enterprises, (3) provisions in U.S. law affecting China’s MFN status, (4) the potential use of WTO’s non-application provision if China joins the WTO, and (5) implications for the United States if non-application is invoked. Under U.S. law, USTR is required to report to and consult with appropriate congressional committees before any WTO General Council vote on an applicant’s membership when a vote would either substantially affect U.S. rights or obligations under the WTO agreement or potentially entail a change in federal law. In view of China’s importance to U.S. foreign trade and the MFN issue described in our later comments, it is clear that this consultation requirement would apply to a vote on China’s membership in the WTO. Before China joins the WTO, another United States law requires the President to make certain determinations about China’s state trading enterprises. Specifically, the President must decide (1) whether China’s state trading enterprises account for a significant share either of China’s exports, or China’s goods that are subject to competition from goods imported into China; and (2) whether these enterprises adversely affect U.S. foreign trade or the U.S. economy. 
If both determinations are affirmative, the WTO agreement cannot apply between the United States and China until either China enters into an agreement that addresses the operations of state trading enterprises, or legislation is enacted approving application of the WTO agreements to China. A key legislative action Congress will face before China becomes a WTO member is whether to remove China from coverage under title IV of the Trade Act of 1974. Specifically, section 401 generally requires the President to deny MFN to products from a number of countries, including China. Section 402, better known as the “Jackson-Vanik Amendment,” permits a 1-year exception when the President determines that a country, such as China, substantially complies with certain freedom of emigration objectives. The President can recommend renewal of these waivers for successive 12-month periods if he determines that further extensions will substantially promote these objectives. These recommendations must be made 30 days before the end of the previous year’s waiver period, that is, by June 3. Congress has up to 60 days from the end of the waiver period to pass a joint resolution disapproving the waiver. If necessary, Congress has an additional 15 days to override any presidential veto of such a resolution. China first received a waiver in 1980, and U.S. presidents have renewed the waiver every year since 1981, most recently on June 3, 1998. Since the Jackson-Vanik amendment only allows a 1-year waiver of title IV restrictions and Congress can disapprove the waiver, the administration plans to ask Congress to enact legislation that would remove China from title IV’s coverage. The administration believes that temporary, that is, conditional, MFN under Jackson-Vanik conflicts with the WTO obligation to provide unconditional MFN to WTO members. In the past, Congress has passed legislation removing certain WTO/GATT members from title IV’s coverage and granting them permanent MFN.
For example, in 1996, Congress enacted legislation providing the President with discretionary authority to grant permanent MFN to Bulgaria, which the President did on September 27, 1996. This approach appears to increase the administration’s leverage to obtain final commitments. At least one bill currently pending in Congress, S. 737, takes the same approach for China. Other pending bills, such as S. 1303 and H.R. 1712, do not provide the President this kind of discretionary authority. Instead, they provide that on the day China becomes a WTO member, title IV shall no longer apply and China’s products shall receive MFN. If China becomes a WTO member and Congress has not passed legislation removing China from title IV’s coverage, the administration plans to invoke the “non-application clause” of article XIII of the WTO agreement. The “non-application clause” permits either a WTO member or an incoming member to refuse to apply WTO commitments to each other. In the past, the United States has invoked non-application when countries have joined the WTO (or GATT) and Congress had not repealed title IV of the Trade Act for the incoming member. Table 1 lists these instances. I would like to point out four important characteristics of the WTO non-application clause. A member (and, when appropriate, an incoming member): (1) must notify the WTO of its intent to invoke non-application before the new member’s terms of accession are approved by the General Council; (2) may invoke non-application and still vote to have the new member admitted to the WTO; the United States did this for Mongolia’s accession in 1997; (3) cannot invoke non-application selectively, because the clause covers all WTO obligations. 
For example, the United States cannot choose to withhold its WTO MFN obligation and then apply other WTO provisions to China such as dispute settlement procedures; and (4) may later rescind non-application, resulting in both parties applying all WTO rights and obligations to each other. For example, the United States did this for Romania and Hungary. If China joins the WTO and the United States invokes non-application, any MFN rights between the United States and China will come from the 1979 U.S.-China Bilateral Agreement on Trade Relations. Although neither we nor USTR have compared in detail the scope of MFN under the 1979 agreement and that provided in the WTO agreements, the coverage under the former does not appear to be as comprehensive. For example, the 1979 agreement does not establish clear MFN obligations for services and service suppliers, nor does it provide for compulsory dispute settlement procedures. For instance, if the United States believes that China has violated its WTO commitments, the United States would be unable to bring China to WTO’s dispute settlement body. An important consequence of the United States invoking WTO non-application is that if China becomes a member, it does not have to grant the United States all the trade commitments it makes to other WTO members, whether in the negotiated accession package or in the underlying WTO agreements. Because U.S. businesses compete with businesses from other WTO members for China’s markets, this could potentially put U.S. business interests at a considerable competitive disadvantage. For example, the United States may not benefit from Chinese concessions regarding services, such as the right to establish distribution channels in China. While the United States would continue to benefit from Chinese commitments made in bilateral agreements concluded with the United States, the commitments are not as extensive as those in the WTO agreements. 
In summary, the size of the Chinese economy and the extent of its reform efforts create challenges for negotiators and policymakers trying to integrate China into the WTO. As part of any congressional deliberation to remove China from coverage of title IV of the Trade Act, it will be important to evaluate China’s accession package, and the advantages and disadvantages of providing China permanent MFN. This would include determining if the accession package has met the administration’s objective of producing a “commercially meaningful” agreement. Congress will be evaluating an agreement that covers a wider array of issues than those of other new WTO members with MFN restrictions. As requested, we will be working with your staff to help evaluate this agreement when it is finalized. This concludes my statement for the record. Thank you for permitting me to provide you with this information. The framework for current U.S. trade relations with China is based upon the Agreement on Trade Relations that was signed on July 7, 1979. The agreement established reciprocal Most-Favored-Nation (MFN) status between the two countries and committed both parties to protect intellectual property. Since then, the United States has attempted to increase market access and reduce trade barriers and other trade distorting policies and practices by entering into numerous bilateral trade agreements with China (see table I.1). Nevertheless, China’s implementation of these agreements has been uneven, according to the U.S. Trade Representative (USTR). China still restricts imports, subsidizes Chinese exports, and maintains significant barriers to foreign business penetration, according to USTR. For example, the United States has entered into a series of agreements with China regarding China’s protection of intellectual property rights (IPR). 
Under the Memorandum of Understanding on the Protection of Intellectual Property Rights signed in 1992, China amended its patent law, issued copyright regulations, joined international copyright conventions, and enacted protection for trade secrets. However, U.S. officials subsequently determined that China did not establish an adequate and effective mechanism for IPR enforcement. As a result of a Special 301 investigation, the two parties signed an additional IPR agreement in 1995 in which China committed to (1) provide improved protection for copyrights, (2) strengthen border controls, (3) institute trademark law modernization, and (4) intensify a “Special Enforcement Period” aimed at cracking down on piracy. However, China’s continued insufficient implementation of the 1995 IPR agreement led the United States to threaten to impose sanctions in May 1996; the two parties avoided sanctions with the signing of an agreement in June 1996, which confirmed China’s most recent attempts to enforce the 1995 agreement. In addition, the United States and China have signed a series of bilateral trade agreements to improve the regulation and pricing of satellite launch services. In 1995, the United States and China renewed the Bilateral Agreement on International Trade in Commercial Space Launch Services for the period between 1995 and 2001. To further clarify the agreement’s provisions on low earth orbit (LEO) satellites, the two countries signed an annex containing specific LEO pricing guidelines in 1997. China is undergoing a historic transformation, as market reforms and integration with the world economy create growth and increased trade and investment. In 1997, the United States was China’s second largest export market after Hong Kong; China’s bilateral trade surplus with the United States has more than doubled since 1993, to about $50 billion in 1997. However, substantial trade and investment impediments remain. According to U.S. 
government and foreign officials, the size of the Chinese economy and the extent of its reform efforts create challenges for negotiators and policymakers trying to integrate China into the World Trade Organization (WTO). At the end of this appendix, we provide a table with some statistics on the Chinese economy. According to our review of the economic literature, for almost 30 years prior to 1979 China had a rural, developing economy with relatively few connections to the rest of the world. The Chinese Communists ruled over a centrally planned economy in which prices were set by the state. In late 1978, China’s leaders introduced market reforms into the agricultural sector, where 71 percent of China’s labor force worked. As agricultural yields increased, market reforms were introduced into other specific sectors and locales and then gradually expanded to other regions and sectors of the economy. Today, almost all sectors of the economy are a mix of market-oriented reforms and state planning. Experts generally credit these reforms and China’s high rates of investment and saving as principal reasons for China’s high growth rates during the last 20 years. They also credit these reforms for making China’s economy much more developed, open to international trade and investment, and market-oriented. China’s economy appears to be very large and growing quite quickly, although not as fast as indicated by the widely publicized Chinese official figures. According to official Chinese figures, in 1997 the gross domestic product (GDP) was $900 billion, per capita GDP was $724, and growth since 1986 averaged almost 10 percent per year. The most recent data from the Organization for Economic Cooperation and Development (OECD) suggest that China is the second-largest economy worldwide. In 1997, China was the world’s 10th largest exporter, 12th largest importer, and the largest trading nation that is not a WTO member. 
According to the economic literature, China has become increasingly open to international trade and investment since 1978, although it retains significant trade and investment barriers. China has been negotiating to join the WTO and its predecessor, the General Agreement on Tariffs and Trade (GATT), since 1986. Despite substantial reforms, China will have to make significant changes to its economy to meet WTO rules and obligations. Today, China still restricts imports, promotes and supports Chinese exports, and maintains significant barriers to foreign business penetration, according to USTR and Commerce Department reports. They report that China’s import restrictions include high tariffs for specific sectors and other taxes on imports; nontariff barriers such as import licenses, import quotas, limitations on which enterprises can import, and limitations on access to foreign exchange. They also report that China promotes exports by providing exporters with access to funds, freight services, and inputs on noncommercial terms. China also provides exporters with income tax reductions and imposes foreign exchange earning and export requirements on foreign corporations in China. According to these reports, significant barriers or impediments to foreign business operating in China include guiding foreign investment to certain sectors; protecting state-owned enterprises from competition by law, regulation, and/or custom; restricting the opening of branches of foreign banks, insurance companies, accounting, and law firms to selected companies and particular locales; and prohibiting representative offices of foreign firms from signing sales contracts or billing customers. They also report on problems from excessive bureaucracy and corruption in China, especially regarding government procurement practices. The corruption problem is confirmed by private studies; one ranked China as the fifth most corrupt among 54 countries surveyed in 1996. 
From 1980 until 1994, China’s trade and current account balance (its goods, services, income, and current transfers) were more often in deficit than surplus. Current account deficits generally were financed by capital inflows, particularly foreign direct investment, which grew relatively slowly until 1992, and by borrowing from the World Bank. The Chinese government’s holding of foreign exchange, the main component of its international reserves, fluctuated moderately. Beginning in 1993, China has had large capital inflows, due to substantial and growing levels of foreign direct investment. China’s goods surplus grew to $7.3 billion in 1994 from a $10.7 billion deficit in 1993. It then grew to $18 billion in 1995, and then jumped to $46.2 billion in 1997. Goods exports rose steadily from $102.6 billion in 1994 to $182.7 billion in 1997. China’s goods imports increased more slowly from $95.3 billion in 1994 to $136.5 billion in 1997. China’s goods and services balance grew steadily from $7.6 billion in 1994 to $17.6 billion in 1996, then rose to $40.5 billion in 1997 (see fig. II.1). At the beginning of 1994, China devalued its official exchange rate to that on the private market, and then rapidly gained foreign exchange under a managed float system at the rate of $22 billion to $35 billion per year. At the end of 1997, China’s foreign currency reserves were second only to Japan’s, at $139.9 billion. China’s surplus in goods trade with the United States has continued to increase for more than a decade. The bilateral surplus rose from $2.8 billion in 1987, to $29.5 billion in 1994 (the year China last devalued its currency), to $49.8 billion in 1997, according to U.S. Commerce Department figures. U.S. goods imports from China have rapidly grown from $6.3 billion in 1987, to $38.8 billion in 1994, to $62.6 billion in 1997. U.S. goods exports to China grew more slowly from $3.5 billion in 1987, to $9.3 billion in 1994, to $12.8 billion in 1997. By 1997, the U.S. 
goods deficit with China was second to that of Japan for the third straight year. In 1997, goods imports from China were 7.2 percent of all U.S. goods imports, with leading imports of electrical machinery and equipment; toys, games, and sports equipment; footwear; boilers and machinery; and clothing accessories and apparel, much of which tend to be labor intensive. In 1997, U.S. goods exports to China were 1.9 percent of all U.S. goods exports, with leading U.S. exports to China consisting of the Customs categories of nuclear reactors, boilers, and machinery; aircraft and spacecraft; electrical machinery and equipment; and fertilizers. In 1997, the United States was China’s second-largest goods export market after Hong Kong, and third-largest source of goods imports after Japan and Taiwan. (See table II.1.) Table II.1: Economic Data of China, 1987-97 (billions of current U.S. dollars, unless otherwise noted). [The table’s data columns are not legible in this copy.] F.A.S. = free-alongside-ship (a method of export and import valuation whereby the seller’s price includes charges for delivery of goods up to the port of departure). 
Pursuant to a congressional request, GAO discussed the relationship between China's most-favored nation (MFN) status and World Trade Organization (WTO) membership, focusing on the: (1) WTO accession process; and (2) legal framework affecting China's MFN status, its implications for WTO membership, and the role Congress plays in the process. GAO noted that: (1) China has the largest economy worldwide that is not covered by the WTO; (2) the WTO seeks to promote open and fair international trade through increased transparency, rules, and commitments to reduce barriers on foreign goods and services, and provide a binding system for resolving disputes; (3) China would like to join the WTO and is currently in the negotiation phase, which is the second of the four-stage process for becoming a member; (4) joining the WTO will require China to make substantial changes to its economy; (5) although Congress does not vote on China's WTO membership, the United States Trade Representative is required to consult with Congress before a WTO vote is taken; (6) the Administration plans to ask Congress to enact legislation to resolve a potential conflict between the conditional MFN afforded China under U.S. legislation and the unconditional MFN provided by the WTO agreements; (7) if China becomes a member and Congress has not enacted this legislation, the Administration intends to invoke a WTO provision that would permit the United States not to apply the WTO Agreements to China; (8) an important consequence of taking this exception is that China and the United States would not be obligated to provide each other all the WTO trade commitments that they would give to other WTO member states; and (9) in such a situation, U.S. business may not be able to benefit fully from the commitments China will make to open its markets to other WTO members.
The Department of Health and Human Services (HHS) spends about $1 billion a year on programs for improving access to health care for areas with shortages of primary care physicians and health care services. Many of these programs depend heavily upon systems to identify and designate specific areas and populations that are underserved. HHS has two such systems: The Health Professional Shortage Area (HPSA) system identifies underservice caused by a shortage of health professionals, and the Medically Underserved Area (MUA) system more broadly identifies areas and populations not receiving adequate health services for any reason, including provider shortages. About 88 percent of all U.S. counties contain HPSAs, MUAs, or both (see fig. 1.1). A primary care HPSA is an area designated by HHS as having a critical shortage of primary health care providers. These include areas with no providers or areas with an insufficient number of providers to serve the population living there. Designation as a HPSA is generally based on the following: the specified geographic area must be rational for the delivery of health services; the area must have a population-to-provider ratio of at least 3,500 to 1 (or 3,000 to 1 under certain circumstances); and adjoining areas must have provider resources that are overused, more than 30 minutes travel time away, or otherwise inaccessible. In 1994, there were 2,538 primary care HPSAs reporting a need for 5,133.8 full-time-equivalent physicians. About 45 million people reside in these HPSAs. HHS designates primary care HPSAs in one of three ways: a general shortage of providers within a geographic area, such as an entire county or group of census tracts; a shortage of providers willing to treat a specific population group (such as poor people or migrant farmworkers) within a defined area; or a shortage of providers for a public or nonprofit facility such as a prison or hospital. 
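The ratio criterion just described lends itself to a simple illustration. The sketch below is ours, not HHS code: the 3,500-to-1 and 3,000-to-1 thresholds come from the criteria above, while the function name and the treatment of areas with no providers are assumptions made for illustration (the full methodology also weighs factors such as contiguous-area accessibility).

```python
# Illustrative sketch of the HPSA population-to-provider ratio test.
# The 3,500:1 and 3,000:1 thresholds are from the HPSA criteria
# described above; the function itself is hypothetical.

def meets_hpsa_ratio(population: int,
                     fte_primary_care_physicians: float,
                     special_circumstances: bool = False) -> bool:
    """Return True if the area's ratio meets the shortage threshold."""
    if fte_primary_care_physicians == 0:
        return True  # an area with no providers qualifies outright
    threshold = 3000 if special_circumstances else 3500
    return population / fte_primary_care_physicians >= threshold

# A county of 21,000 people with 5 FTE physicians has a 4,200:1 ratio
print(meets_hpsa_ratio(21000, 5))   # True
# With 7 FTE physicians the ratio falls to exactly 3,000:1
print(meets_hpsa_ratio(21000, 7))                              # False
print(meets_hpsa_ratio(21000, 7, special_circumstances=True))  # True
```

As chapter 2 of this report discusses, the outcome of such a test depends heavily on which providers are counted in the denominator.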
As shown in figure 1.2, most primary care HPSAs are geographically designated. Of the geographic HPSAs, 845 comprised an entire county and 1,107 comprised other types of self-defined geographic service areas. An MUA is an area designated by HHS as having a shortage of health care services. One major difference between an MUA and a HPSA is that underservice in a HPSA is measured primarily as a shortage of health care providers, while underservice in MUAs is measured using other factors as well. Qualification as an MUA is based on four factors of health service need: primary care physician-to-population ratio, infant mortality rate, percentage of the population with incomes below the poverty level, and percentage of the population aged 65 and older. As of June 1995, 1,455 U.S. counties were designated in their entirety as MUAs, and an additional 1,037 counties had at least 1 MUA designated within them. According to HHS officials operating the system, there were about 3,100 MUAs in all. Like HPSAs, MUAs can be designated for all people within a geographic area or can be limited to a particular group of underserved people within the area. Most MUAs have been designated for geographic areas rather than population groups (exact figures are not available). Unlike the HPSA system, however, the MUA system does not allow individual facilities to be designated as underserved. The current HPSA system was developed in 1978 as a means to designate areas for placement of National Health Service Corps (NHSC) providers. NHSC awards students scholarships or loan repayment for medical education and training in exchange for service in areas with critical physician shortages. The types of primary care health professionals that could participate in the program included physicians, nurse practitioners, physician assistants, and certified nurse-midwives. In 1994, NHSC reported having 1,147 physicians and 482 physician assistants, nurse practitioners, and nurse-midwives working in HPSAs. 
Besides NHSC, two other programs now require that a location be designated as a HPSA to be eligible for participation. Like NHSC, the Community Scholarship Program addresses provider shortages by awarding grants to HPSAs for local scholarships in the health professions. The Medicare Incentive Payment program attempts to ensure that physicians treat Medicare patients by paying a 10-percent bonus on all Medicare billings generated from a practice located in a geographic HPSA. The MUA system was developed about the same time as the HPSA system but independently of it. Authorized by the Health Maintenance Organization Act of 1973, the MUA designation has been applied primarily in identifying areas eligible to participate in the Community Health Center program. This program awards grants for the operation of community health centers and migrant health centers in qualifying areas. In fiscal year 1994, HHS provided support for about 627 grantees providing services at more than 1,600 sites. Centers that serve a designated MUA area or population also are eligible for cost-based reimbursement under the Medicare and Medicaid programs. Another 100 health centers (the so-called “look-alikes”) meet all requirements of the Community Health Center program and receive Medicare and Medicaid cost-based reimbursement, but do not receive Community Health Center grant support. Nearly 30 other programs use the HPSA or MUA systems to some degree, though none rely on a HPSA or MUA designation alone to decide who can apply for federal assistance. For example, nonphysician providers may qualify for cost-based reimbursement under the Rural Health Clinic program if they are located in a state-defined underserved area, HPSA, or MUA. The remainder of programs in this category are health professions education and training programs. Programs under titles VII and VIII of the Public Health Service Act give funding preference to schools that place graduates in medically underserved communities. 
Other programs, using various designations of underservice, award scholarships or grants for obligated service or training. Together, the various programs that use the HPSA and MUA systems accounted for more than $1 billion in funding and expenditures in fiscal year 1994. Table 1.1 summarizes the various programs and their funding levels. Any person, agency, or community group may request designation of an area, population group, or facility as a HPSA. Copies of each new request are received and reviewed by the Bureau of Primary Health Care’s Division of Shortage Designation (DSD). State representatives from the health department, medical society, and the governor’s office are also asked to review and comment within 30 days. DSD staff then check the application data against national and state sources. They also resolve conflicts among applicants, commenters, and data sources to the extent possible. HHS must by law review annually each designated HPSA to decide if it is still experiencing a shortage of health care providers. DSD does this by giving a list of HPSAs to each state and asking the state to update the information. In addition, Bureau policy requires HPSAs to provide data to DSD every 3 years to support their continued need for the designation. HPSAs not providing these updates are to be proposed for dedesignation in the Federal Register. MUAs are designated on a much different basis. The Department of Health, Education, and Welfare designated the original lists in 1975 and 1976 by applying the four criteria (population-to-physician ratio, infant mortality rate, poverty rate, and percentage of population that is elderly) to all U.S. counties, minor civil divisions, and census tracts. All areas that ranked below the county median combined score for the four criteria were designated as MUAs. MUA designations have been added since then on the basis of newer data and the same cutoff score. 
Since 1986, HHS has also been able to designate new MUAs under an exception process if requested to do so by a state’s governor on the basis of unusual local conditions. MUAs also differ from HPSAs in that there is no requirement to update the designations regularly. HHS officials managing the MUA and HPSA systems told us that DSD no longer reviews the list of MUAs to decide whether any should be dedesignated. HHS has an effort under way to combine and revise the HPSA and MUA systems. According to HHS officials, this action is being taken to reduce redundancies and differences in the application and administrative processes of the two systems. While HHS officials told us that no changes would be made before 1996, a draft working document says that HHS’ goal is to replace the existing systems with one that is consistent for all primary care programs, has simpler data-gathering requirements, uses relevant indicators of need, and will not disrupt services in existing areas. We reviewed the HPSA system as part of the broad federal effort to improve access to care, and as a follow-up to a congressionally mandated review of the role of federal health education and training programs in achieving this purpose. While separate HPSA systems identify and track provider shortages for primary care, dental care, and mental health care, our review of the HPSA system focused on primary care HPSAs. We chose this focus because the HPSA primary care system is by far the most heavily used for identifying areas eligible for federal funds. We included the MUA system in our review because of current HHS efforts to combine it with the HPSA system. 
To review the extent to which the HPSA and MUA systems identify areas with primary care shortages, we reviewed past evaluations of the criteria and methodology for designating primary care HPSAs and MUAs and discussed the results with responsible HHS officials, identified the number of primary care providers in HPSAs and compared it to the number reported by the HPSA system, selected a random sample of primary care HPSA applications and reviewed whether designations were appropriate and accurately reflected in the HPSA database, and compared how often the HPSA and MUA data were updated with requirements in the law and HHS policy. To determine the extent that the HPSA and MUA systems provide information needed to target federal funding appropriate to meet the needs of underserved populations, we analyzed the types of designations requested by communities for their underserved populations and determined the extent to which the designations identify who is underserved in each HPSA and the reasons for underservice. To determine whether proposed changes to the HPSA and MUA systems would improve them, we discussed the purpose of the proposed changes with HHS representatives operating the HPSA and MUA systems. We also asked federal, state, and program participants how much the proposed changes would help them identify underserved populations and provide assistance appropriate to meet their needs. Further details of our scope and methodology are presented in appendix I. We did our work from November 1994 through June 1995 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from HHS, but we did not receive them in time for publication. We did discuss an earlier draft of the report with HHS management officials responsible for the HPSA and MUA systems. They made observations about our analysis and findings, and we incorporated their comments in the report where appropriate. 
Neither the HPSA nor the MUA system reliably measures the extent of shortages in primary health care, providing little assistance to federal programs in directing the $1 billion spent each year for alleviating underservice. We identified two main reasons for the lack of reliability. First, both systems have methodological problems, such as omitting important categories of primary care providers from the calculations. In the HPSA system, for example, these omissions may be overstating the need for additional physicians in shortage areas by 50 percent or more. Second, both systems rely on data that are often inaccurate or outdated. On the basis of these data problems alone, we estimate that about 20 percent of HPSA designations are in error or lack adequate supporting documentation. Although we did not develop such an estimate for MUAs, many MUA designations are very old and may be invalid. For example, about half of the U.S. counties designated as MUAs would no longer qualify for designation if updated using 1990 data. Both systems rely on a population-to-physician ratio in establishing the need for additional primary care providers. The HPSA system bases its shortage determinations on a population-to-primary care physician ratio of 3,500 to 1, which identified a need for 5,134 physicians in shortage areas in 1994. The MUA system, which uses a population-to-primary care physician ratio as one of four factors in its underservice score, is less dependent on the ratio. However, in making their calculations, both systems exclude two categories of primary care physicians already providing services in the shortage areas: NHSC and federal physicians. The systems exclude federally salaried NHSC providers and privately salaried providers who are fulfilling an NHSC service obligation in exchange for health professions scholarships or loan repayment. There were 1,147 such physicians in 1994—the equivalent of about 22 percent of the shortage identified in HPSAs. 
Other providers employed by federal entities such as the Indian Health Service and the Bureau of Prisons are also excluded; there is no centralized accounting for the total number of federally salaried physicians. The systems likewise exclude U.S.-trained foreign physicians with J-1 visa waivers. Such waivers allow noncitizens who complete their residency training in the United States to remain and practice if they are needed in underserved areas. While the total number of such physicians practicing in underserved areas is unknown, their numbers are substantial. For example, the Appalachian Regional Commission and the Department of Agriculture approved at least 538 J-1 visa waivers for foreign physicians willing to practice in shortage areas in 1993 and 1994 alone. This is equivalent to about 10 percent of the reported shortage of providers in HPSAs. Both systems also exclude several other categories of providers that deliver primary care services. The first is nonphysician providers, including nurse practitioners, physician assistants, and nurse-midwives. Comprehensive data on the number of such providers in HPSAs and MUAs are not available. However, NHSC reported having 485 physician assistants, nurse practitioners, and nurse-midwives of its own practicing in HPSAs in 1994, and data provided by health professional associations in 1993 showed at least 369 nonfederal physician assistants and nurse-midwives practicing in HPSAs. In total, these two groups may be the equivalent of between 8 and 17 percent of the shortage reported by the HPSA system. The second is specialist physicians who provide primary care. Other research shows that specialists such as general surgeons may provide a substantial amount of primary care in areas where the population base is insufficient to support a full-time specialty practice. Further, a 1991 study illustrated that the availability of a full range of specialists to rural communities almost doubles the number of people needed to support a family physician practice, from 2,000 to 3,990.
In urban areas, the oversupply of physicians in various specialties is reportedly causing them to provide an increasing amount of primary care services. Current data on the extent to which specialist physicians are providing primary care in HPSAs and MUAs are not available. However, our review of a sample of 23 single-county HPSAs showed that most had specialist physicians in addition to primary care physicians providing patient care, averaging 1 physician for every 1,968 people. HHS has various reasons for excluding these categories of health care providers. These reasons were published in the Federal Register in 1980 and 1983 to explain or clarify the HPSA criteria and were confirmed by more recent discussions with HHS officials. HHS’ rationale for excluding NHSC and federal providers is that they probably would not serve in the HPSA without the service obligation or federal employment, and counting them could cause a community to lose its HPSA status. For similar reasons, HHS regulations exclude foreign physicians unless they are permanent residents. Although HHS originally planned to count nonphysician providers as 0.5 of a physician full-time-equivalent in the HPSA population-to-provider ratio, it excluded them from the final methodology because their scope of practice varies by state, and communities using them for care may be penalized in trying to establish a rural health clinic. HHS does not count specialist physicians because HHS believes the law allows it to count only primary care physicians. While we understand HHS’ rationale for excluding these providers from the designation calculations, we do not agree with it for several reasons. Omitting these providers has such a substantial cumulative effect that the true extent of primary care available in underserved areas cannot be determined if they are excluded.
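The cumulative effect of these exclusions can be tallied directly from the counts cited above, measured against the 5,134 physicians the HPSA system identified as needed in 1994. This is a simple arithmetic check using only the figures reported in the text:

```python
# Tally of excluded provider categories against the reported HPSA shortage.
# Counts are those cited in the text: 1,147 NHSC physicians, 538 J-1 visa
# waiver physicians, and 485 NHSC plus 369 nonfederal physician assistants
# and nurse-midwives.

reported_shortage = 5134  # physicians needed in HPSAs, 1994

excluded = {
    "NHSC physicians": 1147,
    "J-1 visa waiver physicians": 538,
    "PAs and nurse-midwives": 485 + 369,
}

for category, count in excluded.items():
    print(f"{category}: {count} ({count / reported_shortage:.0%} of shortage)")

total = sum(excluded.values())
print(f"Combined: {total} ({total / reported_shortage:.0%} of shortage)")
```

The combined total, 2,539 providers, is just under half of the reported shortage, consistent with a reduction of up to 50 percent in the reported need for additional providers.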
If the 1,147 NHSC physicians, 538 J-1 visa waiver physicians, and 854 physician assistants and nurse-midwives mentioned earlier were included in the HPSA calculations, the reported need for additional providers would be reduced by up to 50 percent. If more complete data were available for all provider categories, this percentage could be substantially higher. Excluding primary care providers from the system also makes it difficult for federal and state agencies to coordinate their efforts in addressing underservice. For example, in 1994, NHSC had 19 providers in West Virginia in response to the HPSA system’s reported shortage of 54 primary care physicians there. However, this need for 54 physicians did not reflect the presence of the NHSC providers or that other federal agencies had also assisted in placing 97 foreign physicians in the state’s HPSAs in 1993 and 1994 alone. Finally, understating the number of primary care providers severely limits the usefulness of the system as a screen to identify which communities should be eligible for additional program benefits. For example, NHSC records show that 15 percent of the 576 providers placed in HPSAs in 1994 were in excess of the number needed for dedesignation, while other HPSA vacancies went unfilled. The excess numbers of NHSC providers placed in these HPSAs ranged from one to six. Both systems have other problems with their methodologies that make it difficult to identify and measure underservice. For the HPSA system, an ongoing concern is that it does not assess the extent that existing primary care resources in the community are being used. For the MUA system, although a number of methodological weaknesses were reported in the past, the methodology has not been revised. The HPSA methodology has no mechanism for measuring the extent that existing primary care resources are insufficient to meet the demand for care. 
HHS officials said that data for this purpose are unavailable from current sources and would exclude the health needs of people who cannot afford to seek care from a provider. However, past studies of the HPSA criteria and methodology have pointed out that such a mechanism is needed because many factors influence the extent to which communities use primary care resources at a rate above or below the 3,500-to-1 ratio. For example, one county in our sample was designated as needing 1 additional physician even though the HPSA application showed that 42 of the 80 physicians surveyed within that HPSA were willing to take new patients. Of those willing to accept new patients, over half reported no patient waiting time for appointments, another indication of additional capacity. Assessing the extent to which existing primary care services are insufficient to meet the demand for care would also provide a better indication of whether a provider shortage exists in these areas or whether there are other barriers to accessing existing primary care resources. In the previous example, only 15 of the 42 physicians with additional capacity would accept all new patients; the remainder would accept only patients with certain types of health insurance. In HPSAs such as this one, which appear to have barriers to accessing underutilized capacity, it may be more appropriate to give incentives for the expansion of services rather than to add more providers. For example, states are increasingly placing Medicaid patients in managed care in an effort to make these underserved populations more attractive to the existing physician workforce. The MUA methodology has a number of flaws that limit its ability to accurately identify the geographic areas and populations with the greatest shortages of health care services. The methodology, an index of medical underservice, was developed within a short time frame using a process that involved limited empirical testing.
Because the developers could not agree on a definition of “medical underservice,” a mathematical model was developed to predict experts’ assessments of service shortages. Subsequent evaluations of the model, however, found little significant difference in the availability of health services between areas that were designated as MUAs and areas that were not. These evaluations, which pointed out other methodological limitations as well, are summarized in appendix II. The MUA designation methodology has remained virtually unchanged since its development, despite improvements in U.S. health status and resources. The methodology uses the same four criteria to determine the MUA index score, and the cutoff score for MUA status (set at or below the median score for all U.S. counties in 1975) remains the same. The only changes to the methodology were made in 1981, when the weights for the infant mortality rate and the population-to-physician ratio were adjusted slightly. In 1986, the law was amended to allow the Secretary of HHS to designate MUAs that score above the cutoff, if the state’s governor recommends designation based on “unusual local conditions which are a barrier to access or availability of personal health services.” Since 1986, about 100 new medically underserved areas and populations have been designated on the basis of this exception process. Numerous problems exist with the accuracy and timeliness of the data used to obtain and maintain HPSA and MUA designations. Many HPSA applications do not contain the data necessary to support the designation, and data in the HPSA application often differ from those in the HPSA database. The reliability of the HPSA system is also compromised by data that have not been updated as required by law and HHS policy. For the MUA system, because it has no requirements for periodic review and updating, little has been done to keep the system’s information current. 
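The index calculation described above reduces four factors to a single score that is compared against a fixed cutoff. The sketch below illustrates only the structure of such a scheme; the scoring functions, point values, and cutoff are simplified placeholders, since the actual Index of Medical Underservice converts each factor to a weighted value using published HHS lookup tables rather than linear scaling:

```python
# Illustrative sketch of a four-factor underservice index (structure only).
# The linear scoring, point maxima, and cutoff below are placeholders, not
# the published IMU tables. Low scores indicate greater underservice.

def score_component(value, worst, best, max_points):
    """Scale a factor to [0, max_points]; the 'worst' value earns 0 points."""
    frac = (value - worst) / (best - worst)
    return max(0.0, min(1.0, frac)) * max_points

def imu_score(physicians_per_1000, infant_deaths_per_1000,
              pct_below_poverty, pct_age_65_plus):
    """Combine the four factors into one 0-100 score (placeholder weights)."""
    return (score_component(physicians_per_1000, 0.0, 2.0, 28.7)
            + score_component(infant_deaths_per_1000, 50.0, 0.0, 26.0)
            + score_component(pct_below_poverty, 50.0, 0.0, 25.1)
            + score_component(pct_age_65_plus, 30.0, 0.0, 20.2))

CUTOFF = 62.0  # placeholder; scores at or below the cutoff would qualify

score = imu_score(physicians_per_1000=0.4, infant_deaths_per_1000=12.0,
                  pct_below_poverty=25.0, pct_age_65_plus=18.0)
print(f"index score {score:.1f}; qualifies: {score <= CUTOFF}")
```

Collapsing four factors into one number in this way also illustrates why the resulting scores are hard to interpret: very different combinations of poverty, infant mortality, elderly population, and physician supply can produce the same index value.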
We estimate that about 380 of the 1,952 geographic HPSA designations were made in error or without adequate supporting documentation. HPSAs qualify for designation on the basis of three main factors: a population-to-primary care physician ratio that equals or exceeds 3,500 to 1, an insufficient number of providers in adjacent areas to provide care, and evidence of being a rational service area. Our review of a random sample of 46 geographic HPSA applications found 17 instances in which data in the file did not support one or more of these three factors. Examples follow: Substantial differences existed in the number of physicians reported by some communities and the number obtained by HHS’s Division of Shortage Designation (DSD) from other sources. DSD verifies the number of physicians reported by applicants against data available from the professional associations. Any discrepancies and their subsequent resolution are required to be documented in the HPSA file. However, unresolved physician counts in some HPSA applications varied by as much as 50 percent. These differences were enough to preclude HPSA designation in each case. For example, one HPSA needing fewer than 8.7 physicians to qualify for designation reported having 7 physicians, while the American Medical Association directory showed 14 physicians practicing in the area. Some HPSA applications did not include data supporting that the number of providers in nearby communities was insufficient to provide care. DSD uses a population-to-physician ratio of 2,000 to 1 in contiguous areas within 30 minutes of travel time from the HPSA population center for this purpose. However, some HPSA files did not show that resources in all contiguous areas were considered. For example, one applicant reported the number of physicians in a town over 30 minutes away, but did not report the number of physicians practicing in a town within 30 minutes travel time. 
In some HPSA applications, there was no documentation to support the presence of rational service areas for primary care delivery. For example, one single-county HPSA was so large that distances between its population centers exceeded the 30-minute criterion for travel time to care. In another example, two separate service areas asked to be combined and enlarged to maintain the designation for one service area that no longer met the HPSA criteria. DSD was able to provide additional information to support 8 of the 17 designations we questioned. DSD officials said it was unclear why five of the remaining nine HPSAs had been designated and that they would follow up to resolve the discrepancies and propose dedesignations as necessary. In the other four cases, they provided additional information that we considered but still found to be insufficient to support the designation. We attempted to project the financial impact of federal funding provided to these areas, but because of program data limitations were unable to do so with an acceptable degree of statistical confidence. Another problem is that once the HPSA application data are verified, there is still no assurance that they will be entered or accurately reflected in the HPSA database. Of the 46 HPSA applications in our sample, 14 had discrepancies between the verified data and the data existing in the database for population, physicians, poverty rates, or differences in travel distances or times to the nearest source of care. Although these differences did not seem great enough to cause any of the 14 to lose their designation as an HPSA, some may be great enough to affect eligibility for placement of NHSC scholars or loan repayors. We were unable to determine the effect these data entry errors or omissions had on the NHSC program because, as explained in the next section, HHS sometimes uses data other than those in the HPSA database to prioritize HPSAs for placement of NHSC providers.
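The estimate of about 380 erroneous designations cited above is consistent with a simple proportional extrapolation of these sample results. This is a rough check only, using the counts given in the text; the report's estimate itself would rest on formal statistical sampling:

```python
# Rough reproduction of the sample-based error estimate (assumed method:
# simple proportional extrapolation from the random sample to the universe
# of geographic HPSAs).

sample_size = 46       # geographic HPSA applications reviewed
questioned = 17        # designations with one or more unsupported factors
later_supported = 8    # resolved by DSD's additional information
unsupported = questioned - later_supported  # 9 remain in error/undocumented

total_geographic_hpsas = 1952
estimate = unsupported / sample_size * total_geographic_hpsas
print(round(estimate))  # close to the "about 380" cited in the text
```

The point estimate lands near 380, matching the figure reported for designations made in error or without adequate supporting documentation.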
HPSAs are not being reviewed on a timely basis to determine whether they still qualify for federal assistance or should instead be dropped from designation. Federal law requires HPSA designations to be reviewed annually, a task that DSD has delegated to the states. However, DSD annually obtains data from the Bureau of the Census and the National Center for Health Statistics for many of the HPSA fields. DSD uses these current data instead of the older system data to identify which HPSAs have the greatest need for NHSC providers, but it does not use the current data to update its database. DSD officials said they did not use the data to update the database because doing so might cause some HPSAs to become dedesignated. DSD considers it inappropriate to dedesignate an HPSA until it can conduct a complete review of all data submitted by each HPSA during the formal update cycle. DSD’s policy calls for each HPSA to submit an update application to DSD every 3 years for review and verification that the HPSA designation is still valid. However, DSD is conducting these reviews, at best, every 5 years. Currently, about one-third of the HPSAs have not been updated in more than 3 years and should be updated or deleted, according to DSD policy (see fig. 2.1). DSD officials said they have extended the update period from 3 to 5 years because they have not been able to keep up with the backlog of HPSA applications. However, even the 205 HPSAs that did not reapply for HPSA designation within the past 5 years have not been dropped from the system. Delaying the update process means that HPSA designation is continued for communities no longer requesting it. Communities generally do not request dedesignation when federal assistance is no longer necessary; instead, they simply do not reapply for designation during the update cycle.
However, when the designation for these outdated HPSAs is still on the books, federal programs may continue to provide them with resources, perhaps to the detriment of those HPSAs with current designations. The following examples illustrate this problem: NHSC policy is to place providers in HPSAs updated in the last 5 years. However, in 1994, 9 percent of the NHSC providers were placed in HPSAs that had not been updated for 5 years or more. Twenty-three percent were placed in HPSAs that had not been updated in the past 3 years. Under the Medicare Incentive Payment program, Medicare pays bonuses to physicians in all designated HPSAs regardless of when the HPSA was last updated. Although we were not able to determine how much bonus money was paid in HPSAs that had not been updated in the past 3 years, more than $98 million was paid to physicians in HPSAs in 1994, and one-third of all HPSAs were more than 3 years old. There is no required schedule for periodically reviewing and updating MUA data and designations, and even less has been done to keep this system current than for the HPSA system. According to DSD officials, no systematic attempt to update the MUA designations has been made since 1981, when existing designations were reviewed against newer data. They told us that at that time, areas that no longer qualified as MUAs with the newer data were not always dedesignated, however, to avoid disrupting existing community health center services. Community health centers are required to serve MUAs or medically underserved populations to receive federal grant support. Essentially, once an area or population has been designated, it remains designated until the state’s governor requests dedesignation. Since 1990, this has happened only once, when three counties in North Dakota were proposed for dedesignation in 1994. To show what might happen if designations were updated, we compared an application of the MUA methodology to 1990 data for all U.S. 
counties with DSD’s 1995 list of MUA-designated counties. We found that about 740 counties would qualify as MUAs on the basis of 1990 data, compared to about 1,380 counties that DSD now has designated. Although according to an HHS official there may be other reasons—such as continued eligibility for community health center funding—not to delete some old designations, maintaining the system with such obviously outdated information provides further evidence of the system’s unreliability in identifying medically underserved areas. The HPSA system does not accurately measure the existing capacity of communities to provide primary care services to its populations or the additional number of providers needed for this purpose. Shortages in many communities are overstated because the HPSA criteria do not recognize differences in the types of health care providers used to obtain care, or consider the extent that federal resources are already provided. The system’s reliability is also questionable because HHS has difficulty verifying and updating the HPSA data in a timely manner. Continued reliance on the inaccurate and outdated MUA system likewise has resulted in designations that are not valid indicators of primary health service shortages, or where federal program funding is most needed. The next chapter discusses other aspects of the HPSA and MUA systems that hinder effective targeting of federal resources to the underserved. Even when the HPSA and MUA designations identify needy areas, they generally do not provide the type of information needed by federal programs to target assistance best suited to meet a location’s particular needs. Because most HPSAs are defined as general geographic areas, the designation does not identify the specific part of the population that has difficulty accessing a primary care provider or the underlying reason for this access problem. 
Similarly, although the MUA system can be used to designate specific underserved populations, most designations encompass everyone within a broad geographic area. As a result, federal programs relying on the designations to identify the type and scope of assistance needed may not provide assistance to those actually underserved in these areas. A case in point is the Medicare Incentive Payment program, which spent over $98 million in 1994 without any assurance that funds were used to improve access for Medicare beneficiaries in geographic HPSAs. Most HPSA designations do not provide information about the HPSA community beyond defining that a shortage of providers exists somewhere within the geographic area. Over three-fourths of all HPSAs in 1994 were geographically designated. Such a designation assumes that everyone within the general geographic area is underserved because the population-to-primary physician ratio exceeds a standard of 3,500 to 1. Only one-fourth of HPSAs were designated for specific types of underserved populations or the facilities that treat them. Unlike geographic designations, these designations provide some indication of the types of access problems that exist in the community. For example, there are seven categories of population-based HPSAs, primarily designated for specific poverty populations such as the homeless or Medicaid-eligible, but which also include designations related to cultural or language barriers experienced by migrant farmworkers or immigrants. Facility HPSAs are primarily used to designate shortages for prison populations but may also include public or nonprofit medical facilities. While designation as a geographic HPSA implies that federal assistance is needed to address access problems for all residents of the HPSA, HHS and state officials agree that specific subpopulations within the area may be those actually at risk. 
While HHS and state officials believe that underservice may affect entire populations living in areas with no physicians or in remote rural areas, only 12 percent of the underserved populations live in such areas. The remaining underserved populations live in urban areas or rural areas nearby. Access in these areas may more likely be a problem for specific subpopulations, such as the poor. The MUA system has similar problems. By combining the weights for four factors into a single MUA score, the system produces scores that are difficult to interpret and tend to obscure an area’s specific needs. While communities may request designations for specific populations with shortages of health care services, DSD officials told us this medically underserved population (MUP) designation was not used much until the 1980s. Following program amendments in 1986 that permitted state governors to request designations, about half of those new designations have been for underserved populations. While the HPSA system allows designation for various types of underserved populations, there are several disincentives to request them instead of the geographic designation. First, communities with geographic designations can participate in all federal programs, while the programs available to population HPSAs are more limited. For example, the 10-percent bonus on Medicare billings is available to all physicians in a geographic HPSA, but not to those providing care in HPSAs designated on the basis of a poverty population. Second, the application process for population designations takes longer and is more difficult. Population designations require the applicant to conduct a physician survey to determine the proportion of services available to the underserved population and to explain why access to care is a problem. These requirements do not exist for geographic designations, which must only provide a population-to-physician ratio. 
Finally, individual program requirements for geographic HPSAs are more flexible. For example, in HPSAs designated for poverty populations, 80 percent of the patients treated by an NHSC provider must live below the poverty level, but in a geographic HPSA, NHSC providers can treat anyone living within the defined geographic area. To ensure access to the broadest range of federal assistance, HHS officials encourage communities to use the geographic designations if possible, even when a specific underserved population can be identified. As a result, population HPSAs appear to be designated only as a last resort for communities not meeting the criteria for geographic designation. Our review of HPSA withdrawals and designations made in 1993 also showed that population designations are often used to maintain HPSA designation for areas no longer qualifying on the basis of geography. For example, of the 66 HPSAs that lost their geographic designation in 1993, about a third were redesignated on the same day as population HPSAs. The general nature of most designations does not reflect the need of many federal programs to target assistance to specific populations or circumstances. Over the years, a variety of federal assistance programs have been created to address underservice identified by the HPSA and MUA systems. Initially, these programs served a broad purpose, requiring only that the HPSA and MUA systems designate the geographic areas that required additional providers or services. The NHSC program, for example, placed providers in all types of urban and rural shortage areas, regardless of who was underserved or whether underservice was caused by an undesirable geographic location, an inability to support a physician practice because of sparse or poor populations, or cultural or language differences of migrant farmworkers or immigrants. 
As new programs were added, they became more specific about the types of populations they served and the scope of assistance they provided. An example is the Medicare Incentive Payment program, which was expected to assist Medicare patients having difficulty obtaining access to a physician because of the low reimbursement rates for primary care services. Another example is the Rural Health Clinic program. Recognizing that many isolated rural communities are unable to support a physician practice, this program provides cost-based Medicare and Medicaid reimbursement to nonphysician providers such as nurse practitioners and physician assistants providing care in these areas without direct physician supervision. However, the HPSA designation system has not been changed to serve the narrowed scope of these programs. This has raised concerns that programs using the geographic designations to determine the type and scope of assistance needed in communities, instead of identifying the specific needs of underserved population within them, may result in misdirecting hundreds of millions of dollars in program resources. This change over time is of less concern with regard to MUA designations, because fewer new programs use them. Moreover, the Community Health Center program, which is the chief user of the MUA system, relies on MUA designations only as a screen for eligibility to apply for funding, according to the program’s Director. Grant funds to support existing and new community health centers are allocated on the basis of reported performance and detailed need criteria within the community. Outdated MUA designations still may be used, however, to certify rural health clinics in areas that no longer have serious health service shortages. 
When the access problems of specific underserved populations are not identified, it is difficult to determine what kind of federal intervention would be effective—and conversely, to avoid funding “solutions” that do not address the real need. In regard to the MUA system, for example, a study published shortly after its implementation expressed concerns that the methodology did not adequately capture variations in ability to obtain physician services between rural and urban areas, and among populations of different racial and cultural compositions. Consequently, the study concluded that programs using the MUA system could misallocate resources away from those most in need of federal assistance. According to HHS officials operating the HPSA system, they are responsible only for determining whether primary care physician shortages exist. The specific programs using the HPSA system should determine who is underserved in geographic HPSAs and whether their programs are appropriate to address the access problems that exist there. However, to date the programs have relied on the HPSA designations for this purpose and have not developed mechanisms to determine whether their strategies are appropriate for the underserved population in each HPSA. They have not targeted or tailored their programs for individual HPSA needs. Some examples follow. The NHSC program requires that providers placed in HPSAs serve 80 percent of the HPSA population. While designations for the Medicaid or migrant populations require that these specific populations be treated, there is no mechanism to ensure that these same populations would be identified and treated by an NHSC provider in a geographic HPSA. The Medicare Incentive Payment program pays all physicians in geographic HPSAs a 10-percent bonus on Medicare billings even if Medicare patients are not those actually underserved in the HPSA, and even if low Medicare reimbursement rates are not the cause of underservice. 
The Rural Health Clinic program provides cost-based reimbursement for Medicare and Medicaid services provided in any rural HPSA or MUA, even if the rural health clinic will not accept the entire HPSA population as patients. According to program managers at the Health Care Financing Administration, there is no requirement to distribute rural health clinic services throughout the underserved area or for rural health clinics to accept patients regardless of their ability to pay for services. We did not directly audit these programs to determine the extent to which program controls were adequate to prevent misdirection of resources for underserved populations in HPSAs and MUAs. However, we did find evidence that such problems exist, especially in the case of the Medicare Incentive Payment program. At present, there is no evidence that the Medicare Incentive Payment program is targeted to improve access to care for Medicare beneficiaries, even though over $98 million was paid to physicians in 1994 for this purpose. Neither the HPSA system nor the program identifies the extent to which Medicare beneficiaries are underserved in geographic HPSAs or the extent to which low reimbursement rates cause access problems for them. The Medicare Incentive Payment program was established in 1987 following concerns expressed by the Physician Payment Review Commission that low Medicare reimbursement rates for primary care services might cause access problems for Medicare beneficiaries in rural HPSAs. Under the program, all physicians providing services to Medicare beneficiaries in a rural or urban geographic HPSA are eligible for a 10-percent bonus on Medicare billings. The premise on which this program was created may no longer be valid because the basis for Medicare reimbursement has changed since 1987.
In its 1995 report to the Congress, the Physician Payment Review Commission found no evidence that provider shortages or low Medicare reimbursement rates cause health care access problems for beneficiaries in rural areas. Close to half of the $98 million spent under the program in 1994 was paid to about 82,000 rural physicians. While the Commission found some evidence of a link between living in urban HPSAs and access-to-care problems, beneficiaries cited the cost of services not covered by Medicare and a lack of transportation as the primary causes of access difficulties. These problems are unlikely to be solved by providing a bonus on Medicare billings. The remaining half of the $98 million spent under the program in 1994 was provided to about 96,000 physicians in urban areas. Further, the HHS Inspector General has questioned the appropriateness of applying the program in HPSAs because it provides bonuses to specialist physicians as well as primary care physicians, while the HPSA system only identifies areas with primary care physician shortages. The Inspector General reported that 45 percent ($31 million) of the Medicare incentive payments made in fiscal year 1992 went to specialist physicians who provided little or no primary care. Among primary care physicians, the Inspector General concluded that Medicare incentive payments rarely have a significant effect on their decisions to practice in underserved areas. Bureau of Primary Health Care officials agreed that the HPSA system is not structured to effectively identify areas where the Medicare Incentive Payment program should be implemented. However, they do not believe they should modify the HPSA system for this purpose. Rather than add a designation for underserved Medicare populations, they suggested that the Health Care Financing Administration devise another system. 
While recognizing that the HPSA system is inappropriate, officials at the Health Care Financing Administration said that use of the HPSA system is mandated by law and that they do not have an alternative system that would effectively allocate funding under this program. While designating HPSAs on a strictly geographic basis may be appropriate for areas with no providers or rural areas remote from other sources of care, such a designation provides limited benefit in targeting assistance in areas where specific subpopulations are at risk. In addition, although the HPSA and MUA systems have criteria that allow communities to specifically designate the types of populations that are underserved in the area, these criteria do not identify the types of populations or access problems that some federal programs are trying to address. HHS encourages communities to maintain broad geographic designations because the designation process is easier and federal programs will provide more benefits to them. However, these broad designations may result in programs misdirecting federal assistance away from those most likely to benefit from it. A prime example is the Medicare Incentive Payment program, which currently has no method to identify situations in which this federal intervention is likely to improve access to care for underserved Medicare beneficiaries. These problems, in conjunction with the methodological and administrative problems discussed in chapter 2, raise questions about the benefits of using the HPSA and MUA systems to identify areas where federal program intervention is needed. Accordingly, the next chapter discusses an alternative to replace the systems with individual program requirements structured to match each program’s strategy to the various needs of underserved communities. 
While our primary focus in this work was reviewing the HPSA and MUA systems rather than the programs that use them, we believe our findings call for a reexamination of the utility of the Medicare Incentive Payment program. To prevent misdirection of federal program funds, we recommend that the Congress direct the Secretary of HHS to suspend funding for the Medicare Incentive Payment program until HHS can ensure that funding is specifically targeted to Medicare beneficiaries having difficulty accessing a physician because of low Medicare reimbursement rates for primary care services. As currently implemented, the HPSA and MUA systems provide limited benefit to federal programs in identifying those underserved populations that require federal assistance to improve access to primary care. HHS acknowledges that the HPSA and MUA systems have problems and is proposing changes to address some of them. However, the most significant problems will remain. Fixing the systems is not the only option—and perhaps not the best one. The needed improvements may be difficult and costly, and all but one federal program already has its own screening process in place that may be more easily modified to better match federal resources with the needs of underserved communities. As chapters 2 and 3 have described, the major problems leading to the deficiencies in the HPSA and MUA systems are twofold. First, the systems contain outdated, inaccurate, and incomplete information. Second, they are based on flawed methodologies that have not been effective at specifically identifying which parts of the population are underserved and why. The Bureau of Primary Health Care is proposing changes to consolidate and streamline the administrative processes of the two systems. But these proposals do little to improve the existing methodologies’ ability to accurately identify areas that need additional health care providers or services. 
Under the proposed changes, communities would fill out one application form instead of two for both HPSA and MUA designation, and states would take on an increasing role in the designation and update processes. HHS is also considering modifying some of the criteria, which is expected to increase the overall number of designations. For example, HHS may expand the definition of a poverty population from people whose incomes are below 100 percent of the federal poverty level to those whose incomes are below 200 percent, and may add race and ethnicity factors to obtain more designations for disadvantaged populations. These changes to the existing criteria will not significantly affect the underlying methodologies’ tendency to overstate primary care provider shortages or mask underserved populations living within broad geographic designations. For example, the following three aspects of the systems’ operations will not change: (1) HHS plans to continue measuring available primary care capacity with a population-to-primary care physician ratio, without expanding the definition of providers counted or considering differences among communities in utilizing primary care resources; (2) HHS is maintaining broad geographic designations that do not indicate who in the area is underserved or why designation was requested; and (3) the system will continue to overstate primary care physician shortages in areas where federal, state, or regional organizations have been successful in promoting sustainable alternative delivery methods, such as rural health clinics staffed by nurse practitioners and physician assistants. DSD officials acknowledge that their proposed changes will not address many of the problems we identified. They said that many of the improvements in data and methodology would require more time and resources than are currently available at the federal, state, and local levels. We agree that cost is an important consideration—and probably a limitation—in making improvements. 
Solutions to many of the problems we identified may be time consuming or difficult. For example, the geographic area defined in many applications may be different from the geographic area for which data are available at the national level. Census data, health statistics, and provider information may be readily available for a county, but not for specific areas or populations within the county. In such cases, assessment of primary care needs may require surveys of health providers or populations living in the area. These surveys could prove expensive for the communities to perform and the results difficult for HHS to verify. While some way of screening applicants for federal assistance is necessary, most federal programs already have their own screening processes in place. All but one of the federal programs discussed here have their own criteria and conditions of participation that may be more easily modified to target resources to the underserved. Program officials continue to use the HPSA and MUA systems, in part because they are required by law to do so, but in practice they could rely on their own application processes to match community needs with program resources. The Community Health Center program, for example, requires applicants to demonstrate their target populations’ need for services by providing data on geographic, demographic, and economic factors; available health resources; and population health status. Incorporating similar types of controls in each program could preclude the need for a HPSA or MUA system, and result in better matching of program strategies to individual community needs than currently exists. Here are several examples: The NHSC program uses the HPSA system as an initial screen to identify which areas are eligible to apply for the program. 
However, facilities or practices within HPSA communities wishing to apply for NHSC providers must fill out additional applications to determine whether they meet the program’s criteria and conditions of participation. These applications do not currently require the applicants to show that they have unsuccessfully tried to recruit a provider to treat a specific underserved population; however, the program requirements could be modified to include this information. The Rural Health Clinic program also relies on the HPSA and MUA systems only as a screen for basic eligibility. The program has its own application process and conditions of participation that must be met after designation is obtained. Program applications currently do not require evidence showing that cost-based reimbursement from Medicare and Medicaid is needed to sustain a clinic, nor do they require the applicant to accept all underserved people as patients. However, these requirements could be modified to do so. The health professions education and training programs use the HPSA and MUA systems as only two of several criteria in assessing whether some applicants should receive preference or priority for federal grants or scholarships over others. These program applications could be modified, if necessary, to include evidence on the extent to which the applicants have been successful in addressing underservice. The Medicare Incentive Payment program is the only program that uses the HPSA designation as the sole criterion for obtaining federal benefits. However, as discussed in chapter 3, we question whether using the HPSA system is appropriate for this program. The HPSA system does not have a designation category for underserved Medicare beneficiaries, nor does it identify them within broad geographic designations. DSD and program managers acknowledge that the systems provide only limited benefit to federal programs and are not really needed for them. 
However, they believe that maintaining a national system is needed for developing planning documents and monitoring primary care access. While we agree that such monitoring is valuable, HHS already has another effort under way that may serve this purpose. Using statewide primary care cooperative agreements, HHS provides funding to all 50 states, the District of Columbia, and Puerto Rico for the development and coordination of comprehensive primary health care services in areas lacking adequate numbers of health care professionals or services. Under these agreements, state representatives are responsible for (1) participating in the development of statewide efforts to coordinate and implement primary care delivery systems, (2) identifying special underserved populations and the types of programs appropriate to incorporate these populations into the primary health care system, and (3) integrating federal assistance and health care delivery programs with existing local and state resources. Primary care access plans developed by each state have the potential to provide a more comprehensive and less duplicative way of gathering needed information about the type and scope of programs needed in each community. Aggregating this information at the national level may help in allocating increasingly scarce resources to those programs that are most needed in underserved communities. HHS is proposing changes in the HPSA and MUA systems, but these changes will not improve the methodologies’ ability to reliably identify shortages of primary care providers and services. In our view, the time and cost required to make the needed improvements to the designation systems are not justified by the benefits. For program purposes, it would be easier to incorporate the appropriate screening requirements into the existing conditions of participation for each program. 
For other purposes, HHS could explore using information collected under the primary care cooperative agreements to prevent duplication of effort in establishing a national primary care monitoring system. To assist underserved populations in accessing the federal program resources most appropriate for their needs, and to enable HHS to target its resources more specifically to them, we recommend that the Congress remove legislative requirements for HPSA or MUA designation as a condition of participation in federal programs. Instead, the Congress should direct the Secretary of HHS to incorporate the necessary screening requirements into the conditions of participation of each program that will best match the type of program strategy with the type of access barrier existing for specific underserved populations.
GAO reviewed the Department of Health and Human Services' (HHS) systems for identifying geographical areas where access to medical care is limited, focusing on: (1) how well the systems identify areas with primary care shortages; (2) how well the systems target federal funding to the underserved; and (3) whether the HHS proposal to combine the systems would lead to improvements. GAO found that: (1) the two HHS systems do not reliably identify areas with primary care shortages or help target federal resources to the underserved; (2) the systems have widespread data and methodology problems which severely limit their ability to pinpoint needy areas; (3) both systems tend to overstate the need for additional primary care providers because they do not consider all of the categories of providers already in place; (4) the Health Professional Shortage Area System (HPSA) does not consider the extent to which available resources are being used; (5) the Medically Underserved Area System (MUA) is limited in its ability to identify underserved areas and populations; (6) neither system identifies the specific subpopulations that have difficulty obtaining medical care; (7) while the systems can sometimes accurately identify needy areas, they do not provide the necessary data to determine which programs are best suited to those areas; (8) the proposed consolidation and streamlining of the systems is not likely to solve system problems, since the underlying causes of the problems have not been addressed; (9) it may be more cost-effective to modify individual programs and application processes to identify where needs exist and the appropriate program to meet those needs and to target resources better; and (10) HHS officials believe that they need to maintain a national shortage designation system to monitor primary care access, but HHS has another initiative under way that could serve those purposes.
Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector when the agencies determined that such action was cost-effective. The Office of Management and Budget formalized the policy in Circular A-76, issued in 1966. Later, it issued a supplemental handbook that provided the procedures for competitively determining whether commercial activities should be performed in house, by another federal agency through interservice support agreements, or by the private sector. In general, the competition process involves the government describing work to be performed, such as aircraft maintenance or base operating support, in a performance work statement and soliciting private sector offers. The government also prepares an in-house cost estimate to perform the same work based on its most efficient organization. The government estimate is then compared to the selected offer from the private sector to determine who will perform the function. We have previously reported that A-76 studies can produce cost reductions whether the competitions are won by the public or the private sector. Cost reductions result from efforts to achieve more efficient organizations. At the same time, we have also noted some limitations in (1) the preciseness of the Department of Defense’s savings estimates from A-76 studies due to such factors as the need to offset up-front investment costs associated with conducting the studies and implementing the results and (2) the baseline operating costs against which savings are calculated. When contractor performance is chosen, wages and benefits are often governed by the Service Contract Act for services or the Davis-Bacon Act for construction, both of which prescribe minimum pay and benefits for contractor employees under government contracts. 
The Service Contract Act of 1965 provides that pay and benefit levels be established by the Department of Labor for certain contractor employees, including, for example, office clerks and aircraft maintenance workers. The Davis-Bacon Act of 1931 provides for the establishment of minimum pay and benefit levels, again to be set by the Department of Labor, for most construction skill classifications. Some employees, however, may be covered by existing collective bargaining agreements. The Department of Labor is responsible for administering the provisions of these acts. It establishes pay and benefit levels based on a survey of the wages offered by the private sector in local geographic areas by skill classification, setting flat hourly rates for skill classifications in different geographic areas based on the median level of pay for those job classifications in those areas. It periodically reassesses the rates and publishes updates when the median levels change. In terms of other benefits, the Service Contract Act and the Davis-Bacon Act require contractors to provide a minimum level of benefits. Contractors, however, have flexibility in the types of benefits they provide. For example, they could provide a combination of benefits, such as retirement and health and life insurance, as long as the benefits meet or exceed the minimum level. They also could allow employees to place some or all of the benefit amount in a 401(k) plan or receive the benefit in cash. Our analysis and discussions with government and private sector officials who are involved in A-76 competitions continue to affirm, as we previously reported, that most estimated cost reductions from competitions are related to reduced personnel cost estimates—mostly a reduction in personnel requirements. Government and contractor officials told us they use a variety of techniques to minimize the number of personnel needed to perform a required function. 
These techniques include limiting proposed activities to the streamlined requirements detailed in the performance work statement, substituting civilian for military workers, designing a new work process, multiskilling (employees performing more than one skill), and proposing modern methods and equipment to complete the tasks. Contractor and defense officials stated that personnel reductions are key to achieving reduced costs from A-76 competitions. For example, we previously reported on an A-76 competition involving aircraft maintenance at Altus Air Force Base, Oklahoma, where the Air Force estimated cost reductions of $20 million annually. In initiating the study, the Air Force planned to convert its largely military workforce to civilian personnel, either government employees or employees of a contractor, depending on the results of the A-76 study. Either way, civilian workers were expected to be less costly. The organization in place before the study had 1,444 authorized positions, 1,401 of them military. After the study, the selected government organization had 735 positions—all civilians, almost a 50-percent reduction. A performance work statement serves as the basis for determining personnel requirements for both government estimates and private sector offers. Because labor represents the predominant cost in an A-76 study, both the in-house government organization and the contractors develop work strategies that enable them to perform the work requirements with the minimum number of people. They also develop their personnel requirements by determining the skill classifications needed and the number of staff hours per classification to complete tasks over a time period. For example, if they had an average need for 240 hours of plumbing tasks each 40-hour week, then they would need six people with plumbing skills. Contractors told us they use the least costly skill classification and multiskilled, multirole employees to complete the required tasks. 
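The staffing and position-reduction arithmetic described above can be sketched in a few lines (a minimal illustration; the numbers are taken from the text, and the function name is ours):

```python
import math

def staff_needed(task_hours_per_week, hours_per_week=40):
    """Employees of a given skill classification needed to cover the
    average weekly task hours, rounded up to whole people."""
    return math.ceil(task_hours_per_week / hours_per_week)

# The plumbing example from the text: 240 task hours per 40-hour week
print(staff_needed(240))  # 6 people with plumbing skills

# The Altus study cited above: 1,444 authorized positions before, 735 after
before, after = 1444, 735
print(round((before - after) / before * 100))  # 49, "almost a 50-percent reduction"
```

Rounding up matters in the headcount calculation: 241 task hours per week would require a seventh employee, since six cannot cover the overflow hour within a 40-hour schedule.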
For example, for simple plumbing, electrical, and carpentry tasks, a less costly maintenance worker classification could be used rather than a fully qualified electrician. Contractors also have the flexibility to use temporary or seasonal workers to meet periodic workload needs, pay overtime, or pay higher wages to workers in a lower pay classification to temporarily perform tasks performed by employees in a higher classification. The government can use many of these techniques, but it may need to obtain the cooperation of employee groups and obtain waivers to personnel procedures. The impact on employment, pay, and benefits of individual employees affected by A-76 studies varies depending on factors such as the results of the competitions, the availability of other government jobs, and other more individual factors such as retirement eligibility. Pay may also be affected by the location and technical nature of the work. These factors make it difficult to draw universal conclusions about the effects of A-76 decisions on affected federal employees’ employment options, pay, and benefits. Our analysis of the results of three A-76 case studies, one that remained a government activity and two won by the private sector, illustrates how federal employees may be affected. The three studies show that about half of the civilian government employees remained in federal service, either in the new or another government organization, with similar pay and benefits, and most of the remaining employees received a cash incentive of up to $25,000 to retire or separate. A small number of employees were involuntarily separated. Further, we found that all employees who applied for positions with winning contractors were hired. Types of benefits provided by the contractors, such as health insurance, vacation time, and savings plans, appeared to be similar to those offered by the government. Results of these three cases are highlighted below and further summarized in appendix I. 
The results are not projectable to the universe of employee actions resulting from A-76 studies, but they do illustrate estimates of a range of effects that may occur. Federal employees’ employment, pay, and benefits may be adversely affected even when the in-house organization wins an A-76 competition because the new in-house organization typically restructures the work and reduces the number of employees required to perform it. Employees may be faced with positions being downgraded or even eliminated. However, the ultimate impact on pay and benefits of affected employees varies, depending on factors such as availability of other federal positions, retirement eligibility, or use of “save pay” provisions associated with exercising employment rights under federal personnel reduction-in-force rules. In establishing the new in-house organization, a reduction-in-force usually occurs. To minimize disruptions that can occur as the result of a reduction-in-force, the Department of Defense offers eligible employees a cash incentive, up to $25,000, to retire or voluntarily separate. According to reduction-in-force procedures, a government employee who accepts a lower-graded position is eligible to retain his/her former grade and pay for 2 years. At the end of the 2-year period, if the employee remains in the same position, his/her grade may be lowered, but his/her current pay is not lowered, although future pay raises may be limited. Employees who do not obtain positions in the new organization have priority for placement in other jobs within the Department of Defense for which they are qualified. One case study at Wright Patterson Air Force Base, Ohio, involved studying 623 positions—428 civilian and 195 military. The in-house organization won the competition; the number of positions was reduced to 345 civilian positions, and the military personnel were reassigned to other duties. In this case, 83 full-time civilian positions were eliminated. 
Of the employees in the positions eliminated, 28 obtained other government positions, 53 chose voluntary retirement, and 2 were involuntarily separated. Of the 345 employees authorized for the new organization, 310 came from the previous organization. Available information indicated that among those employees, 52 percent experienced a reduction in grade, 31 percent remained at the same grade level, 1 percent obtained a higher grade level, and 15 percent changed wage systems, making it difficult for us to determine the impact on their grade level. As mentioned previously, those employees who had reductions in grade may not experience a decrease in actual pay or benefits. When a contractor wins a competition, generally the positions associated with the in-house organization are eliminated through a reduction-in-force and all government civilian workers in the former activity must evaluate their options. In general, this means they must obtain other government employment, retire, or separate, as discussed previously. Employees who retire or separate may also have the option of working for the contractor. Separated employees have the right of first refusal for employment with winning contractors for positions for which they are qualified. In the two studies we examined where contractors won, approximately two-thirds of the affected civilian employees accepted a cash incentive to voluntarily retire or separate. About 25 percent obtained other government jobs and generally retained their same pay and benefit levels. The remaining employees, less than 10 percent, were involuntarily separated. Some of the retired and separated workers applied for a job with the contractors and, according to the contractors’ officials, all were hired. Contractors we spoke with indicated that they actively recruit displaced and retired workers because they do not usually have a readily available workforce in place to staff the new organization. 
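As a rough cross-check, the Wright Patterson placement and grade figures reported above are internally consistent. The sketch below uses only numbers from the text; rounding follows the report's usage:

```python
# Wright Patterson civil engineering study: positions before and after
before, after = 623, 345
print(round((before - after) / before * 100))  # 45 percent fewer positions

# 310 of the 345 employees came from the previous organization
carried_over = 310
print(round(carried_over / after * 100))       # about 90 percent of the new organization

# Reported grade-change shares among those 310 employees, in percent
shares = [52, 31, 1, 15]  # lower grade, same grade, higher grade, changed wage system
print(sum(shares))        # 99; close to 100 because each share is rounded
```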
Further, they stated that they want to hire as many former government employees as possible because it gives them an experienced workforce and also lowers their recruiting, hiring, and training costs compared to hiring an external workforce. We were told that all former separated or retired employees who applied with these two contractors were hired. However, not all separated or retired government employees sought employment with the contractors. One contractor told us approximately 60 percent of the staff it hired were former civilian or military employees. Another contractor reported hiring about 20 percent. Employees who go to work for a contractor may receive a salary that is higher or lower than their government salary. Salaries and benefits for most employees who provide services on government contracts are based on the pay and benefit wage scales established pursuant to the Service Contract Act. Contractors we spoke with said they must minimize labor costs to win the competition—first against other competing contractors and then against the in-house government estimate. Therefore, they do not typically submit offers with higher pay and benefit levels than the minimum established by the Department of Labor under the Service Contract Act. Thus, while contract labor rates can differ either positively or negatively for a former government employee, for covered positions the contractor must pay wages that prevail in the given geographic area. In the two contractor case studies, information was not readily available to identify precise changes in pay for the former government employees who had accepted employment with the winning contractors. In general, for the one contractor where we analyzed pay changes, we identified instances where pay rates were lower than before and others where they were higher. However, the precise difference was not always clear because of limited information on employees’ previous salaries. 
In many instances, these former government employees received a cash incentive to leave government service and were also receiving federal retirement benefits. In terms of benefits, table 1 shows that the government and contractor employees in our case studies were provided many of the same types of benefits. However, comparing actual benefits was not possible because data were not readily available on benefit amounts for individual employees. In addition, some government employee benefits were calculated based on length of service (such as vacation time) and pay (such as contributions to the federal retirement savings plan), and participation in many of the benefits is voluntary (such as health insurance, life insurance, and the Thrift Savings Plan). Some of the contractors we spoke with also offered sick or personal leave, and some granted additional paid vacation time for length of service. In oral comments on a draft of this report, on February 23, 2001, the Deputy Under Secretary of Defense for Installations concurred with the report’s findings. Technical comments were also provided and were incorporated as appropriate. To determine how A-76 competitions have reduced estimated costs, we relied on our review of A-76 savings and a separate review of the status of the Department of Defense’s competitive sourcing program. We also discussed this issue with contractor representatives who had participated in A-76 competitions. To determine the impact of A-76 competitions on employment and on estimated pay and benefits, we judgmentally selected two studies at Wright Patterson Air Force Base, Ohio (one where contractors were selected and one where the in-house government organization was selected), and one study that contractors won at Tyndall Air Force Base, Florida. 
These studies were selected because they involved a large number of positions, resulted in both in-house and contractor decisions, and were completed within the past few years to better ensure the availability of data. We sought to determine the employment options for employees involved in these three studies, as well as the impact on their estimated pay and benefits, through interviews and analyses of documentation from defense contracting and personnel officials and contractor representatives. We also interviewed 79 of the 82 former civilian or military government employees who went to work for the contractor that won the Tyndall civil engineering segment of the competition. However, our analyses were constrained due to limited available data concerning individual former employees. For example, base personnel offices did not track what happened to personnel affected by a specific A-76 study and could not provide us with exact government salary information for employees who went to work for contractors. We did not independently verify the pay and benefit estimates provided by the government or the contractors. Therefore, we could only estimate the impact on pay and benefits for these three cases. The results are not projectable to the universe of employee actions resulting from A-76 studies, but they do illustrate estimates of a range of effects that may occur. We conducted our review from August 2000 through January 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Mitchell E. Daniels, Director, Office of Management and Budget; and other interested congressional committees. We will make copies of this letter available to others upon request. 
If you or your staff have any questions regarding this letter, please contact me at (202) 512-5581. Key contributors to this assignment were Cheryl Andrew, Margaret Morgan, Thad Rytel, and Marilyn Wasleski. The following provides key information on the three A-76 case studies discussed in this letter. In general, the studies indicate that savings result from personnel reductions, whether the government or contractor organization is selected. Displaced workers must either obtain other government employment, retire, or separate. The studies showed that about half of the civilian government employees remained in federal service, either in the new or another government organization. Relatively few workers were involuntarily separated, and those employees who retired or separated and wanted to work for the contractor were hired. Our efforts to compare precise pay and benefits before and after the competitions were hampered by our inability to obtain actual pay and benefit data and by the differences in pay systems and job responsibilities. However, we were able to determine that government employees retained in government positions had virtually no change in pay and benefits because of pay protection provisions. Most workers employed by contractors had their pay and benefits determined under the Service Contract Act, which requires pay comparability to the local area and a minimum level of benefits as determined by the Department of Labor.

Overview: In August 1997, Wright Patterson Air Force Base announced its plans to conduct an A-76 study of 623 positions—428 civilian and 195 military—associated with civil engineering. Personnel involved in this activity were responsible for general building and grounds maintenance such as plumbing, painting, electrical, and carpentry work. The in-house most efficient organization won the competition, and as a result the Air Force expects to save an estimated $97 million over 6 years. 
Impact on employment: The new in-house organization, which was implemented in October 2000, consists of 345 positions, representing a reduction of about 45 percent in the number of positions previously associated with the activity. Figure 1 shows what happened to the civilian personnel involved with this study. Of the positions filled, almost 90 percent of the personnel in the new civil engineering organization (310 employees) were previously assigned to the former organization and almost all of the remaining employees came from other organizations at the base. A majority of the positions that were eliminated were military positions. The military personnel were reassigned to other activities. Eighty-three full-time civilian positions were eliminated. Of the employees in positions eliminated, 53 people retired and received a $25,000 separation incentive along with their pension; 28 people found other government positions. Only two permanent employees were involuntarily separated and received severance pay based on their years of service. Impact on pay and benefits: The study results were implemented primarily through reduction-in-force procedures. With the exception of a few employees that experienced a salary increase by obtaining a higher grade level, employees in the new government organization or those that found other government jobs kept the same salary and benefits they had when working in the previous activity. For the 310 employees in the new government organization that were previously assigned to the organization: 52 percent experienced a reduction in grade, 31 percent remained at the same grade level, 1 percent obtained a higher grade level, and 15 percent changed wage systems, making it difficult for us to determine the impact on their grade level. However, according to reduction-in-force procedures, employees that experience a reduction in grade are eligible to retain their former grade and pay for 2 years. 
At the end of the 2-year period, the employee's grade is lowered, but current pay is not; however, future pay raises could be limited. Overview: In May 1996, Wright Patterson Air Force Base announced its intent to conduct an A-76 study on 499 positions—411 civilian and 88 military—associated with base operating support activities. These activities included base supply, transportation, maintenance, a laboratory, and a laboratory supply function. In 1998, the base awarded firm fixed-price plus award fee contracts to two separate contractors. The Air Force estimates the study will result in $14 million in annual savings, for a total of almost $58 million over a 49-month contract period. Impact on employment: As a result of the contractor win, 411 civilian positions and all 88 military positions were eliminated through a reduction-in-force, with the military personnel being reassigned to other activities. Figure 2 shows what happened to civilian employees as a result of this competition. Most of the civilian personnel whose positions were eliminated (about 75 percent) retired or voluntarily separated. These employees received a separation incentive of up to $25,000 and in the case of those that retired, a retirement pension. Fifty-eight other civilians (14 percent) found other government jobs with the Air Force and generally retained their same salary and benefit level. Another 47 people were involuntarily separated and received separation pay based on their years of service. According to the contracting official in charge of overseeing the contract, 46 of the affected government employees came to work for the contractor. Almost 60 percent of these employees retired from government service. Impact on pay and benefits: The impact on pay and benefits of each employee varied depending on whether the displaced employee obtained another government position, retired, separated, or went to work for the contractor. 
Based upon discussions with Wright Patterson officials and our review of federal reduction-in-force procedures, we determined that those employees who found other government jobs generally maintained the same pay and benefit levels they had prior to losing their base operating support activity position. We could not, however, precisely determine the change in salary for each individual employee that went to work for the one contractor that hired former government employees because Wright Patterson officials could not provide us with their actual salary information. Instead, they gave us the grade level each employee had achieved before leaving government service, but not the step within that grade level. To the extent the employees may have been at the bottom steps of their pay grades, our analysis indicated that about 60 percent of these employees would likely have received less pay from the contractor than the government. The higher the employees' steps within their grades, the more of them would have received less pay from the contractor. However, at the same time, a majority of these employees accepted employment with the contractor after retiring from federal service and augmented their contractor pay with their retirement annuities. Contractor employees receive a minimum of $1.92 per hour, or about $4,000 per year in benefits. Employees can use this amount to pay for health benefits, invest the money in a company 401(k) plan, or take an additional cash payment in their check. Employees are offered a variety of benefits, including medical, dental, vision, short- and long-term disability, and life insurance. They also receive paid vacations, holidays, and sick time. Overview: In December 1994, Tyndall Air Force Base announced its intent to conduct an A-76 study on 1,068 positions—272 civilian and 796 military—associated with aircraft maintenance and base operating support. 
Three contractors won the competitions and in October 1997, the base awarded fixed-price incentive fee contracts for its base operating support activity and aircraft maintenance support. Air Force officials estimated at the time of award that this A-76 study would save about $19 million over a 5-year period compared to the previous cost of the activity. Impact on employment: As a result of the contractors’ win, 262 civilian positions and 796 military positions were eliminated, with the military personnel being reassigned to other duties. To implement the study results, a reduction-in-force occurred. Figure 3 shows what happened to civilian employees as a result of this competition. About half of the civilian personnel either retired (39 percent) or voluntarily separated (13 percent). These employees received a separation incentive of up to $25,000 and in the case of those that retired, a retirement pension. Forty-four percent of the civilian employees found another government job and retained their same salary and benefit levels. About 5 percent were involuntarily separated and received separation pay based on their years of service. Impact on pay and benefits: We could not determine the change in salary of each employee that went to work for the contractors because Tyndall personnel officials did not have actual salary information accessible. We focused our review on only the employees affected by the civil engineering contract and interviewed 79 of the 82 former civilian or military government employees who went to work for the contractor. Based upon our interviews, many had retired from the military or civil service. The consensus of the employees was that they were being paid less for similar duties than when they were working for the government. However, for the retired government and military workers, the combination of retiree pay and benefits together with the pay and benefits from the contractor was usually greater than what they had previously received. 
Further, they indicated that the contractor's wages were greater than what they could get working in similar duties in the local private sector. The contracting official in charge of overseeing the contract said positions were offered to all former government employees that applied and, as of January 2001, none had been let go. The contracting official further told us that these employees receive a minimum of $2.56 per hour, or about $5,300 a year, that they can apply to various health benefits and a 401(k) plan. Employees are offered a variety of benefits, including medical, dental, vision, 401(k), short- and long-term disability, and life insurance. They also receive paid vacations and holidays, but paid sick time is not provided.
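As a rough check of the benefit figures quoted for the two contracts, the hourly rates convert to the cited annual amounts under a standard 2,080-hour work year (40 hours × 52 weeks); this hours basis is an assumption, since the report does not state it explicitly:

```python
# Convert the hourly benefit rates cited in the report to annual amounts,
# assuming a standard 2,080-hour work year (an assumption not stated in the report).
HOURS_PER_YEAR = 40 * 52  # 2,080 hours

def annual_benefit(hourly_rate: float) -> float:
    """Annual benefit dollars implied by an hourly benefit rate."""
    return hourly_rate * HOURS_PER_YEAR

# Wright Patterson base operating support contract: $1.92 per hour.
print(round(annual_benefit(1.92)))  # 3994 -- "about $4,000 per year"

# Tyndall civil engineering contract: $2.56 per hour.
print(round(annual_benefit(2.56)))  # 5325 -- "about $5,300 a year"
```

Both results round to the approximate annual figures the contracting officials cited, which suggests the officials used a full-time work year as the basis.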
Office of Management and Budget Circular A-76 competitions have reduced the estimated costs of Defense Department activities primarily through reducing the number of positions needed to perform activities being studied. The impact on employment, pay, and benefits of individual employees affected by A-76 studies varies depending on factors such as the results of the competitions, the availability of other government jobs, and other individual factors such as retirement eligibility. Pay may also be affected by the location and technical nature of the work. These factors make it difficult to draw universal conclusions about the effects of A-76 decisions on affected federal employees' employment options, pay, and benefits. GAO's analysis of three completed A-76 studies showed that about half of the civilian government employees remained in federal service following the studies, either in the new or another government organization with similar pay and benefits. There were relatively few involuntary separations.
The Environmental Protection Agency (EPA) administers the federal program for ensuring the cleanup of abandoned hazardous waste sites that pose significant risks to public health and the environment. EPA may compel parties responsible for the contamination to conduct or pay for these cleanups. EPA manages cleanups for a portion of these hazardous sites through the Superfund program. Other federal agencies clean up sites on their lands and can also compel parties responsible for the contamination to conduct or pay for these cleanups. States generally manage cleanups at sites that are not addressed in the Superfund program. The nation's total investment in cleanups is estimated to reach hundreds of billions of dollars. This report assesses federal agencies' progress in solving several problems that hinder their ability to protect this investment. The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended, governs cleanups of both federal and nonfederal hazardous waste sites. The program was originally authorized for 5 years and has been reauthorized twice, in 1986 and 1990. EPA evaluates contaminated sites and places those that qualify for long-term cleanup on its National Priorities List. EPA may either order parties responsible for the contamination to perform cleanups or clean up sites itself and seek reimbursement from the responsible parties. EPA relies heavily on private contractors to perform or manage cleanup activities. CERCLA also established a trust fund (the Superfund trust fund) to pay for cleanups and related activities, financed primarily by taxes on crude oil and chemicals. The program's authorization, and the taxes financing the fund, expired in 1995. The Congress continues to fund the program through annual appropriations from the Superfund trust fund and general revenues. Beyond EPA's own program, the federal government faces an even greater potential cleanup investment. 
Federal agencies must report potential hazardous waste sites on lands that they administer to EPA. Agencies clean them up using funds from their own appropriations. The agencies potentially responsible for the most cleanups are the departments of Agriculture, Defense, Energy, and the Interior. As of September 1998, EPA had included 2,104 of these agencies' facilities on the federal facility docket—the list of federally owned facilities that EPA is to consider for placement on the National Priorities List—and had included a total of 173 of these facilities on the National Priorities List. (See table 1.1.) For a federal facility on the National Priorities List, EPA enters into an interagency agreement under which the responsible federal agency cleans up the facility. The agreement establishes penalties for failure to comply with the schedule or terms of the cleanup. In 1995, a group of representatives from federal agencies responsible for cleanups under the Superfund program, the Federal Facilities Policy Group, estimated that the total cost of cleaning up these federal facilities ranges from $234 billion to more than $300 billion over a 75-year period. For fiscal years 1991 through 1999, the Congress appropriated to Agriculture, Defense, Energy, and the Interior—the four agencies included in our review—a total of almost $33 billion for hazardous waste cleanups. Once a site has been identified, EPA includes it in the database it uses to track hazardous waste sites, known as the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS). The next step in EPA's cleanup process is to assess the site to determine whether the contamination poses a large enough health or environmental risk to qualify for a long-term cleanup under the Superfund program. (See fig. 1.1.) For nonfederal sites, EPA or the state in which the potentially contaminated site is located conducts the assessment. 
The responsible federal agencies assess their own sites that are on the federal facility docket. EPA then uses the data from these assessments to calculate a site’s potential risks by using its hazard ranking system. This system assesses potential risks to humans and sensitive environments, such as wetlands, from exposure to contamination at the site through four “pathways”— soil, groundwater, surface water, and air. Each site receives a score ranging from 0 to 100, and sites that score above 28.5 in this system are eligible to be considered for placement on EPA’s National Priorities List. Only sites on this list may receive long-term cleanups financed by the Superfund trust fund. EPA, typically with a state’s concurrence, proposes that an eligible site be placed on the agency’s National Priorities List. Once EPA places the site on the list, it generally receives a more extensive investigation of the risks it poses and an evaluation of alternative cleanup methods to address these risks. After one or more cleanup methods are selected, the cleanup is designed and implemented, either by EPA or by the responsible parties under EPA’s oversight. (See fig. 1.2.) Once the cleanup is completed and EPA considers that the site no longer poses a risk to human health or the environment, EPA may remove the site from the National Priorities List and delete it from CERCLIS. EPA’s Office of Solid Waste and Emergency Response (OSWER) administers the Superfund program, setting its policy and direction through the Office of Emergency and Remedial Response. However, EPA’s 10 regional offices award contracts for the cleanups in their jurisdiction that the agency has decided to fund, manage cleanup activities at these sites, monitor private parties’ and federal agencies’ cleanups, and determine when to propose new sites for the program or delete completed sites. 
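The screening step described above can be sketched in a few lines. The published hazard ranking system combines the four pathway scores with a root-mean-square formula; the pathway values below are hypothetical, chosen only to show that a single badly contaminated pathway can push a site over the 28.5 cutoff:

```python
import math

NPL_CUTOFF = 28.5  # sites scoring above this are eligible for the National Priorities List

def hrs_site_score(groundwater: float, surface_water: float, soil: float, air: float) -> float:
    """Combine four pathway scores (each 0-100) into a single 0-100 site score
    using the hazard ranking system's root-mean-square formula."""
    pathways = [groundwater, surface_water, soil, air]
    return math.sqrt(sum(s * s for s in pathways) / len(pathways))

def eligible_for_npl(score: float) -> bool:
    # Eligibility is a screening threshold, not an automatic listing.
    return score > NPL_CUTOFF

# Hypothetical site: severe groundwater contamination, other pathways clean.
score = hrs_site_score(groundwater=80, surface_water=0, soil=0, air=0)
print(round(score, 1), eligible_for_npl(score))  # 40.0 True
```

Dividing by the number of pathways keeps the combined score on the same 0-100 scale as the inputs, and squaring the pathway scores gives extra weight to any single high-risk pathway.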
In 1990, we identified a group of federal programs that could pose a significant risk of waste, fraud, abuse, and mismanagement, as well as a significant financial risk to the government. We included the Superfund program in this group because of the anticipated large federal investment and the extensive use of contractors to implement the program. In 1992, we reported on key problems with the Superfund program and actions EPA should take to decrease this risk. Specifically, we reported on the need (1) for EPA and other federal agencies to give greater consideration to the relative risks of sites when setting priorities for using their limited cleanup funds; (2) for EPA to improve its limited recovery of cleanup costs from responsible parties; and (3) for EPA to correct poor contract management practices and inadequate controls over contractors’ costs. Since we issued our initial report in 1992, we have reviewed the agencies’ progress in addressing these issues every 2 years. In 1997, we reported that (1) several agencies had begun to implement systems that consider the relative risks of sites when allocating cleanup funds, while other agencies had not; (2) EPA had not resolved the cost recovery problems we had identified; and (3) EPA still had to improve its use of independent estimates to set the best contract prices for cleanups, its ability to control contractors’ high program management costs, and its efforts to reduce a significant backlog of Superfund contract audits. Given EPA’s and other federal agencies’ uneven progress in responding to the concerns about the Superfund program’s management that we raised in our prior work, we initiated this review to determine whether the agencies had now more fully addressed these concerns and, therefore, reduced the government’s financial risks. 
Specifically, we wanted to assess (1) the efforts that EPA and other federal agencies with major cleanup responsibilities have made to set priorities for spending limited cleanup funds at the sites posing the highest risks; (2) EPA's actions to recover its expenditures for cleanups from the parties that caused the contamination; and (3) EPA's efforts to better control contractors' cleanup costs. To respond to the first objective, we conducted interviews with EPA site assessment managers in 4 of EPA's 10 regions with the largest number of sites that are awaiting consideration for the National Priorities List or have already been listed—regions I (Boston), II (New York), IV (Atlanta), and V (Chicago)—and the director, deputy director, and staff of the State, Tribal, and Site Identification Center within OSWER to understand the agency's approach to assessing and listing sites. We also interviewed the chair and 4 of 10 regional representatives on EPA's National Prioritization Panel, which assigns nationwide priorities for all sites that are on the National Priorities List and are ready to construct the cleanup. We obtained and reviewed documents that describe the criteria and weights the panel uses to score and rank sites. In addition, we examined the panel's funding decisions for fiscal year 1997, confirming that they were based on the panel's ranking. To understand EPA's responsibilities and overall approach to federal facility cleanups, we met with the associate director of OSWER's Federal Facilities Restoration and Reuse Office and interviewed remedial managers who oversee federal facility cleanups in regions IV (Atlanta), V (Chicago), VIII (Denver), IX (San Francisco), and X (Seattle). We also conducted interviews with environmental cleanup and budget officials at the headquarters of the departments of Agriculture, Defense, Energy, and the Interior. 
As necessary, we visited regional offices to test how field offices implemented these relative risk policies and used relative risk to make cleanup funding decisions. For the second objective—assessing EPA’s cost recovery program—we interviewed and obtained data from cost recovery program managers in EPA headquarters and two regional offices. In EPA headquarters, we met with the director of the Policy and Program Evaluation Division in the Office of Site Remediation and Enforcement, Office of Enforcement and Compliance Assurance, as well as the chief of the Program and Cost Accounting Branch in the Financial Management Division, Office of the Comptroller, Office of the Chief Financial Officer. We reviewed EPA’s proposed methodology on developing a new indirect cost rate to charge to responsible parties to identify changes from the previous method. In addition, we analyzed EPA’s 1999 annual plan for the Government Performance and Results Act to determine the status of EPA’s goals and performance measures. We also spoke with EPA enforcement, cost recovery, and legal staff in regions IV (Atlanta) and V (Chicago), which we selected because of their unique and large recovery efforts, respectively. For the last objective—assessing EPA’s management of Superfund contracts—we conducted work at EPA headquarters and three EPA regions. At headquarters, we met with Superfund program managers in OSWER, including the deputy director, Office of Emergency and Remedial Response and the director and senior managers, Office of Acquisition Management, to understand EPA’s contracting policies and procedures. We also met with Superfund program and contracting managers in regions III (Philadelphia) and VII (Kansas City), because their Superfund contracts had been in place for the longest time, and in Region V (Chicago), because we had selected this region in our last review. 
To test the quality and use of independent government cost estimates to set contract prices, we conducted a detailed analysis of a total of 35 Superfund contract work assignments initiated in the three EPA regions from January 1, 1997, through September 30, 1997. We used this time frame because it was similar to the time frame in our last review and would serve as a basis for comparison. We also visited the U.S. Army Corps of Engineers in Washington, D.C., to compare its cost-estimating practices with EPA's. In addition, we visited a private Superfund contractor in Region III to get a general understanding of how contractors estimate costs for Superfund cleanup activities. We also met with EPA's Office of the Inspector General in Washington, D.C., and officials from the Defense Contract Audit Agency (DCAA) in Fort Belvoir, Virginia. For a more detailed description of our audit's scope and methodology, see appendix I. We conducted our work from May 1998 through April 1999 in accordance with generally accepted government auditing standards. EPA has made progress over the years in responding to our concerns that it was not effectively using its limited cleanup dollars by setting funding priorities on the basis of sites' relative risks to human health and the environment. "Relative risk" refers to the risk a site poses to human health and the environment compared with the risks posed by other sites. This comparison may also consider other important factors, such as communities' concerns and legal requirements. EPA now manages sites on the National Priorities List according to a "worst sites first" policy. However, EPA may not know about all high-risk sites because states now increasingly decide which ones they will address under their own cleanup programs and which ones they want EPA to address through the Superfund program. Because states are managing these sites, EPA does not have information on the status of their cleanups. 
Without this information, EPA cannot assure local communities near high-risk sites that these sites are being addressed. Nor can EPA plan its own work in the event that the states require EPA’s assistance at these sites. Furthermore, because of the significant federal investment still needed to clean up hazardous waste sites on federal facilities and lands, it is important that other federal agencies likewise use their limited cleanup dollars efficiently by addressing the riskiest sites first. While the departments of Agriculture, Defense, and Energy have begun using risk to set priorities for cleanups to varying degrees, Interior, specifically the Bureau of Land Management (BLM), has not completed the first step—developing an inventory of its hazardous waste cleanup workload, estimated to cost billions of dollars. For sites that EPA has already placed on its National Priorities List and whose cleanup will be conducted or monitored by EPA, EPA provides funding according to their relative risk. However, EPA is not using relative risk as the primary basis for deciding what new sites to list. States are now assuming more responsibility for high-risk sites—those that are risky enough to be eligible for the National Priorities List. As a result, this evolving relationship with the states has created a need for closer coordination between EPA and the states with respect to sharing information on the status of cleanups, deciding who should address sites, and disseminating that information to the public. Currently, EPA cannot ensure that some of the worst sites are being addressed first, if at all, because some states may not be reporting all high-risk sites to EPA and, therefore, EPA may not know the full universe of such sites. 
Furthermore, states are not always recommending sites for EPA to address through the Superfund program because the sites present the highest risk to human health and the environment, but rather because they are too difficult and too expensive for the states to address. Once EPA places a site on its National Priorities List for cleanup, the agency uses relative risk to decide which ones to fund when priority setting is needed. Even though EPA’s policy has been to address the worst sites first since 1989, our prior work showed that the agency’s regions were setting priorities for early phases of cleanup on the basis of other factors, such as geographical considerations (e.g., funding equal numbers of sites in each state). In 1997, we reported that EPA had begun to give greater consideration to sites’ relative risks when setting priorities. Since then, EPA has continued to implement a nationwide process to set risk-based funding priorities for sites ready to begin construction of the cleanup method because it has had more sites to fund than dollars available. EPA does not go through a similar process for sites in earlier cleanup phases. Because it funds most of these sites so as not to delay them from moving through the cleanup process, EPA officials told us that they had a relatively small or no backlog of sites waiting to begin the earlier cleanup phases; therefore, they did not have to set funding priorities. In order to distribute its fiscal year 1996 funds to the backlog of sites awaiting construction, EPA created the National Risk-Based Prioritization Panel. This panel, which is composed of regional and headquarters cleanup managers, is to rank all of the sites ready to construct the cleanup method nationwide, primarily on the basis of the risks they pose. The panel uses five weighted criteria, four of which address health and environmental risks and one of which addresses considerations such as cost-effectiveness. 
The panel then ranks the sites and EPA, in turn, allocates funding for these sites according to this ranking. The sites that are not funded in one year can compete again for funding the following year. In our 1997 report, we determined that the panel used the ranking process to allocate fiscal year 1996 funds. However, because the panel process was new and the Congress did not pass EPA’s appropriations act until April of that year, we decided to continue monitoring the agency’s use of the panel process. We found that, in fiscal year 1998, EPA ranked 50 sites and funded 38 according to the panel’s ranking, at a value of more than $200 million. EPA does not know the universe of high-risk sites remaining to be addressed. EPA has never had its own site identification program and relies primarily on other entities, such as states and private citizens, to report sites for possible inclusion in the Superfund program. Early in the program, these entities referred tens of thousands of sites to EPA. Over time, however, according to site assessment managers in all four regions in our review, the states became reluctant to report sites because they wanted to avoid what they saw as the long and costly Superfund cleanup process. As a result, they currently do not necessarily report all high-risk sites to EPA. For those sites that are reported to EPA, the agency assesses the level of potential risk posed to human health and the environment by applying the hazard ranking system. Sites scoring at least 28.5 are considered eligible for the National Priorities List but are not automatically included. As state cleanup programs have matured, the states have assumed a greater role in determining which sites EPA will address under Superfund and which sites the states will address under their own cleanup programs. Most states have established enforcement programs similar to the Superfund program and, more recently, have used EPA grants to help establish voluntary cleanup programs. 
Consequently, many states prefer to use their own programs to address sites, including sites with risks high enough to make them potentially eligible for the National Priorities List. If a state does not want to assume responsibility for a cleanup, it can turn the site over to EPA. The states also have a greater role in deciding which sites get listed because EPA, as a matter of policy, seeks the relevant state governor’s concurrence before listing a site. EPA was required to seek concurrence under appropriations laws for fiscal years 1995 and 1996 and has since continued the practice. According to EPA officials, some governors are reluctant to concur, because placement on the list stigmatizes a site as one of the worst in the country, thus discouraging development. As of February 1999, governors had opposed the listing of 31 sites and supported the listing of another 123 sites. Since 1995, EPA has proposed only one site for listing without the relevant governor’s concurrence. Given increases in the states’ ability to address sites combined with EPA’s policy of seeking the relevant governor’s concurrence, EPA does not propose an eligible site for the National Priorities List until it enters into negotiations with the state to determine whether the state plans to take any action at the site through its own programs. If EPA anticipates that the state will clean up the site, the agency usually assigns the site a low priority for listing, according to cleanup managers from the four EPA regions. In addition, the managers said that they typically do not take any further action at these sites unless the state subsequently asks EPA to list the site. Therefore, according to these cleanup managers, decisions to propose sites for listing on the National Priorities List are not based primarily on the sites’ relative risks. 
Instead, states turn sites over to EPA for cleanup under the Superfund program when they have difficulties in getting responsible parties to pay for the cleanup, for example, or when they encounter a complex cleanup, such as one addressing groundwater problems. Consequently, EPA cleanup managers expect that future National Priorities List sites will be large, complex, and thus costly to clean up or will have either recalcitrant or no financially viable responsible parties to help pay for the cleanup. This trend could influence the future number and types of sites on the list. In the late 1980s to early 1990s, EPA proposed about 76 sites, on average, per year for listing. In the mid 1990s, this number dropped to 28 because EPA decided to concentrate more on completing cleanups for sites already listed. Although EPA has recently stated that it expects to return to an average listing rate of about 40 sites, this workload may depend on states’ concurrence. As the states’ roles in cleaning up high-risk sites have increased, EPA cleanup managers have noted that they do not know to what extent all high-risk sites are being addressed and cannot respond to public inquiries about the status of cleanups at the sites that the states are addressing. EPA administers the federal cleanup of abandoned hazardous waste sites that pose significant risks to public health and the environment. As the states increasingly take responsibility for cleaning up these sites outside the Superfund program, EPA must rely on the states to report on the sites’ status. Sometimes a state may later ask that EPA increase its involvement at a state-run cleanup. In order to plan their own workload and to respond to public inquiries, three of the four EPA regions in our review said that they would like more information on the status of state-led cleanups at certain high-risk sites that posed particular concerns for the regions. 
For example, cleanup managers in EPA’s Region I in Boston predicted that although only a handful of sites from their region would be placed on the National Priorities List in the future, they plan to monitor about 50 additional high-priority sites being addressed under state programs because of risks posed by particular hazards, recalcitrant responsible parties, or community concerns. Cleanup managers in Region IV in Atlanta expressed similar interest in tracking the status of certain high-priority sites in their region that are being cleaned up outside the Superfund program. Region II in New York is already piloting a project to monitor the progress of state cleanups to better plan its own Superfund workload if the state later decides to turn sites over to EPA, as well as to provide better information to the public on the status of these cleanups. Under the pilot project, EPA and New York are trying to electronically link the state’s database on the status of sites with EPA’s CERCLIS database so that EPA can better track sites. According to these regional officials, one reason for New York’s willingness to cooperate in this effort is that it could speed up the process of eliminating sites from further consideration for the Superfund program. On the other hand, cleanup managers in Region V said that they have enough information on the status of cleanups and do not need a tracking system because the states send letters to EPA notifying it that cleanups are either under way or complete. In November 1998, we reported on the need for better communication and coordination between federal and state officials to set priorities and determine cleanup responsibilities for high-risk sites. As of October 1998, according to EPA, the agency had about 10,400 sites in CERCLIS—the database EPA uses to track hazardous waste sites. Of these sites, 5,977 need further assessment or are candidates for removal from CERCLIS because no further EPA action is required. 
About 1,400 sites are on the National Priorities List. EPA classified the remaining 3,023 sites as potentially eligible for listing on the basis of the hazard ranking system. (See fig. 2.1.) Of these 3,023 sites, we found that approximately 1,234 have ongoing or completed cleanups outside of Superfund or are misidentified as eligible. The disposition of most of the remaining 1,789 sites, 307 of which were considered among the highest-risk sites, was uncertain. The state and federal cleanup managers did not know who would address them, under what programs, whether responsible parties would participate, or when the cleanup actions would begin. As a result of our findings, we recommended in November 1998 that EPA regions and the states coordinate their efforts to ensure that the highest-risk sites are addressed, assigning a lead agency as necessary. In response, the agency is planning to further assess the 307 highest-risk sites to determine whether it needs to take any immediate cleanup actions at these sites through its short-term removal program. For those sites that EPA and states have agreed should be placed on the National Priorities List, EPA does not use relative risk to decide which ones to list first. Although EPA initially uses its hazard ranking system as a screening tool to determine a site’s eligibility for listing, other factors, such as a governor’s concurrence or EPA’s inability to identify a responsible party willing to conduct the cleanup, will determine when EPA decides to propose a site for listing. Three of the four federal agencies with the largest cleanup workloads—the departments of Agriculture, Defense, and Energy—have implemented systems to set cleanup funding priorities on the basis of the relative risk sites pose. The Department of the Interior has not developed a central database of hazardous waste sites, estimated the resources it needs to address them, or developed an overall strategy to manage its cleanup workload. 
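The CERCLIS figures above reconcile arithmetically. As a quick check, using only the counts reported in this chapter:

```python
# Reconciling the CERCLIS site counts the report cites as of October 1998.
total_cerclis = 10_400        # hazardous waste sites tracked in CERCLIS
further_assessment = 5_977    # need assessment or qualify for removal from CERCLIS
on_npl = 1_400                # sites on the National Priorities List (approximate)
potentially_eligible = 3_023  # classified as potentially eligible for listing

# The three categories account for the full CERCLIS inventory.
assert further_assessment + on_npl + potentially_eligible == total_cerclis

# Of the potentially eligible sites, about 1,234 were being addressed outside
# Superfund or were misidentified; the disposition of the rest was uncertain.
addressed_or_misidentified = 1_234
uncertain = potentially_eligible - addressed_or_misidentified
print(uncertain)  # 1789 sites of uncertain disposition, 307 of them highest-risk
```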
Given that current estimates predict federal agencies could spend more than $300 billion to clean up contaminated federal facilities, it is imperative that they spend this money effectively. Since 1995, we have encouraged these agencies to set risk-based priorities for applying their cleanup dollars to the backlogged sites waiting to be addressed. In 1995, the Federal Facilities Environmental Restoration Dialogue Committee—consisting of representatives from federal, state, local, and tribal governments, as well as citizens’ groups and labor organizations—reached a consensus that risk should be a primary consideration, among other factors, in setting cleanup priorities at federal facilities. These other factors include the cost-effectiveness of the cleanup remedies and their responsiveness to any cleanup requirements and concerns from the communities surrounding a facility. Likewise, in 1995, the Administrator announced EPA’s intention to promote risk-based priority setting at federal facilities and sites. The federal agencies in our review have responded to this call for setting risk-based cleanup priorities to varying degrees. (See table 2.1.) In the early 1990s, Agriculture’s Forest Service, which has accomplished the most significant portion of the Department’s cleanup activities to date, implemented a process to rank and fund sites on the basis of their relative risks. The Forest Service manages the National Forest System, including remote public lands that have been contaminated by the activities of other parties. However, in 1996, we reported that the Forest Service had made limited progress in completing an inventory of its potential hazardous waste sites, such as mining waste sites, a critical first step for effectively establishing priorities. Since that time, the Forest Service has made a concerted effort to identify its universe of sites and develop an inventory of them. 
The Forest Service has also used the results of its inventory to fund cleanups of the sites posing the most serious risks, while also requiring the parties responsible for the contamination to pay for some of the cleanups. According to Agriculture’s coordinator for hazardous waste cleanups and the Forest Service’s chief engineer in charge of cleanups, the Forest Service, as of January 1999, had completed its inventory of underground tanks, landfills, and abandoned hard rock mining sites. In completing the mining site inventory, which encompasses the largest number of sites remaining to be addressed, the Forest Service set standard procedures for its regions to identify sites with the potential to release hazardous substances and pose risks to human health and the environment. The regions ranked each site as posing a high, medium, or low relative risk depending on the presence of mining wastes or discharges; the site’s proximity to sensitive environments, such as wetlands; and applicable regulatory cleanup requirements. As a result of this nationwide inventory, the Forest Service has identified a total of approximately 39,000 abandoned mine sites, of which an estimated 1,800, or about 5 percent, are considered high priorities because they are or could be releasing hazardous substances. The Forest Service has not yet completed an inventory of sites contaminated by Defense activities on its lands, such as sites containing unexploded ordnance. This is mainly because the Forest Service has had very little information about these sites, according to Agriculture’s coordinator for hazardous waste cleanups. Defense and Agriculture have not fully implemented a 1988 memorandum of understanding for cooperation between the two agencies on this issue. Recently, Defense provided the Forest Service with a list of sites that Defense had used in the past and that the Forest Service now manages. 
However, the Forest Service would like Defense, which may be a potentially responsible party at these sites, to better identify its activities at those sites and the hazards that may be associated with those activities. As a start, the Forest Service would like to be included in Defense’s process for setting cleanup priorities and standards when addressing sites on National Forest System lands. In this way, Defense and Agriculture could begin to work together to clean up sites on lands that Defense could have contaminated. To help ensure that the federal agencies address these sites, EPA plans to establish a workgroup in the spring of 1999, according to the associate director of the Federal Facilities Restoration and Reuse Office. The workgroup will initially consist of EPA representatives and, later, other federal agencies to discuss how to accurately characterize the risks at these sites, set priorities among them, and fund their cleanups. Furthermore, according to a senior official in the Office of the Deputy Under Secretary of Defense for Environmental Security, Defense, Agriculture, and Interior are in the process of finalizing a new memorandum of agreement to establish the Inter-Agency Military Land Use Coordination Committee. The committee consists of senior policy officials from Defense, Agriculture, and Interior and has established five subgroups addressing issues such as the contamination and cleanup of public lands. The Forest Service has used its inventories to set cleanup goals, justify requests for additional cleanup funds, and allocate the funds it receives. On the basis of data from its inventories, the Forest Service has set a goal to clean up all of its high-priority hazardous waste sites by 2045, at a cost of approximately $2 billion. Funding for the Forest Service’s hazardous waste program has increased in recent years from approximately $7 million in fiscal year 1997 to approximately $12.5 million in fiscal year 1999. 
Furthermore, the administration has requested over a 70-percent increase for Forest Service programs, to $21.5 million for fiscal year 2000. This request is based on the Forest Service’s inventories of hazardous waste sites. To develop the Forest Service’s annual funding request, the regions select sites to submit to a round table of regional and headquarters staff for priority ranking. For example, the Forest Service’s regional office in Utah forwards approximately 10 to 15 sites to the round table each year depending on the relative risks the sites pose, the size of the cleanup workload the region can manage, and the degree to which parties responsible for the contamination are available and able to pay for the cleanup. The round table then ranks the sites from all nine regions using six factors, two of which specifically address risks to human health and the environment. The remaining four factors address such things as the cost-effectiveness of the proposed cleanup actions and any applicable statutory or regulatory cleanup requirements. The Forest Service uses this list to justify its cleanup budget requests to Agriculture and, eventually, the Congress. Agriculture’s coordinator for hazardous waste cleanups explained that once the Forest Service receives its cleanup budget, it allocates funds to the regions according to their ranked list of priorities, and the regions in turn spend the funds following these priorities. To supplement its limited cleanup funds, the Forest Service places priority on requiring responsible parties to clean up sites on its lands. In recent years, with EPA’s assistance, the Forest Service and Agriculture have negotiated and issued cleanup orders to these parties and sought reimbursement for its cleanup costs from responsible parties under CERCLA. 
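The round-table ranking described above can be sketched as a simple multi-factor scoring exercise. The factor names, scoring scale, and example scores below are illustrative assumptions only, not the Forest Service's actual criteria; the report specifies just that six factors are used, two of them addressing risks to human health and the environment.

```python
# Minimal sketch of a round-table site ranking: six factors, two risk-related.
# All factor names, weights, and scores here are hypothetical.
FACTORS = [
    "human_health_risk",          # risk to human health
    "environmental_risk",         # risk to the environment
    "cost_effectiveness",         # of the proposed cleanup action
    "regulatory_requirements",    # applicable statutory/regulatory drivers
    "responsible_party_outlook",  # availability of parties able to pay
    "regional_capacity",          # workload the region can manage
]

def rank_sites(candidates):
    """Return candidate sites ordered by total score, highest first."""
    return sorted(candidates,
                  key=lambda s: sum(s["scores"][f] for f in FACTORS),
                  reverse=True)

submissions = [
    {"name": "Mine A", "scores": dict.fromkeys(FACTORS, 2)},
    {"name": "Mine B", "scores": {**dict.fromkeys(FACTORS, 2),
                                  "human_health_risk": 5,
                                  "environmental_risk": 5}},
]
ranked = rank_sites(submissions)
print([s["name"] for s in ranked])  # ['Mine B', 'Mine A']
```

In practice, the ranked list feeds the agency's budget request and then governs how the regions spend the funds; the Forest Service also supplements these appropriated funds by pursuing responsible parties under CERCLA, as the report notes.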
For example, in fiscal year 1998, Agriculture estimates that the Forest Service was able to leverage its cleanup funding to produce more than $100 million in cleanup work funded by responsible parties. In 1994, Defense implemented a consistent process for identifying and funding most sites according to risk within each of its five environmental components—one for each of the three military services, one for all Defense-wide agencies, and one for formerly used Defense sites. Following detailed guidance from the Department’s Environmental Security Office, each of the components evaluates its sites and categorizes them into groups, depending on whether they pose high, medium, or low relative risks to human health and the environment. These components evaluate the nature and concentration of the site’s contaminants, the possible pathways for the contaminants to move from the site, and the opportunities for the contaminants to come in contact with humans. If the service or agency does not have enough information to evaluate a site, it must schedule the site for further study and conduct an interim cleanup action to address any immediate threats to public health and the environment. The components use the results of their relative risk evaluations to develop their budget requests and allocate funds accordingly. Each Defense component has its own, separate appropriations account for environmental restoration and decides for itself what percentage of its high-risk sites it will fund in any given year. In 1997, the most recent year for which data are available, Defense spent about 82 percent of its cleanup dollars (on average, departmentwide) on high-risk sites, for those sites evaluated. The Department does not set priorities for sites among its components, and they do not compete against each other for environmental funding on the basis of the relative risks at their sites. 
According to the manager within the Department’s Office of Environmental Security in charge of tracking cleanups, this has not been necessary because the components have been receiving sufficient appropriations to conduct scheduled cleanups. This would also be somewhat difficult, according to Defense’s environmental budget examiner, because once funds are appropriated, the Department cannot transfer funds between two appropriations accounts without obtaining statutory authority. Even though the Department does not set nationwide priorities, it has established a set of nationwide cleanup goals—to have cleanup remedies in place at 50 percent of all installations’ high-risk sites in 2002 and at 100 percent of these sites in 2007. The Department requires each component to annually certify that it is sufficiently funding its installations to meet these goals and biannually reviews the components’ progress. If the Department determines that a component has not included sufficient funds in its annual budget request to meet its goals for that year, the Comptroller will require the component to revise its request, according to the Defense manager in charge of tracking cleanups. In response to congressional concerns that the components were identifying too many sites in the high-risk category when requesting cleanup funding, in August 1997, the Office of Environmental Security developed more precise criteria for classifying sites’ risks as high, medium, or low. In 1998, we found that for more than 99 percent of the 6,000 sites we analyzed, the components’ classifications were consistent with the new definitions. We recommended, however, that the Department provide more specific Defense-wide categories to aid in priority setting. The Department’s Secretary for Environmental Security does not agree with this recommendation, stating that more refined categories of risk are currently being applied by individual installations. 
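A relative-risk evaluation of the kind Defense's components apply, scoring a site's contaminant hazard, migration pathway, and receptor exposure and then binning the result, might be sketched as follows. The 1-to-3 scale and the cutoffs are invented for illustration; they are not the Department's actual criteria.

```python
# Hypothetical sketch of a three-factor relative-risk categorization.
def relative_risk(contaminant_hazard, pathway, receptors):
    """Return a risk category, or 'evaluate' when data are insufficient."""
    factors = (contaminant_hazard, pathway, receptors)
    if any(f is None for f in factors):
        # Insufficient information: the site is scheduled for further study
        # (with an interim action if there is an immediate threat).
        return "evaluate"
    score = sum(factors)  # each factor scored 1 (low concern) to 3 (high)
    if score >= 8:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(relative_risk(3, 3, 2))     # high: concentrated waste, open pathway
print(relative_risk(1, 2, 2))     # medium
print(relative_risk(3, None, 2))  # evaluate: pathway data missing
```

The design point the report highlights is that missing data do not default a site to "low"; they trigger further study, which guards against under-prioritizing poorly characterized sites.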
In 1995, Energy developed procedures for considering the relative risks of its environmental management activities to help its facilities, or operations offices, establish cleanup funding priorities. Since then, Energy has continued to set priorities for its cleanups on the basis of risk and other factors at its operations offices. The Department’s Office of Environmental Management requires its operations offices, in preparing their budget requests, to rank all of their proposed environmental management projects, which include CERCLA cleanups; activities required to maintain safe operations related to nuclear materials; and activities required to close, clean, and transfer property. Operations offices are to classify proposed cleanup projects as high, medium, or low priorities considering, at a minimum, seven core criteria, three of which relate to the level of risk to workers, the public, and the environment. The Department’s Office of Environmental Management also provides guidance for operations offices to evaluate projects in four additional areas: (1) compliance with federal, state, and local cleanup regulations; (2) support for crucial operations at the site; (3) the potential for reducing costs; and (4) responsiveness to local citizens’ concerns. Considering the results of the evaluations in these seven areas, each field office uses its own priority-setting system to rank its cleanup projects along with all of its other environmental management projects and submits this combined list to headquarters as part of the Department’s budget request. Once the operations offices receive their budget allocations, they distribute the funds across projects according to the rank-ordered list. For example, Rocky Flats, a facility that is preparing for closure, has assigned the highest priority to those environmental management projects needed to maintain safety, such as security systems for the plutonium and other special nuclear materials at the site.
In its overall plan to close the facility, Rocky Flats used its own system to qualitatively rank-order all remaining projects, including CERCLA cleanup projects, and established a time line to complete them. To develop the sequence of projects, program managers considered the extent to which each one (1) reduces risks to human health and the environment as well as costs, (2) helps the facility progress toward closure, (3) cleans up the site, (4) complies with regulatory requirements, and (5) improves contractors’ performance and the site’s overall management. According to the risk expert at Rocky Flats, the resulting rankings will remain relatively constant from year to year; however, the time line or sequence for completing the projects may change. Because EPA was concerned that CERCLA cleanups may not rank high enough when compared to other efforts to clean up or manage nuclear materials, Energy agreed to work with EPA’s federal facilities manager for Rocky Flats to annually select and fund a maximum of 12 of EPA’s highest-priority environmental cleanup projects. EPA’s federal facilities manager then monitors to ensure that Rocky Flats completes these projects in accordance with this cleanup agreement. The Oak Ridge Operations Office, an active facility where CERCLA cleanups compete with ongoing operations, also assigns the highest priority to funding the environmental management activities needed for safety at its facilities. To set priorities among the remaining environmental projects, including the on-site treatment of waste, CERCLA cleanups, and the demolition of contaminated buildings, Oak Ridge uses its own quantitative system to rank-order these activities, following Energy’s seven relative risk areas. The Operations Office reviews and adjusts this ranking as necessary two to three times a year.
Using these rankings in accordance with their 1992 cleanup agreement, EPA and the Oak Ridge Operations Office are to select their highest-priority environmental cleanup projects and set time lines for completing them. Recently, however, EPA has not been satisfied with its level of involvement in these decisions and is concerned because fiscal year 1999 is the third consecutive year that Energy has postponed funding some cleanup activities and asked EPA to extend the time lines of its cleanup agreement. Consequently, EPA has begun the formal dispute process provided for in the cleanup agreement with Energy. The agreement establishes penalties and possible fines for failure to comply with its schedule or terms. Energy does not allocate funds across operations offices according to any nationwide ranking of projects. We previously reported that Energy would continue to make limited progress in cleaning up environmental problems if it did not set national priorities for cleanups. Thus, we recommended that Energy set national priorities and allocate its resources accordingly. To date, the agency has not adopted this recommendation. According to a senior official in charge of strategic planning, Energy prefers to allow local decision-making when setting priorities among its environmental management projects because of the unique requirements posed by local regulations, community concerns, and the types and extent of contamination. As a result, according to a senior analyst in Energy’s Office of Budget and Planning, Energy continues to allocate an environmental management budget to each operations office that is based on the extent and nature of the work required at its sites and the risks they pose. The amount of the environmental budget for each operations office varies very little from year to year, but the amount that operations offices use for CERCLA cleanups can vary substantially, depending on other competing funding priorities at the facility. 
For example, a contractor’s unforeseen costs at one facility resulted in delays of certain CERCLA cleanups agreed to with EPA. Energy also had previously stated that it could not adopt a nationwide priority system because it did not have the necessary data to do so. However, several of Energy’s senior environmental managers at Oak Ridge and Rocky Flats and EPA’s federal facilities managers for Energy’s Hanford facility and Idaho National Engineering and Environmental Laboratory stated that it is feasible for Energy to produce a national list of risk-based cleanup priorities. Energy officials acknowledge that although it may be technically feasible, it is not practical and could be counterproductive, stating that altering agreements reached with local stakeholders to accommodate a new national, risk-based prioritization scheme would cause significant disruption and legal challenges. As we reported in 1997, Interior—in particular, its bureau with the largest number of potential cleanups, the Bureau of Land Management (BLM)—does not have a comprehensive inventory of its hazardous waste sites, an essential step for setting risk-based cleanup priorities. Early estimates by BLM indicate that it faces a substantial cleanup workload, potentially costing billions of dollars, yet it has not systematically assessed the full extent of its cleanup problems. For example, in 1996, BLM estimated, on the basis of available data from sources such as the U.S. Bureau of Mines and the U.S. Geological Survey, that it had 70,000 to 300,000 abandoned mining sites on its lands. On the basis of sample field tests, BLM further estimated that about 4 to 13 percent of these sites—a range of 2,800 to 39,000 sites—may have contaminated material that poses potential risks to human health and the environment and must be addressed. 
Assuming that Interior would, at a minimum, conduct short-term removal actions at these sites, which can cost up to about $40,000 each, according to the manager of BLM’s hazardous materials program, the cost of cleaning up these sites could range from $112 million to $1.5 billion, while more extensive cleanups could cost billions of dollars. These estimates do not include the costs of cleanups at other BLM sites, including landfills, illegal dumping areas, and underground storage tanks. BLM employs a reactive approach to cleanup by addressing hazards at sites after they have been identified by other federal and state agencies. Sometimes hazards are identified following injuries to citizens or livestock. The resulting patchwork of information is insufficient to develop an effective cleanup strategy for addressing the worst sites first. Furthermore, by not getting responsible parties to perform or pay for cleanups under CERCLA, BLM could cause the federal government to incur greater costs in the long run. Cleanup managers differ on the need for an inventory. For example, the manager of BLM’s hazardous materials management program has not supported the development of a more comprehensive inventory of BLM’s hazardous waste sites, stating that each of BLM’s 12 state offices has already discovered most large hazardous waste sites. However, several BLM hazardous waste specialists in the field disagreed, stating that some field offices continue to find large, high-risk sites each year. These specialists believe that a comprehensive database of known sites and a process for identifying new sites would help the field offices better identify high-priority sites, develop a cleanup strategy, and justify cleanup budget requests. Although BLM’s state offices have completed portions of inventories and headquarters is beginning to electronically organize these portions, the data constitute a patchwork of inconsistent information.
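The cost range above follows directly from BLM's own estimates; the arithmetic can be reproduced as follows (the $40,000 figure is the cited upper-bound cost of a single short-term removal action):

```python
# Reproducing BLM's estimated range of contaminated abandoned mine sites
# and the resulting short-term removal cost range cited above.
low_sites, high_sites = 70_000, 300_000  # estimated abandoned mining sites
low_share, high_share = 0.04, 0.13       # share with contaminated material

low_contaminated = round(low_sites * low_share)     # 2,800 sites
high_contaminated = round(high_sites * high_share)  # 39,000 sites

cost_per_removal = 40_000  # upper-bound cost of one short-term removal action
low_cost = low_contaminated * cost_per_removal      # $112 million
high_cost = high_contaminated * cost_per_removal    # $1.56 billion (~$1.5B)
print(f"${low_cost / 1e6:.0f} million to ${high_cost / 1e9:.2f} billion")
```

Note that these figures cover removals only; remedial actions at high-risk sites would push the total well beyond this range, as the report observes.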
Environmental and abandoned mines specialists we talked with, representing six of BLM’s state offices, have either attempted their own surveys or have relied on joint efforts with other federal agencies and states. For example, BLM’s Nevada office participated in a small pilot program to identify environmental problems associated with abandoned mine sites but does not have an inventory of other sites, such as landfills and illegal dump sites. A lack of funds is the primary reason why BLM is reluctant to proactively survey its lands, according to BLM environmental managers at headquarters and in the field. The state field offices use much of BLM’s annual hazardous materials budget—approximately $15 million in fiscal year 1998—to conduct emergency removals of hazardous materials. According to BLM, it accomplishes hundreds of such cleanup actions each year, ranging from the removal of debris that has been dumped on public lands illegally to the closure of water-polluting abandoned mines. As a result, developing an inventory is often not a high priority. In comparison, however, the Forest Service, with a similarly small environmental cleanup budget, was able to complete its inventory by funding it over several years, leaving some money for ongoing cleanups and removals on an annual basis. Several of BLM’s state offices have leveraged state funds to complete portions of inventories. For example, BLM’s state offices in Wyoming, Colorado, and Montana have mine inventory data because the states paid for surveys using a federal reclamation fund financed by coal-mining fees. Some BLM state offices have been able to develop information on abandoned mines by participating in several water quality initiatives. For example, BLM works with other federal agencies and state and local authorities to obtain funding under the administration’s Clean Water Action Plan to clean up watersheds selected by the state. 
As part of this effort, it is necessary to identify and survey surrounding abandoned mines as possible sources of the watersheds’ contamination. Another reason BLM is reluctant to identify potential hazardous substance release sites, according to both the manager of the hazardous materials management program and experts at BLM’s National Applied Resource Sciences Center, is that BLM officials believe, once the sites are identified, BLM may be held financially liable for thousands of abandoned sites that it did not contaminate, particularly abandoned mine sites. Furthermore, these officials worry that once the hazardous waste sites are identified, EPA will place the sites on the federal facility docket and the sites will then be subject to what the officials perceive as the burdensome and costly requirements of a remedial action under CERCLA. BLM has no comprehensive strategy for managing the cleanup of its sites and has been reluctant to seek reimbursement for cleanup costs or issue orders to responsible parties to clean up sites under CERCLA as part of, or in conjunction with, other cleanup programs. While the Forest Service has used these CERCLA authorities to get responsible parties to pay for or perform cleanups, BLM has not yet adopted a similar cleanup enforcement strategy. BLM managers gave several reasons for their reluctance to get responsible parties to perform or pay for cleanups under CERCLA at more sites. First, BLM officials do not see the benefit of expending large portions of their small cleanup budget on what they describe as the expensive and time-consuming investigations and analyses required for a remedial action under CERCLA. Agency officials contend that, unlike the Forest Service, BLM has less chance of finding responsible parties to pay the cleanup costs for most of these mines because BLM’s mines are decades old. 
Second, BLM prefers to conduct short-term removals, rather than the longer-term and generally more expensive cleanups sometimes required for remedial actions under CERCLA, because in the vast majority of cases, the managers contend, these are sufficient for the types of sites and the level of risk they pose. While this could be true for a large portion of its sites, early BLM estimates still indicate that BLM may have a number of high-risk sites to address that may require more extensive cleanups. Third, BLM has increased its use of watershed initiatives as the programmatic vehicle for conducting cleanups because they involve fewer detailed procedures and because funding is available for them. However, the disadvantage of these types of cleanups, according to an EPA environmental specialist for federal facilities, is that they focus on surface water and may ignore other problems, such as contaminated groundwater. In any case, BLM could still use its enforcement authority under CERCLA in conjunction with watershed initiatives, as the Forest Service does, to get more cooperation from responsible parties, so that taxpayers’ money can be used more effectively to clean up sites where no responsible parties can be found. Since 1995, Interior has set aside approximately $10 million annually to fund relatively long-term and large-scale cleanup projects, and it allocates these funds according to the relative risks posed by the sites. A technical review committee consisting of staff from the Department and its bureaus meets annually to review and rank the sites the bureaus submit and monthly to monitor the cleanup progress at the sites that have been selected and funded. The committee considers four factors: (1) the risks posed to human health and the environment, (2) applicable legal and regulatory cleanup requirements, (3) the potential for responsible parties to participate in the cleanup, and (4) the estimated time and cost of the cleanup.
According to officials in Interior’s Office of Environmental Policy and Compliance, once the Department receives its annual appropriation, it allocates the $10 million to cleanup projects in accordance with priorities set by the committee. Interior funded cleanups at nine sites in 1997 under this process. In 1998, the Department continued funding seven of these sites and added three others. Several BLM cleanup managers have not submitted sites to compete for these funds because they either did not know that the funding was available or had erroneous information about how the process works. For example, some cleanup managers were discouraged from submitting sites because they believed that most of the funds were already earmarked for a handful of large, complex sites on EPA’s National Priorities List for several more years in the future. Although senior environmental program managers at Interior acknowledged that one National Priorities List site did consume most of the funding for a few years, Interior’s cleanup responsibilities at this site are ending. Consequently, according to these managers, more funds are becoming available for other high-risk sites in the future. In addition, BLM cleanup managers stated that they thought only remedial cleanup actions qualified for funding and, because the vast majority of their cleanups are removals, they rarely, if ever, submit cleanups for consideration. Interior officials acknowledged that the distinction is not clear but said that the Department has sometimes funded large-scale removals. EPA is setting risk-based funding priorities for cleanups at sites on its National Priorities List.
EPA will not necessarily be listing the highest-risk sites in the future, however, because states are more frequently deciding which sites to ask EPA to list for Superfund cleanups and are basing these decisions on factors such as the technical complexities of a cleanup and the availability of responsible parties to share in the cleanup costs. Currently, it is uncertain who will take responsibility for cleaning up approximately 1,789 sites that are potentially eligible for the National Priorities List, 307 of which are considered among the highest-risk sites. Unless EPA regions work with their states to implement our earlier recommendation to determine who is responsible for each site’s cleanup, as well as to better share information on the status of certain high-risk sites that were found eligible for the National Priorities List and are now being addressed by the states, the agency cannot manage its own workload in the event that the states seek EPA’s assistance in the future. Nor can EPA respond to community and congressional inquiries about the cleanup status of some of the riskiest sites. Each of Defense’s services and agencies is also setting risk-based funding priorities for its sites nationwide. The Department is no longer setting funding priorities nationwide across these services and agencies, in part because each of them now receives its own cleanup appropriation and has been receiving enough funds to complete planned cleanups in recent years, according to agency budget officials. To some extent, Defense did consider its nationwide priorities when it implemented its set of long-term goals for completing cleanups at all of its sites. It is also considering these priorities as it monitors its progress toward achieving these goals. However, we did find that Defense may not have fully coordinated its cleanup efforts with the Forest Service to address hazards that Defense may be responsible for on National Forest System lands. 
Until it does so, some federal waste sites may not be adequately addressed. Energy has instituted a risk-based prioritization scheme for its operations offices but does not set priorities nationwide among these offices. Although the Department believes that local priority setting is more appropriate because each facility has unique local regulations, community concerns, and contamination problems, we continue to believe that unless Energy sets nationwide priorities, it cannot make the most informed budget decisions and support budget trade-offs among its facilities, as necessary. Until the Bureau of Land Management and, therefore, the Department of the Interior, define the extent of their cleanup responsibilities, determine the strategies they will use to pursue cleanups, and consider how to use CERCLA as a tool in this strategy, they cannot present strong justification for more cleanup funds or effectively set priorities for using their current cleanup resources. As a result, thousands of sites on BLM lands could continue to pose risks to human health and the environment, and federal cleanup costs could rise if responsible parties are not found and made to pay for the sites’ cleanup. To help EPA regions better plan their cleanup workload and be responsive to local communities’ concerns about hazardous waste sites in their areas, we recommend that the Administrator, EPA, task the agency’s regional offices to work with the states in their regions to determine how to share information on the progress of cleanups at those sites of highest risk or concern considering any successful efforts currently under way in the regions. 
To ensure that all federal waste sites are being adequately addressed, we recommend that the Secretary of Defense and the Secretary of Agriculture direct the Deputy Under Secretary for Environmental Security and the Chief of the Forest Service, respectively, to work together to clarify cleanup requirements for lands with former or current Defense activities that may pose risks to human health and the environment. Furthermore, we recommend that the Department of Defense, in consultation with the Department of Agriculture, work to ensure that these cleanup requirements are met. To more effectively use its limited cleanup funds and better leverage funds from responsible parties to clean up its hazardous waste sites so as to protect the public and the environment, we recommend that the Secretary of the Interior direct the Assistant Secretary for Policy, Management and Budget; the Assistant Secretary for Lands and Minerals Management; and the Solicitor of the Interior to work together to ensure that the Bureau of Land Management (1) develops a national database for all of its known hazardous waste sites and abandoned mine sites; (2) develops and implements a strategy for updating its national database, which includes collecting new information on potential hazardous waste sites and abandoned mines in a consistent manner across all of its state offices; (3) develops and applies a mechanism for setting cleanup priorities among sites on a nationwide basis using risk and other factors, as appropriate; (4) develops a comprehensive cleanup strategy, including specific goals and time lines for cleaning up the sites, on the basis of their risk-based priorities; and (5) develops nationwide procedures for conducting searches of potentially responsible parties and for using CERCLA authorities, where appropriate, to get more responsible parties to perform or pay for cleaning up contamination; and that all of Interior’s bureaus and regional offices understand the purpose and size of the
Department’s Central Hazardous Materials Fund and the criteria the Department uses to allocate dollars to cleanups, including both remedial and removal actions. We met with or obtained comments from cleanup program managers from EPA, Agriculture, the Forest Service, Defense, Energy, Interior, and the Bureau of Land Management, who generally agreed with our findings and recommendations, with one exception. The agencies also suggested several changes for technical accuracy and clarity, which we incorporated where appropriate. EPA agreed with our findings and acknowledged that it needed to work with the states to coordinate cleanups and obtain the information needed to track the status of state cleanups. Agriculture and Defense fully concurred with our findings and agreed to our joint recommendation to their agencies to better coordinate their efforts to clean up previously used Defense sites. Energy disagreed that it needed to act on our earlier recommendation to adopt a nationwide risk-based process for setting priorities among its sites. The Department stated that all of its operations offices receive a relatively stable budget that is based on the general needs and risks of their environmental management activities. Once the operations offices receive their budgets, they determine their own priorities for cleanup. Energy stated that local control of priority setting is preferable to a national strategy because each site has unique regulatory requirements, community concerns, and contamination. Nevertheless, we continue to maintain that developing nationwide cleanup priorities would help the Department to make informed budget decisions and analyze trade-offs among its facilities. Interior and its Bureau of Land Management generally agreed with our findings and said they would develop a plan for addressing our recommendations. However, BLM provided several points of clarification.
First, BLM did not think it was cost-effective to undertake a comprehensive inventory of sites, stating that it currently has more cleanups than it can fund and already knows its worst sites. We continue to maintain, however, that BLM cannot effectively use its limited cleanup funding until it determines the extent of its cleanup workload and sets risk-based priorities for its cleanups. Furthermore, we determined that BLM state offices continue to find high-risk sites each year. Second, BLM stressed that it uses other authorities besides CERCLA, such as the Mining Law of 1872, to address some sites and has always had a policy that the polluter should perform the cleanup work wherever possible. We acknowledged BLM’s use of these other authorities in our report but continue to recommend that the agency more effectively include CERCLA as one of the tools available for obtaining the full cooperation of parties potentially responsible for contamination in conducting and paying for cleanups. Third, BLM asked us to acknowledge that it has taken actions such as removing debris at sites and closing abandoned mines for safety reasons, and we added this information to the report. Finally, BLM stated that another reason it rarely, if ever, nominates sites for funding from Interior’s Central Hazardous Materials Fund is because it believes the proposed cleanup must be a remedial, not a removal, action. However, Interior officials stated that large-scale removals sometimes qualify for funding. We revised our report to include BLM’s reason for not nominating sites for funding. However, we believe that BLM’s uncertainty about whether removal actions qualify for funding underscores our finding and recommendation that Interior needs to more clearly communicate the criteria it uses to allocate cleanup funds. 
The federal government has lost the opportunity to try to recover up to $2 billion from responsible parties because EPA has assessed them only a portion of the indirect costs that it incurred in operating the Superfund program. Although the ultimate goal of the program is to clean up sites, EPA may recover its costs, including indirect costs, from responsible parties. In response to new federal cost accounting standards, EPA is revising its indirect cost rate to more fully account for costs, but the agency has not applied the revised rate to parties for cost recovery purposes. In addition, EPA cannot evaluate how well it is recovering costs because it has not established performance measures that compare what it could have recovered with what it actually recovered. Finally, the agency is improving its information systems so that it can (1) better determine costs and locate key supporting evidence and (2) better track the status of recoveries. EPA has met one of its primary goals—getting responsible parties to pay—for more than 70 percent of the long-term cleanups conducted over the past few years. However, EPA has had less success changing cost recovery policies that exclude a significant portion of its indirect cleanup costs from its cost recovery efforts. When EPA pays for the costs of cleanups, it incurs both direct and indirect costs. Direct costs are those that can be attributed directly to a site, such as the cost to pay a contractor to remove hazardous waste from the site. Indirect costs are those that cannot be attributed to an individual site and, thus, are prorated across all sites, such as the administrative costs of operating the Superfund program. EPA’s current method of calculating the indirect cost rate excludes significant portions of the agency’s indirect costs. 
EPA estimates that since the beginning of the Superfund program, responsible parties have agreed to perform cleanups worth $15.5 billion, and it has spent about $15.9 billion to clean up hazards caused by private parties. EPA considers about $5 billion of its costs unrecoverable because, for example, financially viable responsible parties could not be found or the agency reached final settlements with responsible parties to pay less than all of the past cleanup costs owed to the agency. Of the remaining approximately $11 billion in Superfund expenditures, EPA had entered into agreements to recover about $2.4 billion, or 22 percent, through the end of fiscal year 1998. (See fig. 3.1.) Although EPA has obtained settlements to recover $2.4 billion, it has lost the opportunity to recover up to another $1.9 billion of indirect costs because it did not revise its indirect cost rate to include all appropriate costs. In the earlier years of the Superfund program, the agency took a conservative approach to allocating indirect costs to private parties because it was uncertain which indirect costs the courts would agree were recoverable if parties legally challenged EPA. Starting in 1989, we expressed concerns about this approach because of the substantial dollar amounts EPA was not seeking to recover and return to the federal Treasury. EPA recognized the need to revise this practice and, in 1992, proposed a rule that would have allowed it to expand the types of indirect costs it would attempt to recover. However, because it received so many negative comments on this proposed rule, EPA did not publish a final rule and did not increase its rate. Now, however, EPA has the opportunity to address this issue. In response to updated governmentwide accounting standards, EPA began to implement a new cost accounting system. As part of this process, EPA’s Financial Management Division developed a new indirect cost rate that will better account for the agency’s indirect costs.
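The recovery figures above follow from simple arithmetic; a minimal sketch (using the report's estimates through fiscal year 1998, not EPA data beyond those cited here) shows how the $11 billion balance and the 22-percent figure are derived:

```python
# Illustrative check of the Superfund cost-recovery figures cited above.
# Dollar amounts are in billions and are the report's estimates.
total_spent = 15.9        # EPA spending to clean up hazards caused by private parties
unrecoverable = 5.0       # no viable parties found, or final settlements for less

recoverable = total_spent - unrecoverable       # remaining Superfund expenditures
settled = 2.4                                   # amount covered by recovery agreements
share_recovered = settled / recoverable

print(f"Recoverable balance: about ${recoverable:.0f} billion")
print(f"Share under recovery agreements: {share_recovered:.0%}")
```

Running the sketch reproduces the approximately $11 billion balance and the 22-percent recovery share stated in the text.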
The director of EPA’s Program and Cost Accounting Branch, Financial Management Division, believes that the new rates, if implemented, could significantly increase the costs charged to responsible parties. According to EPA’s cost recovery program managers, they are waiting until the methodology used to develop the new rate is reviewed and approved by EPA management, the Department of Justice, and an independent accounting firm hired to review the methodology before adopting it for the Superfund program. According to EPA, the methodology could be approved by September 1999. If the program adopts the new rate, the agency could increase its recovery of indirect costs. For example, according to EPA’s estimates, through fiscal year 1998, the agency excluded about $1.3 billion in indirect costs at sites where it had not yet agreed to a final settlement with the parties. The agency estimates it could recover $629 million, or 49 percent, of these costs. EPA estimates it will not recover the remaining $662 million, or 51 percent, because, for example, there may be no financially viable parties at some sites. In addition, EPA regions may decide, as is consistent with the agency’s policy, not to pursue recoveries at sites where the total cleanup costs are less than $200,000 because such efforts may not be cost-effective. EPA’s existing cost recovery goals and measures do not allow the agency to effectively evaluate and improve its cost recovery performance. EPA’s cost recovery program managers stated that the agency’s current goals for the program are to seek the recovery of all funds expended at sites, where appropriate, and to take cost recovery actions at all cleanup sites before the agency’s authority to do so expires. However, EPA cannot use these goals to effectively monitor its performance because the goals do not fully reflect its progress in recovering costs. Since 1991, we have recommended additional goals and measures for the cost recovery program.
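The split of the excluded indirect costs cited above can likewise be verified arithmetically; a minimal sketch, using only the dollar estimates from the text:

```python
# Illustrative breakdown of the ~$1.3 billion in excluded indirect costs
# (EPA estimates through fiscal year 1998, in millions of dollars).
could_recover = 629     # estimated recoverable portion
wont_recover = 662      # e.g., no financially viable parties at some sites

excluded_total = could_recover + wont_recover   # 1,291, i.e. about $1.3 billion
print(f"Total excluded: about ${excluded_total / 1000:.1f} billion")
print(f"Potentially recoverable: {could_recover / excluded_total:.0%}")
print(f"Likely unrecoverable: {wont_recover / excluded_total:.0%}")
```

The two shares round to the 49 and 51 percent figures reported in the text.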
We and others, including EPA in its own past management review of the program, recommended that the agency better track and compare the costs it actually recovers with the costs it could have recovered. Establishing performance measures to better track the outcome of cost recovery efforts is consistent with the Government Performance and Results Act of 1993, under which agencies must set such measures. We also previously recommended that the agency establish a goal to take earlier action on cases, rather than focusing just on taking action before its authority expires, because early action reduces the probability that a responsible party’s financial condition will decline, making cost recovery more difficult. Although EPA reports the amount of funds it obtains in cost recovery settlements in a given fiscal year, it does not compare this amount with the total amount of funds it could have recovered from this set of settlements. Such a comparison could allow the agency to better measure its performance on a consistent basis. Tracking its rate of recovery over time and the main reasons for fluctuations in the rate from year to year could help the agency better understand how well it is achieving recoveries and what improvements it could make in its recovery program. In the past, we showed that it is possible to compute such a measure. We reported that in fiscal year 1989 (the most recent year for which data were available at the time), responsible parties agreed to reimburse EPA for 59 percent ($116 million) of its program costs, leaving about $80 million in unrecovered costs. We recommended that EPA use this percentage as a performance measure to show the extent to which EPA has been reimbursed for its costs. However, the agency has raised two primary concerns about doing so. 
First, EPA is concerned that if it develops the percentage of dollars recovered, responsible parties may misinterpret the figure as the percentage EPA is willing to accept and not agree to pay a higher percentage during settlement negotiations. We believe that if EPA has an appropriate negotiation strategy and is willing to issue orders or pursue litigation when negotiations fail, then responsible parties’ knowledge of EPA’s performance measure should have little effect. Second, EPA notes that an increase or decrease in the percentage of costs it recovers each year may be based on factors outside its control. For example, in a given year, EPA could have a proportionately larger number of cases with insolvent parties, decreasing the percentage of recoveries that year. However, we believe that tracking increases or decreases in the percentage of recoveries compared with what EPA defines as potentially recoverable costs would account for these fluctuations because factors outside EPA’s control, such as insolvent parties, could be identified as not recoverable by EPA and taken out of the calculation. Without systematically tracking its rate of recovery and analyzing the reasons for differences in these rates, EPA cannot determine if the differences are due to internal factors that it can address, such as poor cost documentation or inexperience on the part of its negotiators, or external factors outside its control, such as the absence of financially viable parties. Under CERCLA’s statute of limitations, EPA must generally initiate cost recovery actions within 3 years after it completes a removal action or within 6 years after it begins the physical construction of a remedial action. EPA’s goal is to take action on all cases with cleanup costs of $200,000 or more within these time frames. EPA took cost recovery actions before the limitations period expired at 100 percent of the sites in fiscal year 1997 and at almost all sites in fiscal years 1996 and 1995 as well. 
EPA’s guidance encourages the regions to take action on cost recovery cases even earlier than this—either within 12 months after a removal action is completed or within 18 months after the construction of a remedy is initiated—but the agency does not regularly track how well the regions are meeting this guidance. Taking early action on cases is useful because the longer EPA waits to take an action, the greater the likelihood that it will lose evidence, the financial condition of the responsible parties will deteriorate, or the limitations period will expire. In 1995, we reported that limitations in EPA’s automated information and financial systems prevented cost recovery staff from relying on these systems to provide all of the data the agency needed to manage cost recovery actions. EPA’s ability to recover costs can be impaired if documentation of work performed and its costs cannot be located or if the information is inaccurate. To ensure that the information supporting cost recovery cases was accurate, staff had to perform excessively time-consuming and inefficient manual searches and reconciliations. For example, because EPA’s financial system could not record cleanup costs for each subcomponent of a site, called an operable unit, EPA staff had to assign costs to operable units manually. Also, although EPA had a system to electronically capture and store certain financial documents, such as invoices, showing the costs of work performed at sites, it did not have the capability to electronically store documents showing the types of work conducted at sites in all of its regions. Recently, EPA has taken actions to address these two issues. First, EPA updated its financial system in 1997. As a result, staff no longer have to manually assign costs to operable units. However, staff still have to manually assign costs entered into the system before the update.
Second, EPA is implementing an imaging system—called the Superfund Document Management System—to electronically store and retrieve documents in all of its regions. This action will reduce the labor-intensive manual process staff use to compile hardcopy documents. EPA also needs an accurate account of the costs it cannot recover, such as those spent at sites with no financially viable parties, in order to judge the success of its cost recovery efforts, forecast the amounts of future recoveries, and establish its budget requirements for the Superfund program. However, in 1994, we reported that the Superfund management information system produced reports that did not present an accurate picture of the costs that EPA cannot recover. In addition, we found that EPA could not regularly produce reports on the status of recoveries because the costs spent on cleanups were contained in the financial management system while the costs recovered were contained in the Superfund management information system and the two systems were not compatible. According to EPA cost recovery managers, the management reports that EPA uses to forecast future recoveries and determine budget requirements can understate the costs that EPA cannot recover at sites. When EPA regions determine that costs at a site cannot be recovered, the regions enter these costs as unrecoverable into the information system. Yet after making these initial entries, EPA could still incur additional costs at the site. According to cost recovery managers in EPA’s Policy and Program Evaluation Division, regional staff do not always include these additional unrecoverable costs in the system. To fix this problem, EPA plans to use a link between the financial accounting system and the Superfund management information system to automatically update the unrecoverable costs expended at sites. 
Also, according to EPA cost recovery staff, using the link between the two systems will allow EPA to regularly produce cost recovery status reports as needed. EPA cost recovery staff said that information on direct costs is being transferred between the two systems, but the transfer of information pertaining to indirect costs may not be completed until early in fiscal year 2000. EPA has not recovered billions of dollars because it has understated the amount of indirect costs it charges to responsible parties. Until EPA adopts a new method of allocating indirect costs to parties, it will continue to forgo federal funds that it could, in turn, use to accomplish more cleanups. Additionally, until EPA responds to our prior recommendation that it adopt a more meaningful performance measure that compares what it recovers with what it could have recovered in aggregate on an annual basis, tracks the measure over time, determines the major causes of significant fluctuations, and assesses the need for any actions to address identified problems, the agency cannot demonstrate its progress in recovering costs. Finally, EPA has responded to our past concerns and has modified its information systems to decrease its reliance on inefficient manual processes and provide better data on its recovery of costs. However, until the agency fully completes the transfer of cost data on site cleanups from its financial management system to its Superfund management information system, expected in early fiscal year 2000, the agency will not have all of the information it needs to determine the status of recoveries and unrecoverable costs and to accurately project future budget needs for the Superfund program. To improve EPA’s ability to recover cleanup costs from private parties, we recommend that the Administrator, EPA, ensure that the Superfund cost recovery program applies the agency’s new indirect cost rate, as soon as it is approved, as part of cost recovery settlements.
We met with EPA officials, including the Director of the Policy and Program Evaluation Division within the Office of Site Remediation and Enforcement and the Chief of the Program and Cost Accounting Branch in the Financial Management Division, the offices responsible for managing the cost recovery program, to obtain their comments on our discussion of cost recovery issues. EPA generally agreed with the content and presentation of information regarding its recovery of indirect costs and is considering establishing a performance measure to better evaluate the progress of the program. EPA questioned the need for a more formal goal to take earlier action on cost recovery cases, however. The agency, in conjunction with the Department of Justice and an independent accounting firm hired to review the methodology, expects to approve the new methodology for computing indirect costs by the end of September 1999. Subsequently, the agency could develop a new indirect cost rate to charge indirect costs to responsible parties. In relation to creating better performance measures to evaluate the program, the agency noted that currently, the recovery program managers annually make an estimate of the amount the agency expects to obtain as a result of cost recovery actions. The agency agreed to consider whether this estimate could serve as the basis of a performance measure for the program and whether EPA could track the amount obtained against this estimate. In addition, EPA provided us with more current cost recovery data through the end of fiscal year 1998, and we revised the relevant figures in our report accordingly. We also made several technical changes to the report based on EPA’s comments. EPA has responded to our past concerns—that it was not completing Superfund contract audits, not using independent estimates to set the best contract prices for the government, and not adequately controlling some contractors’ overhead costs.
However, its actions have been slow and some have not gone far enough to protect the government from exposure to unnecessary costs. EPA has reduced its backlog of required contract audits and is more frequently using its own estimates of what cleanup actions should cost to negotiate contract prices. However, EPA regions have some poorly prepared cost estimates and do not always effectively use them to negotiate the best prices for the government, in large part because some managers lack cost-estimating experience and training, as well as historical data on actual cleanup costs to help them develop estimates. In addition, while EPA has taken steps to reduce contractors’ high program support costs, these costs continue to be high for a majority of EPA’s new Superfund contracts. EPA is addressing some of these concerns through its “Contracts 2000” improvement team, but it does not have a plan with milestones for implementing corrective actions. At the time of our 1997 review, EPA had a backlog of more than 500 required Superfund contract audits. The purpose of these audits is to evaluate the adequacy of contractors’ policies, procedures, controls, and performance. The audits are necessary for effective management and are a primary tool for deterring and detecting fraud, waste, and abuse. An audit backlog increases the potential for problems to go undetected or uncorrected, especially if, for example, a contractor goes out of business before an audit is completed. Since that time, both EPA’s Office of Inspector General, which is responsible for periodically auditing the agency’s contractors, and the Defense Contract Audit Agency (DCAA), which conducts audits of EPA contractors when EPA is not the primary agency providing work and funding to the contractor, have reduced their backlogs and are trying to perform audits within defined time periods. 
For instance, staff within the Office of Inspector General stated that the office has established a goal to perform an audit within 2 years of when EPA requests it. For contractors that submitted the necessary information, the office was able to perform almost all of the audits within this time frame, and it plans to perform the remaining audits during fiscal year 1999 to be in full compliance with this goal. EPA is also working with the remaining contractors to obtain complete information in a timely manner. DCAA officials said that the agency began an initiative in the early 1990s to address its backlog and become current by 1997. It reached its goal and has been able to perform audits within 1 year of when larger contractors submit complete information and within 2 years of when smaller contractors do so. While we did not review the quality of these audits, conducting them in a more timely manner should help EPA to reduce the risk of fraud, waste, and abuse in the use of Superfund contract dollars. EPA is now generating independent estimates of what contract work should cost and is using them to negotiate lower contract costs. However, EPA’s estimates are still often lower than the final contract price agreed to by EPA—an indication that the estimates are of poor quality, according to the agency’s Federal Managers’ Financial Integrity Act Report. In a number of other cases, the final contract price matched the contractor’s estimate, an indication that EPA may not be negotiating for a better price. EPA has only recently begun to address the two barriers to better cost estimates that its contract managers identified: (1) their inexperience and insufficient training and (2) the lack of a database of past actual contract costs to help them better determine what future contracts should cost. EPA is designing corrective measures for these barriers but has had past problems in getting the regions to fully adopt such measures.
In our prior reports, we stated that EPA needed to develop its own estimates of what the work intended for its Superfund contractors should cost and use these estimates to negotiate the best contract prices for the government. This is a practice the U.S. Army Corps of Engineers (the Corps)—an agency with cost-estimating and contract management expertise—uses to manage environmental cleanup costs. In subsequent reviews of EPA’s contract management, we found that the agency had begun to develop such estimates, but their quality and use varied among the regions. In 1997, we reviewed 26 work assignments that EPA had issued to contractors and found that EPA had prepared cost estimates for 21, or about 80 percent, but did not routinely use these estimates to negotiate lower contract prices. EPA accepted the contractor’s estimate as the final contract price for each of the 21 assignments. In our current review, we found that EPA was using its estimates more effectively. We reviewed the 35 highest-dollar-value work assignments in three regions and found that a cost estimate had been prepared for all of the assignments and that EPA had accepted the contractor’s estimate as the final price in 10, or 29 percent, of the cases. According to EPA’s criteria, a key measure of the quality of EPA’s cost estimates is the closeness of the estimate to the negotiated final contract price. As figure 4.1 illustrates, there was a close match (within 15 percent) between EPA’s estimate and the final contract price for 18, or about half, of the assignments. (Percentages do not add up to 100 percent due to rounding.) For 11 assignments, EPA’s estimate was lower than the final price, and for 6 assignments, its estimate was higher. For 6 assignments, EPA overestimated the final price by 17 to 36 percent and a total of $769,000. For 11 assignments, EPA underestimated the final price by 15 to 101 percent and a total of about $2 million. 
EPA work assignment managers did not always document reasons for the differences, as EPA requires, even though comparing and documenting differences could identify problems with cost-estimating practices and alternatives for improvement. Of the 34 work assignment managers we interviewed, 15, or about 44 percent, said they lack sufficient experience to effectively and accurately develop estimates. As a result, these managers said, EPA’s estimators omit the costs of key work tasks, underestimate the experience and salary level of contractor personnel, and underestimate the extent to which subcontractors will be used. Six managers said they rely heavily on the contractors to determine what tasks should be included in a work assignment and how much the work should cost. Fifteen managers held the opinion that contractor personnel are better prepared and more qualified to estimate contract costs. One of the managers said that the contractor knows best and EPA will do whatever it takes to keep the contractor happy because the agency needs the contractor to perform the work. These attitudes raise questions about EPA’s willingness and ability to ensure that the agency is paying the best price for the work performed. These managers wanted more training on cleanups and cost estimating, as well as access to experienced estimators who could help the EPA managers improve their estimates. EPA’s internal reviews and our reports have also identified the need to adequately train regional contracting personnel as effective cost estimators and enhance their negotiation techniques. However, few of the managers we interviewed said they had received such training. Instead, most were using on-the-job experience to fill this training gap, but it was not effective because the managers develop only a few estimates each year.
The director of EPA’s Office of Acquisition Management noted that in the past, some regions hired trained estimators to develop cost estimates but had to discontinue this practice because of budget cuts. To compensate for their lack of experience and training, several work assignment managers have worked as a team with both the EPA contract manager and the cleanup project manager to develop estimates for sites. Two of the three regions in our review had established such teams, and their estimates were closer to the final contract prices than the third region’s estimates. In addition, some regions have made arrangements with other collocated federal agencies, such as the Corps and the Department of the Interior’s Bureau of Reclamation, to have their work assignment managers seek assistance from staff at these agencies with experience in developing cost estimates. More widespread use of this resource by EPA regions could help managers gain the training and experience they need to improve the quality of their cost estimates. In addition to inexperience, all 34 work assignment managers cited a lack of access to historical site-specific cost data as a problem that adversely affected their ability to develop accurate cost estimates. As early as 1992, an EPA contract management task force determined that the absence of a database of historical information on the types of cleanup tasks conducted at similar sites and the associated costs of those tasks hampered cost estimators. The task force concluded that EPA should develop such a database, and EPA’s Office of Inspector General reiterated this conclusion in a 1997 study, recommending that the agency either develop such a database or obtain access to similar databases from the Corps or private agencies that conduct cleanups. To date, EPA has not established this database. When EPA awarded its Superfund contracts, beginning in 1995, it created a contract management information system. 
While the primary objective of the system was to collect the current data needed to monitor the Superfund program’s overall resources, the agency subsequently decided that the system could also serve as the historical cost database that estimators need. However, at the time of our review, EPA was testing the system and had not determined how estimators would use it. In addition, EPA does not plan to enter historical data into this database; instead, it plans to start collecting data when the database becomes operational. Consequently, it will take several years to gather enough baseline data to support cost estimates. Furthermore, several work assignment managers noted that the system is designed to collect only summary statistical cost data on the contracts and not the detailed site-specific data they need for their estimates. According to several regional contract management staff, they need both current and historical site-specific task and cost information to develop quality estimates. The director of EPA’s Office of Acquisition Management noted that a limited EPA analysis had indicated that it might be too costly to collect data at this level of detail. Corps Superfund program managers reinforced the need for historical site-specific task information to support cost estimates. The Corps includes such data in its cleanup database, but this database may not be complete enough to meet EPA’s needs. For example, the Corps primarily conducts construction activities at a cleanup, while EPA manages other types of activities, such as overseeing the cleanup. In addition, the Corps primarily uses fixed-price contracts for its cleanup work, so it is more certain of the tasks the contractor will conduct and the costs it might incur. EPA, on the other hand, has used primarily cost-reimbursable contracts for cleanups, so it is less certain of the tasks to be covered and the costs it will incur under such a contract. 
Nevertheless, the Corps managers believe a historical database would help EPA better manage these uncertainties and develop more accurate estimates. EPA has taken two actions to identify problems with its cost-estimating procedures and design corrective actions. First, in response to our reports and the Office of Inspector General’s findings, in fiscal year 1998, EPA declared Superfund contract management, including independent government cost estimates, an agency-level weakness to be addressed and established a workgroup to develop corrective action plans and milestones. The group also identified other steps EPA could take, including conducting more in-depth reviews of regions’ cost-estimating procedures, designing solutions to any problems identified, sharing any lessons learned from this review among the regions, and providing work assignment managers with more training. Second, to implement some of the corrective measures, EPA, in June 1998, entered into an agreement with the Corps to conduct reviews of the regions’ cost-estimating practices and recommend potential improvements. The Corps plans to evaluate EPA’s cost-estimating policies and procedures, as well as the automated systems that could support cost estimating, and assess the extent to which EPA’s 10 regions are in compliance with these policies and procedures. As part of this effort, the Corps plans to determine the training needs of EPA’s contracting personnel. The Corps expects to submit a final report to EPA by early spring 1999, and EPA hopes to begin implementing any recommendations in September 1999. The agency is also waiting for the Corps’ report before it decides what types of historical information cost estimators need, whether and how to collect it, and how estimators can use it. While EPA has taken similar actions in the past, we continue to find the same problems with some estimates, demonstrating that the regions do not uniformly make improvements.
According to the director of Superfund Programs, the regions operate autonomously and do not always implement headquarters’ directions in the same way. To illustrate, he pointed out that the newly developed Superfund contract information management system was created by the programmatic side of EPA, and now the contracting side of the agency is developing its own contract information management system. Because the two groups did not work together, the agency has to try to link the two systems. The Superfund Assistant Administrator also acknowledged that the regions may not sustain improvements in their estimating practices. When EPA replaced its expiring Superfund contracts with the Response Action Contracts it now uses for cleanup actions, it wanted to correct several contract deficiencies. In particular, it wanted to reduce both the number of contracts it awarded and the high program support costs it was paying to contractors for items such as managers’ salaries, rents, computers, telephones, and reports. In the past, EPA put too many contracts in place and did not have enough work to give all of the contractors. Even if the contractors were conducting relatively little cleanup work, they were continuing to incur monthly program support costs. As a result, a high percentage of the total contract costs was going to cover these administrative expenses rather than actual cleanup costs. Although EPA has awarded fewer new contracts, it may still have too many contracts in place compared with the current and projected future Superfund cleanup workload, and the program support costs for 10 of the 15 new contracts continue to be high. These concerns, however, may be only symptoms of more systemic questions about the ways in which EPA establishes contracts for Superfund work. EPA’s current “Contracts 2000” initiative may begin to address some of these questions. 
We are concerned, however, that EPA has not been able to provide documentation that clearly describes overall strategies and time frames for implementing changes from the initiative. At the time of our current review, EPA had awarded 15 of its new Response Action Contracts, valued at a total of more than $60 million. When EPA awards a contract, it specifies that the contractor will obtain up to a certain dollar amount of cleanup work over a given time period. As the contractor conducts work, it incurs costs—both direct costs that can be attributed to an individual site and indirect costs that are not site specific. EPA pays the contractor for both types of costs. EPA tracks the amount of non-site-specific costs it pays as a percentage, or rate, of the total contract costs that it covers. In the past, we have expressed concern that contractors’ program support costs, as a percentage of total contract costs, have been too high. Since the mid-1990s, EPA has used 11 percent as its target for program support costs. In our 1997 review, however, we found that the program support cost rates for expiring Superfund contracts ranged from 15 to 22 percent over the life of the contracts, in part because EPA did not control these costs in the early years of the contracts. We also reported that some of the new Response Action Contracts were continuing this pattern, with program support costs of 21 to 38 percent of total costs, making it more difficult for EPA to meet its target rate of no more than 11 percent over the life of these new contracts. In August 1998, we reported that EPA Superfund contractors were spending about 29 percent of their total contract costs for program support. During our current review, we found that the program support cost rates for a majority of the new contracts were still high. As of September 1998, EPA reported that the rates for only 5 of the 15 contracts were below EPA’s target of no more than 11 percent, ranging from about 7 to 10 percent. 
The rates for the remaining 10 contracts ranged from about 16 to 59 percent of total contract costs. One of the primary reasons for these high program support cost rates continues to be that EPA has too many contracts in place compared with the available cleanup workload. According to several EPA contracting officers, the agency expects such high rates for new contracts until the agency has had time to award enough work to all of the contractors. The officials predict that as EPA awards more work assignments, these program support cost rates should decrease. However, our prior work demonstrated that although EPA made this same prediction for its expired contracts, their rates remain high. When EPA began replacing its expiring contracts with new contracts in 1995, it had to decide how many contracts to award. In September 1992, it used the number of work assignments under its 45 expiring contracts to project the number of work assignments it would have in the future. Because the agency expected the number of work assignments to remain steady, it believed that if it reduced the number of contracts it awarded, it could give these contractors more work, and the program support cost rates would decrease. EPA decided to reduce the number of contracts from 45 to 22. The agency had determined that it should have at least two contracts in each region, and perhaps three in large regions so that, among other things, contractors would have to compete for work, helping to keep costs down. In reality, however, contractors do not compete for work assignments; rather, EPA regional contracting officials attempt to distribute the work assignments so that each contractor receives a fair share of the work. Subsequently, EPA decided to award only 19 of the 22 planned contracts (three regions will have only one contract) because it no longer thinks it will have the workload it originally predicted. However, EPA may still have more contracts in place than it needs.
While uncertainty exists about how many sites will be included on the National Priorities List in the future, the agency has been listing fewer sites in recent years. For example, EPA proposed about 30 sites during fiscal year 1998, compared with an average of about 75 sites in earlier years. Thus, the likely number of cleanups will be significantly smaller than EPA originally estimated. Although EPA headquarters program managers have said that the agency hopes to add an average of about 40 new sites annually to the program beginning in fiscal year 1999, the four EPA regions with the highest Superfund workload indicated that, as the states take on greater cleanup responsibilities, fewer sites will enter the program. With fewer sites, contractors will have less work and EPA will have less chance to reduce its program support cost rate. EPA will soon have an opportunity to review the number of contracts it should have in place. EPA designed the current Superfund contracts to last 5 years, with an option to renew them for another 5 years. Several of the current contracts will soon be 5 years old, and EPA will have to determine whether to renew them. A representative from EPA’s Office of Acquisition Management said the office plans to consider a number of factors, including the uncertainty over the number of sites that will be placed on the National Priorities List, the contractor’s performance, and the Corps’ involvement in cleanups when determining which contract options to renew. In the past, EPA classified as program support costs the start-up costs that contractors incurred to prepare their personnel and administrative systems to perform the projects under their contract. These start-up costs are known as mobilization costs and are technically part of a contractor’s overhead costs. 
Under the new contracts, EPA excluded reporting these costs (a total of more than $1 million) in the program support category, because it viewed them as one-time costs that should be tracked separately. Nevertheless, these costs are program support costs, and when they are included, the 15 new contracts have average program support costs ranging from about 7 to 76 percent, rather than the 7 to 59 percent reported by EPA. Several senior managers in EPA’s Office of Acquisition Management agree that mobilization costs should be included in calculations of program support costs. As noted, the program support cost rate for 10 of EPA’s 15 new contracts exceeds EPA’s target rate of no more than 11 percent. The rates range from 16 to 76 percent, with a median of 28 percent and an average of 36 percent. In part because of concerns about contractors’ high program support costs, EPA has required the 15 contractors to provide more detailed breakdowns of their costs to help the agency better monitor and control costs. EPA has required the contractors to break down costs that it cannot assign to any one site, such as program support costs, into defined categories (e.g. program, administrative, and technical support) and track the costs by these categories. In implementing this requirement, EPA provided funds for the contractors to set up these categories and tracking mechanisms, and in doing so, took some actions that were inefficient and increased the support costs. First, rather than creating the software needed to set up and track the categories and providing it to each contractor, EPA paid each contractor to develop its own software. While EPA did not track the dollars devoted to developing software, senior officials in the Office of Emergency and Remedial Response told us that a substantial portion of the mobilization costs was devoted to this effort. 
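The arithmetic behind counting mobilization costs in the program support rate is straightforward: the rate is the non-site-specific costs EPA pays expressed as a share of total contract costs, and moving mobilization into the numerator raises it. The sketch below is a hypothetical illustration; the dollar figures are invented for the example and do not reflect actual EPA contract data, and whether mobilization also changes the total used as the denominator is not specified in the report, so the sketch keeps the denominator fixed.

```python
def support_cost_rate(non_site_costs, total_contract_costs):
    """Non-site-specific (program support) costs as a share of total contract costs."""
    return non_site_costs / total_contract_costs

# Hypothetical contract figures (not actual EPA data).
total = 1_000_000        # total contract costs over the period
support = 520_000        # costs EPA reports in the program support category
mobilization = 80_000    # one-time start-up costs EPA tracks separately

rate_reported = support_cost_rate(support, total)             # excludes mobilization: 0.52
rate_full = support_cost_rate(support + mobilization, total)  # counts mobilization as support: 0.60
print(f"{rate_reported:.0%} reported vs. {rate_full:.0%} with mobilization included")
# → 52% reported vs. 60% with mobilization included
```

The direction matches the report’s finding: including mobilization pushes the high end of the contracts’ reported rates from 59 to 76 percent.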
Second, while several contractors have contracts in multiple locations, EPA typically paid each location to develop its own software, rather than just paying the parent contractor to develop one system and requiring the contractor to distribute the system. In some cases, however, EPA required the parent contractor to share the software it developed with its various regional offices. For example, Region III was able to save up to $90,000 on one contract by requiring the parent contractor, which had received funds from another EPA region to develop the software there, to provide that software to its office within Region III. While assessing EPA’s progress in correcting these past contract management problems, we determined that the problems may be symptoms of more systemic issues associated with EPA’s Superfund contracting practices. The problems raise the following questions: Could the agency more quickly and aggressively test and implement alternative types of contracts in addition to or instead of using cost-reimbursable contracts as it now does? Cost-reimbursable contracts, under which EPA agrees to pay all of a contractor’s allowable costs, place most of the financial risk on the government because the work that needs to be performed is, to varying degrees, uncertain. This uncertainty prevents EPA from accurately predicting the costs involved in performing the work. To a limited extent, EPA has effectively used fixed-price contracts for clearly defined and more routine cleanup actions. These contracts reduce the financial risk to the government because the parties agree on a price for the contractor’s activities and the contractor bears the risk of accomplishing the activities at this price. Because of its success to date, according to the director of EPA’s Office of Acquisition Management, the agency plans to use more fixed-price contracts in the future.
Meanwhile, the Office of Management and Budget has also been urging EPA and other federal agencies to make a more concerted effort to use performance-based contracts. These contracts establish a price structure for a contractor’s services that rewards the contractor for superior performance, allowing the government to better ensure the receipt of high-quality goods and services at the best price. EPA has a few ongoing performance-based demonstration contracts that appear to be achieving positive results. Is it cost-effective for EPA to duplicate the infrastructure necessary to manage contracts in each of its 10 regional offices? Are there new and more effective ways to build more competition into EPA’s contracting process as a means to better control costs and ensure quality, such as competing each work assignment? Has EPA lowered its costs by using the Corps for more of its cleanup work, and, if so, how much of the cleanup workload should the Corps assume? Because the Corps specializes in and conducts a significant amount of construction contracting for the federal government, it may be better equipped than EPA to manage Superfund construction contracts. EPA’s Contracts 2000 initiative—an outgrowth of the Long-Term Contracting Strategy that the agency has been using to put in place necessary contracts as well as to assess and update its contract management practices—may address some, but not all, of these questions. EPA has identified various contracting issues, including the type and number of contracts used, that it needs to address. However, the initiative does not consider opportunities for making more use of competition and of the Corps, nor does it address the need for a contract management office in each of the 10 regions. Furthermore, EPA has not been able to provide us with documentation that clearly describes (1) the strategy for evaluating these areas and (2) the time frames for implementing the Contracts 2000 team’s decisions about improvements. 
As a result, we are concerned about whether EPA will move quickly enough before it extends the existing contracts for another 5 years and develops a strategy for ensuring that any changes become permanent. Our progress reviews over the years have consistently shown that without sustained high-level management attention, EPA has not always succeeded in implementing and sustaining past contracting reforms. EPA’s and DCAA’s progress in reducing the contract audit backlog will reduce the government’s risk of contractor fraud, waste, and abuse of Superfund dollars. EPA’s more frequent use of independent government estimates to negotiate the final prices for contracted work should help to ensure that the government gets the best possible prices for this work. However, until EPA addresses its cost estimators’ lack of experience and training in developing estimates, the government is at risk of paying too much for some cleanup work. Enlisting the Corps to assess the EPA regions’ cost-estimating practices and data needs and to recommend training and other improvements has the potential to correct the recurring problems that we find. Sustaining such improvements in the regions over the long term has also been a problem. Unless EPA establishes some system to monitor the regions’ implementation of such changes—by, for example, routinely testing the regions’ cost-estimating and price-negotiating practices during formal regional reviews—we may continue to find problems. Also, cost estimators still do not have access to historical site-specific cost data, and until they do, they cannot generate the most accurate estimates possible. Because EPA’s contract management information system most likely will not provide the detailed historical site-specific cost data estimators say they need and will not be available in the near future, the agency will have to consider other cost-effective alternatives for providing these data. 
In the interim, the agency could take two effective measures to generate better estimates: making broader use of regional teams that include a site’s cost estimator, contract manager, and program manager, and giving estimators access to experienced estimators and historical databases maintained by other agencies within a region’s geographical area. The agency has also taken important steps to reduce the program support costs that it pays contractors, particularly reducing by more than half the number of contracts that it has in place. However, without taking additional steps, such as deciding not to renew some contracts because the contractors have performed poorly or not enough work is available for the remaining contractors, the agency will continue to pay these high administrative expenses, making less funding available for cleanup. Finally, EPA’s Contracts 2000 initiative offers the agency the opportunity to assess and improve its overall contracting practices, allowing it to make wider use of the Corps in cleanup work and enter into more fixed-price or performance-based contracts. However, without an implementation strategy with milestones to make needed improvements agencywide, EPA will not establish and sustain better contracting practices.
To build on EPA’s momentum to address the contract management concerns we have identified, we recommend that the Administrator, EPA, instruct the director of the Office of Acquisition Management to work with the Assistant Administrator for Solid Waste and Emergency Response to (1) develop procedures to ensure that the corrective actions EPA implements in response to recommended actions from the Corps result in improved cost estimates; (2) periodically review whether the regions have consistently implemented the corrective actions; (3) identify a cost-effective method of providing estimators with access to the detailed historical site-specific cost data they need to generate more accurate estimates; (4) complete a review of the number of contracts the agency needs to keep in place, given the future cleanup workload, and do so before it loses the opportunity to close out some of the contracts whose base periods are expiring, when the agency can choose whether to exercise its option to renew them for another 5 years; and (5) ensure that the Contracts 2000 initiative results in a comprehensive strategy, with specific tasks and milestones for their completion, for improving the agency’s contract management practices. We met with officials from EPA’s Office of Acquisition Management and Office of Solid Waste and Emergency Response, including the director of Superfund Programs and the director of Contract Management, who generally agreed with the basic findings and recommendations of the report. The agency provided various clarifying and technical corrections, which we incorporated in the report as appropriate. EPA agreed with our recommendations to periodically review the regions’ implementation of cost-estimating corrective actions and noted that teams from both the Office of Solid Waste and Emergency Response and the Office of Acquisition Management could monitor implementation during their regional reviews.
In response to our recommendation to use historical data to improve cost estimates, EPA expressed concern that some estimators relied too much on outdated historical data, leading to inaccurate estimates. The agency stated that it was more critical to focus on helping estimators learn to develop a more detailed breakdown of site-specific tasks and activities to be conducted and to cost out these activities, rather than spending the resources to build a nationwide database. We agree that EPA estimators need to develop detailed site-specific tasks to improve their estimates because our work demonstrated that they often leave out key steps when developing their estimates, as we point out in this report. However, because the estimators themselves and the Corps identified historical data as a critical component for accurate cost estimating, we continue to call on the agency to also provide historical data that estimators can use as a baseline to cost out these specific tasks once estimators have developed them. Furthermore, our recommendation calls on the agency to identify a cost-effective method for providing these data but does not prescribe that the agency build a nationwide database. In regard to our concerns about contractors’ high program support costs, EPA recommended that we exclude two relatively new Superfund contracts from our analyses of program support costs because the agency has not had enough time to assign work to these contracts. Including such contracts makes program support costs high as a percentage of cleanup costs. EPA expects these percentages to decrease over time as contractors obtain work assignments. We did not adjust our program support cost analyses in response to EPA’s comments but did note the agency’s point about new contracts in our report. We believe it is critical that EPA seek to reduce contractors’ program support costs from the beginning of a contract. 
As our 1997 report demonstrated, EPA was not able to meet its target of 11 percent for many of the expiring Superfund contracts, in part because the percentage of program support costs was so high for contracts in the early stages and EPA did not have enough cleanup work to award to contractors to decrease these costs over time. As we have noted in this report, the likely number of future cleanups could be significantly smaller than the number EPA originally estimated, making it difficult to reduce the program support cost percentages for its current contracts over time. Finally, the agency agreed with our recommendation that it use its Contracts 2000 initiative to improve contract management and provided examples of various efforts the agency has undertaken. These included (1) exploring the use of different types of contracts, (2) having each region use a performance-based contract for a pilot Superfund cleanup project, and (3) evaluating contractors’ performance before assigning them work.
Pursuant to a congressional request, GAO provided information on: (1) the efforts that the Environmental Protection Agency (EPA) and the other federal agencies with major cleanup responsibilities have made to set priorities for spending limited cleanup funds at the hazardous waste sites posing the highest risks to human health and the environment; (2) EPA's actions to recover its expenditures for cleanups from the parties that are legally liable for the contamination; and (3) EPA's efforts to better control contractors' cleanup costs. GAO noted that: (1) for several years, GAO has included the Superfund program on its list of federal programs that pose significant financial risk to the government and potential for waste and abuse; (2) agencies have corrected some of these problems, but those that remain are important enough to prevent GAO from removing Superfund from the high-risk list; (3) 4 of the 5 agencies GAO reviewed--EPA, the Department of Agriculture, the Department of Defense, and the Department of Energy (DOE)--are setting cleanup priorities on the basis of the relative risk that sites pose to human health and the environment; (4) EPA, Agriculture, and Defense set nationwide priorities for most of their sites; (5) however, EPA may not know about all high-risk sites because states are taking on more cleanups and deciding, often on the basis of factors other than risk, which sites to refer to EPA for possible listing; (6) each DOE facility considers risk and other factors when setting priorities among its competing environmental management projects; (7) however, cleanups at one facility do not compete with those at another facility on a nationwide basis; (8) the Bureau of Land Management has not set nationwide cleanup priorities because it has not yet developed an overall cleanup strategy or an inventory of its hazardous waste sites, estimated to cost billions of dollars to address; (9) although EPA has succeeded in getting responsible parties to conduct 70 
percent of long-term Superfund cleanups, it has been less successful in recovering its costs from responsible parties when it conducts a cleanup; (10) EPA has lost the opportunity to collect almost $2 billion it spent on cleaning up sites since the program began because it excluded large portions of its indirect costs when it calculated what costs to assess parties; (11) while EPA has developed a new method of calculating these costs that could increase their recovery, the agency has not implemented it; (12) EPA has eliminated almost all of its backlog of 500 required Superfund contract audits, and is trying to complete the new audits on time; (13) however, some of EPA's actions have been slow and have not gone far enough to address GAO's concerns that the agency was not using its own estimates of what contract work should cost to negotiate the best contract price for the government or to control contractors' program support costs; and (14) less money is going toward the actual cleanup of high-risk sites, and excessive amounts are still being spent on administrative support costs.